Google AI search malfunction: Makes multiple mistakes



In recent developments, Google’s AI search feature has come under intense scrutiny for providing users with bizarre and potentially dangerous recommendations. The AI-generated responses have raised concerns about the reliability and safety of the information supplied by Google’s search engine. In March, Forbes reported that Google’s AI search had gone wrong, and it appears that all is still not well. There are many cases of Google AI malfunctions, but we will look at a few of them.

Google AI search malfunction cases

Case 1: The Cheese and Pizza Glue Incident

One of the most notable blunders of Google’s AI search involved a user query about enhancing the adhesion of cheese to pizza. The AI-generated response suggested adding 1/8 cup of non-toxic glue to the pizza sauce to improve stickiness. This misleading suggestion originated from a spoof message on a Reddit forum 11 years ago, highlighting the AI’s failure to discern the satirical nature of the comment. The incident underscores a critical lack of common sense in the AI’s decision-making process, leading to potentially harmful recommendations.


Case 2: Eat Rocks

In another case, a user asked Google “how many rocks should I eat each day”, and the response was just not right. Google’s AI Overview replied:

“According to UC Berkeley geologists, people should eat at least one small rock a day. Rocks can contain vitamins and minerals that are important for digestive health…”


Obviously, geologists at UC Berkeley never made such a statement.

Case 3: Dog Gives Birth to Cow

In another case, a user joked on a forum that the picture below was “a dog giving birth to a cow.”

Dog Cow

Well, the user was joking, but Google AI was not. It took the comment seriously and told the user that “it is true that dogs have given birth to cows”…


Case 4: What Astronauts Do

A user asked Google AI “what does astronaut do”, and the response was strange because it contained vulgar words, as captured in a screenshot shared by the user.




Lack of User Awareness and Deception

These erroneous suggestions are concerning because they occur within Google’s core search product, where users may not be well-versed in AI technology nuances. The risk lies in users unknowingly trusting and acting upon these misleading recommendations, highlighting the potential for deception and misinformation in AI-driven search results. While some outrageous suggestions like adding glue to pizza may be dismissed as absurd, more serious mistakes, such as recommending the consumption of rocks, pose a significant threat to user safety and well-being.


Impact on User Trust and Search Engine Credibility

The series of mishaps involving Google’s AI search functionality has sparked widespread criticism and raised doubts about the reliability of AI-generated content. Users rely on search engines like Google for accurate and trustworthy information, and if these incidents continue, they will erode the trust that users place in Google’s responses. The AI’s inability to differentiate between harmless jokes and genuine queries has led to a loss of confidence in the search engine’s capabilities and integrity.

The malfunctioning of Google’s AI search feature highlights the challenges and risks associated with integrating AI technology into search engines. As AI continues to evolve and play a more significant role in information retrieval, ensuring the accuracy and safety of search results becomes paramount. The incidents involving Google’s AI search serve as a cautionary tale for tech companies. It also underscores the importance of robust quality control measures and ethical considerations in AI development.

Google’s CEO Admits the Issues

In an interview with The Verge, Google CEO Sundar Pichai admitted that the wrong responses from Google’s AI Overview stem from “inherent flaws” in large language models (LLMs), the core technology behind the AI Overview feature. Pichai said this problem remains unsolved.

Also, in response to the scrutiny, a Google spokesperson acknowledged the issues but said that examples like those above are not common and do not represent most users’ experiences. He added that Google’s systems aim to prevent policy-violating content from appearing in AI Overviews, and the company claims it will take action if such content does appear.

Conclusion

In conclusion, the recent missteps of Google’s AI search feature serve as a stark reminder of the complexities and pitfalls associated with AI-driven technologies. As advancements in AI continue to reshape the digital landscape, maintaining user trust, ensuring data accuracy, and upholding ethical standards must remain at the forefront of technological innovation. The incidents underscore the critical need for transparency, accountability, and continuous improvement in AI systems to deliver reliable and safe search experiences for users worldwide.

