- cross-posted to:
- hackernews@lemmy.smeargle.fans
Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.
Google Search isn't hallucinating here, though.
It instead shows that LLMs just reproduce the training data they are supplied with. The "glue on pizza" answer, for example, traces back to a comment a Reddit user called FuckSmith made roughly 11 years ago.
What do you mean by that? This isn't some secret; it's literally how LLMs work, lol. What people mean by "hallucinating" is when LLMs create "facts" that aren't facts at all, whether it's this genius glue-pizza recipe or any other wild recombination of the model's source material. The cooking analogy actually works well here: all the information the model was fed is the ingredients, and it just spits out various recipes based on those ingredients, with no guarantee that any of them are actually edible.
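To make the analogy concrete, here's a toy sketch in Python. It's a word-level Markov chain, not a real LLM, and the tiny corpus is made up for illustration, but it shows the same mechanism: a model that only learns which words follow which can serve up "recipes" it was never explicitly given.

```python
import random
from collections import defaultdict

# Toy "training data" standing in for the scraped corpora a real LLM is fed.
# (Purely illustrative; real models learn statistics over billions of tokens.)
corpus = (
    "add some glue to the pizza sauce to give it more tackiness "
    "add some cheese to the pizza sauce to give it more flavor"
).split()

# Learn which words follow which word: the "ingredients".
model = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    model[current_word].append(next_word)

# Generate a "recipe" one word at a time. Nothing here checks whether the
# output is true or edible; the model can only recombine what it was fed.
word = "add"
output = [word]
for _ in range(12):
    followers = model.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```

Every sentence it produces is stitched together from the corpus, which is why it can confidently "recommend" glue: the words were in the ingredients, and nothing in the process checks whether the resulting dish is edible.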
There are a lot of people, including Google itself, claiming that this behaviour is an isolated incident and basically blaming users for trolling them.
https://www.bbc.com/news/articles/cd11gzejgz4o
I was going by the idea of "hallucinations" being output unrelated to the input query, not material reproduced straight from the training data, as with the glue pizza.
Your link does not match your statement.
That’s precisely what they are saying.
I'm sorry, but reading this as "Google blames users for trolling them" is either pure mental gymnastics or mental illness.