Wikipedia, the Free Encyclopedia’s Post

🤖 Ever wondered why AI outputs are sometimes incorrect? This is called an AI hallucination: when an artificial intelligence system presents false or misleading information as fact. The term borrows from human hallucinations, and it isn't just a tech quirk; it affects how much we can trust digital systems.

One of the primary causes is source-reference divergence: even when working from accurate sources, AI models can combine training information in ways that are not faithful to the provided source. In one humorous scientific exploration of this phenomenon, ChatGPT was fed the false premise that churros, the fried-dough pastries, could be used as surgical tools. Following that premise, the AI produced a detailed narrative claiming that “a study published in the journal Science” found the dough’s pliability made it suitable for crafting surgical instruments.

AI hallucination is still not fully understood, but researchers and developers are actively working to reduce it, for example by having the AI check its answers against web search results or by having multiple AIs debate until they reach a consensus. See more examples of AI hallucination ➡️ https://w.wiki/6Hz9
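For the curious, here is a minimal sketch of the consensus idea mentioned above: ask the same question several times (or ask several models) and only trust an answer that a clear majority agrees on. It is an illustration only; the ask_model function is a hypothetical stand-in for a real LLM API call.

```python
import random
from collections import Counter

def consensus_answer(ask_model, question, n_samples=5, min_agreement=0.6):
    """Sample several answers and accept one only if a clear majority agrees.

    If no answer reaches the agreement threshold, return None so the caller
    can treat the output as a possible hallucination.
    """
    answers = [ask_model(question) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    if count / n_samples >= min_agreement:
        return best
    return None  # no consensus: flag as unreliable

# Toy stand-in for a real model call (a real system would query an LLM API).
def toy_model(question):
    return random.choice(["Paris", "Paris", "Paris", "Lyon"])

print(consensus_answer(toy_model, "What is the capital of France?"))
```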

  • Image: Futuristic graphic of a human mind contained within a blue digital space, overlaid with the post’s opening text.
