AI-UH OH

AI hallucinations pose ‘direct threat’ to progress of humanity as experts warn everyone to avoid dangerous mistake

AI should not be blindly trusted


ARTIFICIAL intelligence is a powerful tool, but some of its built-in flaws could lead to dangerous situations, experts have warned.

One major issue with AI is that it is capable of generating false information and presenting it as true, a flaw referred to as an AI hallucination.

Researchers are warning people not to believe everything AI says

Researchers at the Oxford Internet Institute have reviewed examples of AI hallucinations and concluded that people need to watch out for mistakes when using the tech.

The findings were published in the journal Nature Human Behaviour.

Specifically, the researchers looked at AI in the form of "Large Language Models", the technology chatbots use to draw on vast amounts of information and condense it into a response.

However, AI can be incorrect in its responses while still sounding convincing.

These outputs are AI hallucinations because the tech is generating false content and presenting it as accurate.

“LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact,” the paper explained.

The paper explained that humans have put too much trust in AI, treating everything it says as the exact truth.

Researchers are warning people not to believe everything AI says, and suggest fact-checking the information it gives you as a safety measure.

Believing everything AI says can be dangerous because, in some cases, it could steer people into unfortunate situations.

“People using LLMs often anthropomorphize the technology, where they trust it as a human-like information source,” said Professor Brent Mittelstadt, a co-author of the paper.

“This is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answer seemingly any question with confident-sounding, well-written text.

“The result of this is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.”

For these reasons, the researchers said that AI hallucinations are a direct threat to the progress of humanity if humans stop drawing their own conclusions.

They also warned that AI hallucinations are a direct threat to science and scientific truth.
