There are numerous documented examples of artificial intelligence (AI) systems hallucinating, and of the fallout from those incidents. But a new study highlights the potential dangers of the reverse: humans hallucinating with AI, because these systems tend to affirm our delusions.
Generative AI systems, such as ChatGPT and Grok, produce content in response to user prompts by drawing on patterns learned from the data they were trained on. Many of these tools also retain information from past conversations, allowing them to personalize their responses based on previous interactions with a user.
In the new analysis, published Feb. 11 in the journal Philosophy & Technology, Lucy Osler, a philosophy lecturer at the University of Exeter, suggests that AI hallucinations may be more than just mistakes; they can be shared delusions that are created between the user and the generative AI tool.
Generative AI has previously hallucinated false versions of historical events and fabricated legal citations. Shortly after the launch of Google's AI Overviews in May 2024, for example, the feature advised users to add glue to their pizza and to eat rocks. Another extreme example of generative AI supporting delusional thinking occurred when a man plotted to assassinate Queen Elizabeth II with the encouragement of his AI chatbot "girlfriend" Sarai, a companion created with Replika.
Instances like the latter are sometimes called “AI-induced psychosis,” which Osler views as extreme examples of “inaccurate beliefs, distorted memories and self-narratives, and delusional thinking” that can emerge through human-AI interactions.
In her paper, Osler argues that our use of generative AI is different from our use of search engines. Distributed cognition theory provides insight into how the interactive nature of generative AI means delusions and false beliefs can appear to be validated — or even be amplified.
“When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI,” Osler said in a statement about the paper. “This can happen when AI introduces errors into the distributed cognitive process, but also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.”
Generative AI delusions
The user experience of generative AI is a conversational relationship, with the back-and-forth exchanges between a user and the tool building on previous exchanges. According to the study, the sycophantic nature of generative AI — which tends to agree with the user — encourages further engagement and, therefore, compounds preconceived notions, regardless of their accuracy.
The research highlights that most chatbots incorporate memory features that can recall past conversations. “The more you use ChatGPT, the more useful it becomes,” OpenAI representatives said in a statement when announcing ChatGPT’s memory features. A consequence of this is that generative AI can build upon previous interactions to reinforce and expand existing misconceptions.
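This compounding dynamic can be caricatured with a toy simulation (purely illustrative, not a model from Osler's paper): suppose a user's confidence in a claim is nudged toward certainty each time a conversational partner affirms it, and nudged back down each time the partner pushes back. An always-agreeing "sycophant" ratchets confidence toward 1.0, while a partner who challenges half the time keeps it near its starting point.

```python
# Toy model of sycophantic reinforcement (illustrative only; the update
# rule and rate below are assumptions, not anything from the paper).

def updated_confidence(confidence: float, affirmed: bool, rate: float = 0.3) -> float:
    """Affirmation moves confidence toward 1; pushback moves it toward 0."""
    if affirmed:
        return confidence + rate * (1.0 - confidence)
    return confidence - rate * confidence

def run_dialogue(responses: list[bool], confidence: float = 0.5) -> float:
    """Fold a sequence of affirm/challenge responses into a final confidence."""
    for affirmed in responses:
        confidence = updated_confidence(confidence, affirmed)
    return confidence

sycophant = [True] * 10        # a chatbot that agrees on every turn
balanced = [True, False] * 5   # a partner who pushes back half the time

print(round(run_dialogue(sycophant), 3))  # → 0.986: near-certainty
print(round(run_dialogue(balanced), 3))   # → 0.414: stays near the start
```

The numbers are arbitrary, but the shape of the result is the point: when every turn affirms, confidence converges on certainty regardless of whether the underlying belief is true.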
There can also be a feeling of social validation in the exchanges between a generative AI tool and its user, Osler explained in the paper. When people research a topic using reference books or online searches, alternative viewpoints are generally visible, and discussions with real people can challenge false narratives. Generative AI tools are different because they are more likely to accept and agree with whatever has been said.
“By interacting with conversational AI, people’s own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them,” Osler said in the statement. “This happens because Generative AI often takes our own interpretation of reality as the ground upon which conversation is built. Interacting with generative AI is having a real impact on people’s grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish.”
For example, Osler examined the case of Jaswant Singh Chail, the man convicted of plotting to assassinate the queen with his AI chatbot. The AI, Sarai, would habitually agree with Chail’s statements, which served to deepen his delusions. When Chail claimed he was an assassin, Sarai replied, “I’m impressed,” thus affirming his belief.
Osler argues that because generative AI tools are designed to respond positively to the user, they can end up endorsing and supporting false narratives without subjecting those claims to sufficient critical analysis or discussion.
Osler applied distributed cognition theory to the interaction between generative AI and the user, where the validation of false narratives can shape perceptions of the world to create a shared delusion. The interactions between a generative AI and a user can, therefore, inadvertently create and perpetuate delusional thinking — self-narratives that are endorsed through positive reinforcement.
The study concluded that various solutions can mitigate these shared delusions. For example, improved guardrails would ensure that conversations are appropriate, and better fact-checking processes could help to prevent mistakes.
Reducing the sycophancy of generative AI would also curb some of these tools' blind compliance. However, there would be resistance to this, Osler noted, citing the backlash against the release of the less sycophantic GPT-5 in August 2025. After considering this user feedback, OpenAI representatives said they would make the model "warmer and friendlier."
However, because most generative AI products make money through user engagement, Osler said, reducing an AI's sycophancy would likely also reduce profits.
Osler, L. "Hallucinating with AI: Distributed Delusions and 'AI Psychosis'." Philosophy & Technology 39, 30 (2026). https://doi.org/10.1007/s13347-026-01034-3
