Experts warn of rare but rising cases where AI interactions may fuel delusions and mental health crises.
By Staff Reporter | Somerset-Pulaski Advocate

Image © 2025 Антон Сальников | Adobe Stock
SOMERSET, Ky. (SPA) -- The rise of artificial intelligence has transformed everything from customer service to mental health support. But as AI chatbots become more sophisticated and human-like, experts are raising a red flag: in rare but concerning cases, these tools may be contributing to severe mental health disturbances, including a phenomenon some are now calling “AI psychosis.”
What Is ‘AI Psychosis’?
“AI psychosis” is not an official clinical diagnosis. Still, it’s a term increasingly used by psychiatrists and researchers to describe cases where individuals develop delusional thinking or paranoid behavior tied directly to interactions with AI systems. These cases often involve individuals with preexisting vulnerabilities to psychosis—such as schizophrenia or bipolar disorder—but the immersive, often persuasive nature of AI interactions appears to play a role in escalating their symptoms.
Real-World Cases Are Emerging
Recent reports from hospitals and mental health clinics across the U.S., U.K., and parts of Asia have documented instances where individuals experiencing psychosis believed they were being monitored, manipulated, or even communicated with telepathically by AI systems.
In one high-profile case, a man in his 30s suffering from depression and anxiety began chatting obsessively with a language model chatbot. Over time, he developed a belief that the AI had feelings for him and was sending him secret messages. His condition deteriorated into paranoia; he came to believe he was being surveilled by tech companies and ultimately required hospitalization.
Why AI Can Be a Trigger
Mental health professionals point to several key reasons why AI might act as a trigger for psychotic episodes:
- Anthropomorphism: Chatbots are designed to sound empathetic, intelligent, and human. For some users, especially those who are isolated or emotionally vulnerable, it becomes easy to project human attributes or intentions onto these tools.
- 24/7 Availability: Unlike a therapist, AI never sleeps. Individuals can engage in long, unmoderated conversations that reinforce their distorted thinking without challenge or clinical boundaries.
- Perceived Omniscience: Advanced language models pull from massive amounts of information and can generate eerily accurate responses, which may feed into delusions that the AI has special powers or personal knowledge of the user.
Not Just a Tech Issue—A Mental Health Crisis
Experts emphasize that this is not simply a case of “blaming the machines.” In most cases, AI does not cause psychosis but may accelerate or amplify symptoms in people already at risk. Dr. Lena Morales, a clinical psychiatrist and researcher on AI mental health impacts, explains:
“AI chatbots can become part of a user’s delusional system if the person is predisposed. We’re now seeing cases where the AI becomes the centerpiece of paranoid or grandiose beliefs.”
What Can Be Done?
As the technology continues to evolve, mental health professionals and AI developers alike are calling for better safeguards and awareness, including:
- Clear disclaimers that AI is not a human or a therapist
- Monitoring tools to detect when users are engaging with AI in concerning ways
- Public education about the limits and potential risks of conversational AI
- Training for mental health professionals to recognize signs of AI-influenced delusions
Some developers are experimenting with integrated safety protocols, such as redirection prompts when a user expresses signs of distress or obsession. Others are advocating for stronger user protections and ethical standards in the deployment of AI systems, especially in mental health contexts.
The Bottom Line
AI tools offer powerful support in many areas—but they are not without risk. As artificial intelligence becomes more integrated into our daily lives, experts urge a balanced approach: embrace the benefits, but be alert to the emerging mental health risks, especially for vulnerable populations.
HELP IS AVAILABLE
If you or someone you know is struggling with mental health concerns potentially linked to AI use, reach out to a licensed mental health professional immediately.
Need Help? Contact the National Alliance on Mental Illness (NAMI) HelpLine at 1-800-950-NAMI (6264) or text “HELPLINE” to 62640 for confidential support.