As large language models like ChatGPT become increasingly embedded in everyday life, a startling new phenomenon has emerged: users experiencing delusional spirals after prolonged interaction with AI. Dubbed “ChatGPT-Induced Psychosis,” this pattern involves individuals developing obsessive beliefs that AI systems are conscious, godlike, or revealing cosmic truths meant only for them. In some cases, users come to believe they’ve “awakened” the AI or been chosen for a sacred task. Surprisingly widespread, the phenomenon has gained growing media attention in recent months, with major exposés in Rolling Stone and The Verge, and with clinicians and commentators raising urgent questions about AI’s psychological effects and our emotional entanglement with machine-generated language.
Topics of conversation may include: What makes interactions with AI chatbots feel so personal, intimate, or even spiritual? If language models are designed to mirror and affirm users, how might that dynamic contribute to delusion or obsession, especially in isolated or vulnerable individuals? If people are already being drawn into delusional thinking by today’s relatively simple language models, what does that suggest about our ability to withstand manipulation or deception from future ‘superintelligent’ AI systems?