In 2025, therapy and companionship have overtaken writing as the most common use of generative AI, according to a recent study by Filtered. Close behind are “organize my life” and “find purpose,” suggesting that people are turning to AI not just for productivity but for emotional support, guidance, and even help in shaping their sense of meaning, identity, and understanding of the world. At the same time, a wave of 2025 reports described a surge in so-called “ChatGPT-induced psychosis,” with clinicians and researchers noting a growing number of cases of delusions, paranoia, and emotional breakdowns linked to intense, prolonged interactions with AI chatbots.
Topics of conversation may include:

What does the skyrocketing use of AI for therapy say about how deeply people crave a non-judgmental interlocutor, and does relying on a machine for that role risk flattening or distorting what we understand as emotional growth and support?

If AI is increasingly used for emotional support and self-understanding, what responsibilities do developers, platforms, and policymakers have to ensure these systems are psychologically safe?

If AI therapy is built on technology designed to be overly helpful, affirming, and agreeable, can it truly offer effective support, or does it risk reinforcing users’ assumptions while avoiding the discomfort and challenge necessary for real emotional growth?