Fall 2025

No lecture. No readings. Just Conversation.
 
This series is an opportunity for students across disciplines to meet and talk about timely and engaging technology-related topics.

Come share your thoughts with us!
 
Free pizza is provided.

This program is open to all eligible individuals. USC operates all of its programs and activities consistent with the University’s Notice of Non-Discrimination. Eligibility is not determined based on race, sex, ethnicity, sexual orientation, or any other prohibited factor.

Conversation Topics

  • "ChatGPT-Induced Psychosis"

    September 9, 2025 | 1:00pm - 2:00pm

    As large language models like ChatGPT become increasingly embedded in everyday life, a startling new phenomenon has emerged: users experiencing delusional spirals after prolonged interaction with AI. Dubbed “ChatGPT-Induced Psychosis,” this pattern involves individuals developing obsessive beliefs that AI systems are conscious, godlike, or revealing cosmic truths meant only for them. In some cases, users come to believe they’ve “awakened” the AI or been chosen for a sacred task. Surprisingly widespread, the phenomenon has gained growing media attention in recent months, with major exposés in Rolling Stone and The Verge and clinicians and commentators raising urgent questions about AI’s psychological effects and our emotional entanglement with machine-generated language.

    Topics of conversation may include: What makes interactions with AI chatbots feel so personal, intimate, or even spiritual? If language models are designed to mirror and affirm users, how might that dynamic contribute to delusion or obsession, especially in isolated or vulnerable individuals? If people are already being drawn into delusional thinking by today’s relatively simple language models, what does that suggest about our ability to withstand manipulation or deception from future ‘superintelligent’ AI systems?
    RSVP
  • "Brain Rot"

    September 24, 2025 | 1:00pm - 2:00pm

    Once a joke about spending too much time online, the term “brain rot” has become a serious shorthand for describing the mental and cultural effects of nonstop digital consumption. On platforms like TikTok and Instagram, the infinite scroll delivers a stream of rapid-fire content optimized for endless engagement. Increasingly, the term also captures the cognitive toll of wading through AI-generated slop—content designed for clicks, not meaning—in an online environment where quantity overwhelms quality and algorithms reward engagement over substance. In this environment, information is consumed in fragments, stripped of context, and recycled at speed, eroding not just attention spans, but also perhaps our ability to think critically.

    Topics of conversation may include: What does the casual, ironic use of the term “brain rot” reveal about how people have come to respond to constant digital overstimulation with acceptance—and even resignation—as an unavoidable part of online life? What happens to cultural or intellectual value when so much of what we consume is generated by AI for visibility, not substance? Are algorithms shaping not just what we see, but what we find funny, interesting, or even worth thinking about? How much of our taste is actually ours?
    RSVP
  • The Rise of AI Therapy

    October 7, 2025 | 1:00pm - 2:00pm

    In 2025, therapy and companionship have overtaken writing as the most common use of generative AI, according to a recent study by Filtered. Closely following are “organize my life” and “find purpose,” revealing that users are turning to AI not just for productivity—but for emotional support, guidance, and even help in shaping their sense of meaning, identity, and understanding of the world. At the very same time, a wave of reports in 2025 described a surge in so-called “ChatGPT-induced psychosis,” as clinicians and researchers noted growing cases of individuals experiencing delusions, paranoia, or emotional breakdowns linked to intense, prolonged interactions with AI chatbots.

    Topics of conversation may include: What does the skyrocketing use of AI for therapy say about how deeply people crave a non-judgmental interlocutor—and does relying on a machine for that role risk flattening or distorting what we understand as emotional growth and support? If AI is increasingly used for emotional support and self-understanding, what responsibilities do developers, platforms, and policymakers have to ensure these systems are psychologically safe? If AI therapy is built on technology designed to be overly helpful, affirmational, and agreeable, can it truly offer effective support—or does it risk reinforcing users’ assumptions while avoiding the discomfort and challenge necessary for real emotional growth?
    RSVP
  • AI in the College Classroom

    October 21, 2025 | 1:00pm - 2:00pm

    As generative AI tools like ChatGPT become more advanced and accessible, students face new questions about how—and when—to use them in academic work. While these tools can help with brainstorming, organization, and even research, their growing presence in the classroom is reshaping how learning happens, how effort is measured, and what counts as original thought. For many students, the line between using AI as a tool and outsourcing the thinking process has become increasingly blurry. Meanwhile, as institutions scramble to update policies and professors rethink assignments, students themselves are sometimes left to navigate a shifting landscape of responsibility, creativity, and integrity.

    Topics of conversation may include: How can students tell when they’re using AI to support their learning versus relying on it to avoid engaging in critical thinking themselves? What kinds of assignments or tasks, if any, should students feel comfortable using AI for—and which ones should they avoid if the goal is genuine learning? In what ways can students integrate AI into their academic work to challenge themselves, think more critically, or explore ideas more deeply?
    RSVP
  • The War on "Woke AI"

    November 4, 2025 | 1:00pm - 2:00pm

    As AI systems become more embedded in public life, debates over their political and cultural alignment have intensified. Elon Musk’s Grok, branded as an “anti-woke,” uncensored chatbot, has sparked controversy—most recently after praise for Hitler and other extremist content surfaced following updates meant to strip out liberal bias. Meanwhile, the Trump White House is preparing a new executive order that would require AI models used by federal contractors to be politically neutral, targeting what it labels as “woke AI.”

    Topics of conversation may include: How does the pushback against so-called “woke AI” reflect a broader distrust of knowledge institutions that are also accused of liberal or progressive bias, like Wikipedia (labeled “Wokepedia” by Elon Musk), academia, and journalism? What does it mean to demand political “neutrality” from AI systems, and is such neutrality even possible? How does the effort to regulate or reshape AI around specific ideological lines echo authoritarian strategies to control knowledge, suppress dissent, and dominate public narratives?
    RSVP
  • Accelerationism

    November 18, 2025 | 1:00pm - 2:00pm

    Accelerationism holds that ever-faster technological development—unchecked and unapologetic—is both inevitable and essential to solving humanity’s greatest challenges. In the past decade, the rise of artificial intelligence has sparked renewed interest in this belief, especially in Silicon Valley, where speed and disruption are treated as moral goods. Once a fringe idea, accelerationism now echoes in political rhetoric—including, some argue, the techno-utopian and deregulatory language coming out of the current White House and its tech-aligned allies.

    Topics of conversation may include: By glorifying disruption and ignoring questions of equity or environmental sustainability, what kinds of harms does accelerationism risk overlooking—or even reinforcing? Who gets to define what “the future” looks like—and what’s left out of that vision? Is the rapid, unregulated development of AI a triumph of accelerationist thinking—or a warning sign of the dangers inherent in this philosophy?
    RSVP