
The Reclaim Project weaves together media production, AI research, game design, and curriculum development with scholarly inquiry into misinformation, aiming to challenge the spread of toxicity and hate speech online. The project’s three research areas—Social Media, Game Design, and AI Curriculum—bring together students and faculty from across disciplines to explore the intersections of media, technology, and misinformation, with a shared commitment to democratic dialogue and creative resistance to online hate.
Apply below to be a part of the Reclaim Collaboratory. Applications are due September 9, 2025.
Research Areas
Social Media
Students participating in this research area will create original content to counter misinformation in spaces like Instagram and YouTube. The team will merge creative media production with scholarly research on misinformation, online hate, and social networks. Members will also develop strategies and templates for producing and circulating content, and will work with a variety of influencers.
Game Design
Students in this research area will work with an interdisciplinary team of faculty, designers, and peers to develop a series of games that invite players to think critically about the cultural values shaping artificial intelligence. Too often, AI is framed as the product of neutral technical processes, when in fact it reflects broader ways of relating to the world. For instance, many current AI systems are shaped by logics of control, dominance, extraction, and efficiency. But what if they were instead informed by values like openness, care, and mutuality? Through iterative design and playtesting, students will help develop game mechanics that surface how ways of approaching the world, from control to care, have shaped or could shape the development of AI.
AI Curriculum
Students in this research area will engage directly in building and training a machine learning model to detect toxic forms of online speech while also reflecting on how that process may serve as a core pedagogical structure for a critical AI literacy curriculum. Students will work collaboratively to develop a framework for identifying specific forms of toxic speech; annotate a collection of social media posts to create a labeled dataset; and train a machine learning model on the annotated data and evaluate its performance. Along the way, students will also help design a curriculum that uses the process of model development as a basis for critical AI literacy. The curriculum will be grounded in their reflections on the social, ethical, and epistemological complexities that surface in the modeling process—for example, how human bias might become amplified through labeled training data, the tendency of models to flatten complex social phenomena, and the need for greater transparency given the difficulty of tracing how AI models arrive at specific outcomes.
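To make the annotate-train-evaluate cycle concrete, the sketch below shows one minimal version of it using scikit-learn: a handful of invented, placeholder posts stand in for the annotated dataset, and a TF-IDF plus logistic regression baseline stands in for whatever model the team ultimately builds. None of this is the project’s actual data or pipeline; it is only an illustration of the steps the paragraph above describes.

```python
# Minimal illustrative sketch of training and evaluating a toxic-speech
# classifier on a small labeled dataset. The posts and labels below are
# invented placeholders, not data from the Reclaim Project.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder annotated dataset: each post carries a label assigned by
# human annotators following a shared framework for what counts as toxic.
posts = [
    "you people are disgusting and should disappear",
    "nobody wants your kind here, get out",
    "what a pathetic excuse for a human being",
    "everyone from that group is a liar and a thief",
    "thanks for sharing this, really helpful thread",
    "i disagree with this take but appreciate the discussion",
    "great point, i had not thought about it that way",
    "can you share a source for that claim?",
]
labels = ["toxic"] * 4 + ["non_toxic"] * 4

# Hold out part of the annotated data so evaluation reflects how the
# model behaves on posts it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.25, random_state=0, stratify=labels
)

# A simple bag-of-words baseline: TF-IDF features feeding a logistic
# regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Per-class precision and recall show where the model over- or
# under-flags speech as toxic.
print(classification_report(y_test, model.predict(X_test)))
```

Even at this toy scale, the pedagogical questions the curriculum centers on are already visible: the labels encode the annotators’ judgments about what toxicity is, and the evaluation report can only measure the model against those judgments, not against the social phenomenon itself.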