Alternative AI Futures

Alternative AI Futures is a hands-on workshop series that invites students to directly experiment with, critique, and modify contemporary AI systems in order to reimagine how different design choices, cultural values, and shared goals could produce radically different forms of AI.

Each workshop will begin with a simple but provocative prompt: What if AI were built differently? More specifically, what if factors like transparency, accountability, and social responsibility were the primary values guiding AI system design? What would those systems look like? How would they behave? And how might they address commonly held concerns about the structural limitations and/or social consequences of AI? These “what if” questions will foreground the assumptions embedded in current AI systems and open space for imagining how AI might be built and used differently.

These 1.5-hour workshops will be split between hands-on experimentation and discussion. Students will work directly with small-scale machine learning systems and language models, using tools like Google Colab notebooks to modify and reconfigure how these systems behave. Through guided, hands-on exploration, students will experiment with training data, embedding spaces, system instructions, and model parameters, among other elements, to see how changes at the technical level can give rise to alternative forms of AI, realized as small working prototypes. The discussion portion of each workshop will build on this hands-on work, allowing students to reflect on how technical design choices give rise to broader cultural, social, ethical, and political impacts.
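To give a flavor of this kind of exercise, the following toy sketch (an invented illustration, not actual workshop material) shows how a choice of training data shapes an embedding space: the same word ends up "nearest" to different neighbors depending on which tiny corpus the co-occurrence vectors are built from. The corpora and word choices here are hypothetical examples.

```python
# Toy illustration: training data shapes which words an embedding
# space treats as "similar". Both corpora here are invented examples.
import math
from collections import defaultdict

def train_embeddings(sentences, window=2):
    """Build simple co-occurrence count vectors, one per word."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    vecs[w][words[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest(vecs, word):
    """Return the most similar other word in this embedding space."""
    sims = {w: cosine(vecs[word], v) for w, v in vecs.items() if w != word}
    return max(sims, key=sims.get)

corpus_a = ["ai is a tool for people", "a tool helps people work"]
corpus_b = ["ai is a persona that chats", "a persona that chats with people"]

# The same query word lands near different neighbors in each space.
print(nearest(train_embeddings(corpus_a), "ai"))  # → tool
print(nearest(train_embeddings(corpus_b), "ai"))  # → persona
```

In a workshop setting, students could swap in their own sentences and watch the neighborhoods shift, a small-scale version of how curatorial decisions about training data propagate into a system's behavior.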

AI as Instruments


What if AI were designed to be engaged as a tool or instrument rather than as a persona in a chat interface?

AI as Systems for Critical Thinking


What if AI were structured to encourage critical thinking and make users active participants in reasoning and research rather than passive recipients of answers?

 

AI as Environmentally Accountable


What if AI systems made visible their use of energy, water, labor, and compute in ways that meaningfully altered their behavior or their interaction with users?