Generative AI poses a significant threat to future U.S. elections. Political campaigns and individual bad actors can and will use generative AI to disinform, divide, and bewilder the voting public at a scale previously unseen. They will spread disinformation by, among other means, producing fake images and video, generating inordinate quantities of deceptive text, and building conversational social media bots aimed at radicalization. To further complicate matters, these technologies will be deployed at a time when the United States faces unparalleled division and a climate of mistrust and skepticism, even on the matter of election integrity itself. As the 2024 presidential election comes into focus, it is crucial for policymakers, AI developers, and others to take steps to combat these risks, including by forecasting and testing the specific ways in which these technologies may be deployed to these ends.
Students in this Collaboratory formed an interdisciplinary research team to explore potential misuse and abuse cases for generative AI in the 2024 presidential election. They began with a review of relevant history, literature, and technology, with access to the latest generative AI tools. Guest speakers included Mike Ananny and Mark Schoofs. Over the course of the 2023-2024 school year, the team developed and tested an enclosed social media system with advanced chatbots that argued for particular political perspectives. The team presented their experiences and findings at the Ahmanson Lab Social Media, Disinformation, and Radicalization showcase on May 1, 2024.