Generative AI poses a significant threat to future U.S. elections. Political campaigns and individual bad actors can and will use generative AI to disinform, divide, and bewilder the voting public at a scale previously unseen. They will spread disinformation by, among other means, producing fake images and video, generating inordinate quantities of deceptive text, and building conversational social media bots aimed at radicalization. To further complicate matters, these technologies will be deployed at a time when the United States faces unparalleled division and a climate of mistrust and skepticism, even on the matter of election integrity itself. As the 2024 presidential election comes into focus, it is crucial that policymakers, AI developers, and others take steps to combat these risks, including by forecasting and testing the specific ways in which these technologies may be deployed to these ends.
Students in this Collaboratory will join an interdisciplinary research team to explore potential misuse and abuse cases for generative AI in the 2024 presidential election. Under the guidance of faculty and other project leaders, students will conceptualize and research specific instances of potential AI-generated disinformation campaigns in 2024, and then model, test, and evaluate their scenarios. Students do not need any prior expertise with AI. Indeed, our hope is to assemble a team of students with diverse perspectives and backgrounds.