Conversations Series: Spring 2024

No lecture. No readings. Just Conversation.
This series is an opportunity for students across disciplines to meet and talk about timely and engaging technology-related topics.

Come share your thoughts with us!
Free pizza is provided.

  • Tech CEOs

    February 6, 2024 | 3:00pm - 4:00pm

    In today's technology landscape, the immense power, wealth, and influence wielded by tech CEOs have become increasingly apparent. Elon Musk, known for his capricious management style, unilaterally alters X's policies almost weekly, highlighting the unpredictable nature of such concentrated authority. The recent drama at OpenAI, in which CEO Sam Altman was abruptly dismissed by the board only to be hastily reinstated while board members resigned, underscores the volatile and often unregulated governance within major tech companies. These events reflect a growing concern about the unchecked power of tech leaders, raising questions about accountability and the broader impact of their decisions on society and technology's ethical landscape.

    Topics for discussion during this conversation might include: In light of the OpenAI incident involving Sam Altman, what does the lack of consistent and transparent leadership in major tech firms reveal about the ethical and operational challenges facing the tech industry today? How does the concentration of power in the hands of individual technology CEOs impact the democratic nature of technology governance and the representation of diverse user interests?
  • Apple Vision Pro

    February 20, 2024 | 3:00pm - 4:00pm

    Apple's Vision Pro, with its ambitious aim to revolutionize spatial computing, raises critical questions about use, accessibility, and even the necessity of such advancements. By mandating the use of a headset for interacting with computers, it challenges conventional computing norms, prompting skepticism about whether this is a positive development or merely a solution in search of a problem. While technologically impressive, the Vision Pro (and VR, more generally) invites a deeper examination of its practicality and the broader implications of fostering such a significant shift in everyday digital interactions.

    Topics for discussion during this conversation might include: Does the introduction of technologies like Apple's Vision Pro, which require headsets for spatial computing, align with the actual needs and preferences of users? Considering the broader societal implications, what are the potential consequences of transitioning to spatial computing platforms like Vision Pro on issues of digital divide and accessibility? In what ways could the increased use of Apple's Vision Pro and similar spatial computing technologies impact interpersonal relationships and social skills, especially among younger generations?
  • Technology Nobody Wants

    March 5, 2024 | 3:00pm - 4:00pm

    In 1954, Jacques Ellul introduced his concept of 'technique': the idea that technological development is an autonomous process, driven primarily by an internal logic of efficiency and problem-solving rather than by human desires or societal needs. We sometimes see this concept at work with technologies that are seemingly developed because they are the next, most efficient version, or simply because they can be. Consider, for instance, the unsuccessful efforts of the 1970s to market civilian supersonic aircraft that almost no one wanted to fly on. More recent examples might be Humane's AI Pin, the Rabbit R1, VR movies, and the Facebook Phone.

    Topics for discussion during this conversation might include: What’s your favorite example of a technology (or vapourware) that nobody needs or wants? What role do cultural, social, and economic factors play in determining the success or failure of new technologies? How do the principles of Jacques Ellul's 'technique' manifest in the development of technologies that end up being unwanted, and what does this suggest about the relationship between technological innovation and societal needs?

  • GenAI + Disinformation + 2024 Elections

    March 19, 2024 | 3:00pm - 4:00pm

    In the 2024 U.S. presidential election, the use of generative AI for disinformation poses a significant threat, especially given the existing deep divisions among American voters. This technology's potential to create convincing fake content could further erode public trust and manipulate electoral outcomes. The danger is amplified globally, as nearly half the world's population faces major elections in 2024, raising concerns about the widespread impact of AI-driven disinformation on democratic processes worldwide.

    Topics for discussion during this conversation might include: How can voters differentiate between genuine information and AI-generated disinformation during the 2024 U.S. presidential election, and what strategies can be implemented to educate the public about this issue? In what ways might the deep political divisions in the United States amplify the effects of AI-generated disinformation during the 2024 election, and how can these divisions be addressed to mitigate this impact? Considering the global scale of elections in 2024, how can international cooperation and policy development be utilized to combat the spread of AI-driven disinformation across different countries and political systems?
  • Art and the Artist

    April 2, 2024 | 3:00pm - 4:00pm

    In Collaboration with Corpus Callosum

    At the core of every artistic creation lies an artist. This conversation aims to explore how art and artist mutually influence each other, questioning whether art can transcend the creator's identity. Particularly intriguing is the case of art produced by generative technologies, where the artist's identity is often ambiguous. We will explore whether the context of art's creation should impact our appreciation and understanding of it.

    Topics for discussion during this conversation might include: How do personal experiences, beliefs, and emotions of artists manifest in their work? In what ways does creating art transform the artist? Can and should art be viewed independently from its creator’s identity? How does knowing an artist’s background influence the audience's experience? With AI in art, how do we interpret the relationship between art and artist when the creator is an algorithm? How does an artist's moral stance affect the public reception of their art?
  • P(doom)
    April 16, 2024 | 3:00pm - 4:00pm

    P(doom), commonly spoken as "p doom," is the estimated probability that the emergence of advanced artificial intelligence will lead to the end of humanity. The concept has recently become a topic of intense discussion and debate following the rapid development of AI technologies and related concerns about their potential to surpass human intelligence and control. It has even become somewhat commonplace for individuals in Silicon Valley to introduce themselves by asking about one's p(doom) score (from 0 to 100%), a conversational shorthand for gauging beliefs about the likelihood of AI leading to humanity's downfall.

    Topics for discussion during this conversation might include: How can the development of general and super AI be ethically and safely managed to minimize the p(doom) score, ensuring that such technologies do not pose existential risks to humanity? Is the use of the term p(doom) in Silicon Valley conversations really more about hyping the advancements in AI and creating a sensational narrative, rather than a genuine concern for the existential risks posed by these technologies? And, of course, what is your p(doom) score and why?