Artificial Intelligence in the 2024 US Presidential Election

The New AI Project’s team of writers and researchers examines AI in the election, spotlighting how artificial intelligence is affecting the democratic process and the integrity of the presidential race.

The AI Vote: Artificial Intelligence Influences the Closest Presidential Race in Decades

As the 2024 U.S. presidential election approaches, the rise of generative AI is amplifying the threat of disinformation, potentially skewing public opinion in one of the closest races in decades. From AI-generated images misrepresenting candidates to foreign interference campaigns leveraging deepfakes, malicious actors now find it easier than ever to spread false narratives. Experts warn that while AI may not introduce entirely new threats, it significantly accelerates existing ones, challenging both voter trust and the integrity of democratic processes. With regulatory measures struggling to keep pace, the pressing question remains: how can we tame AI to work in support of our democratic institutions instead of against them? 

View the article, written by Clare Hill ’25, in its entirety on the New AI Project’s LinkedIn page.

Imagery created using Image Creator from Designer.

Expl(ai)ned

Stay up-to-date with recent developments in the world of AI via The New AI Project LinkedIn page with monthly newsletters, weekly reads, and deep dives. Each month, The New AI Project publishes Expl(ai)ned so you can keep up with recent news in the world of Generative AI, including new features, AI in the workplace, social and ethical implications, regulations, and research revelations. Be sure to follow The New AI Project so you never miss an article.

The New AI Project is sponsored by the Technology and Digital Studies Program and the Office of Digital Strategy in the College of Arts & Letters, and aims to help the Notre Dame community of students, faculty, staff, alumni, and friends understand, apply, and evaluate recent developments in Artificial Intelligence.

Graham Wolfe ’26 serves as editor of The New AI Project, with Clare Hill ’25, Aiden Gilroy ’27, Mary Claire Anderson ’26, Annie Zhao ’25, and Gaby Sanchez ’26 all contributing. John Behrens ’83, Director and Professor of the Practice of Technology and Digital Studies, Concurrent Professor of the Practice in the Department of Computer Science and Engineering, and Director of the Office of Digital Strategy in the College of Arts & Letters, is the faculty advisor for The New AI Project.

AI and the 2024 Election: Revolutionary Force or Overhyped Distraction?

As the 2024 presidential election continues to unfold, many Americans are concerned about the role of AI in transforming political campaigns and voter behavior. While some argue that artificial intelligence influences how voters perceive candidates via the spread of AI-generated memes and deepfakes, others say that its impact is being exaggerated and that AI’s contributions are not fundamentally changing presidential campaigning or voter behavior. As the debate continues, the question remains: is AI revolutionizing the election landscape, or is its influence being overstated?

To read the article in its entirety, please visit The New AI Project’s LinkedIn page.

Imagery created using Image Creator from Designer.

Election: US Updates and International Comparisons

In February 2024, we shared that Big AI companies are working to make sure that chatbots give unbiased voter information to users. Their progress was then tested by Proof News, revealing that these chatbots are not yet prepared. OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mistral’s Mixtral all failed simple questions about the US election process, providing harmful, biased misinformation 40% of the time. In one case, Llama even suggested that there was a Vote by Text option in California. As a temporary fix, Google has blocked users from asking Gemini election-related questions in any country that currently has a major election, beginning with the US and India. While this does not solve the problem, it may prevent misinformation from spreading and harming voters.

To read the article in its entirety, please visit The New AI Project’s website.

Election Update: AI Robocalls Outlawed & More

In January 2024, news broke about fake robocalls that impersonated Joe Biden to give ill-intentioned voting advice. In February 2024, these calls were outlawed. The Federal Communications Commission made any robocalls created with AI voice-cloning illegal, with the aim of maintaining a fair presidential election and inhibiting misinformation. While this use of AI voice-cloning has been banned in elections, AI can capture human likeness and be wielded politically in other ways, such as in video, image, and text. As a result, tech giants have pledged to help fight election fraud. For example, Microsoft, Meta, Google, OpenAI, Anthropic, X (formerly Twitter), and others signed a pledge to create a common strategy for combating politically misleading deepfakes.

To read this article in its entirety, please visit The New AI Project’s website.

Imagery created using Image Creator from Designer.

Should We Tame AI? Regulating AI and Freedom of Speech

Usually, we discuss AI regulation as a balancing act between the ethical use of AI and innovation. What happens when the conversation focuses on the balance between the ethical use of AI and our First Amendment right to freedom of speech?

YouTuber Christopher Kohls claims that California’s new AI laws violate his freedom of speech. Known on YouTube as “Mr Reagan,” Kohls posted a video that showed Kamala Harris disparaging herself and her administration – Kohls utilized AI-based technology to “clone her voice and generate a self-mocking imitation.” This is called a deepfake – a type of AI-generated output that appears to be real. Elon Musk reposted the video, and at the time of writing, this repost had garnered more than 136 million views. While some viewers may have recognized the video’s satirical tone, it was realistic enough that others may have mistaken it for a real audio clip of the vice president. When California Governor Gavin Newsom signed a range of AI-related regulations in September 2024, aimed at restricting the spread of deceptive media about candidates, Kohls sued to block one of these laws on free-speech grounds. After hearing the arguments, U.S. District Judge John A. Mendez “granted a preliminary injunction against the law, stating that it likely violates the First Amendment.” Even though the California laws outlined exceptions for parody and satire, the judge did not believe that the laws contained enough nuance to protect these forms of speech.

To read this article about the use of deepfakes and content labeling in its entirety, please visit The New AI Project’s October 2024 newsletter, Expl(ai)ned: AI Companionship, Podcasts, and Freedom of Speech.

Imagery created using Image Creator from Designer.
