At DNC, Experts Sound Alarm on AI's Role in Election Misinformation

Even as deceptive content proliferates, social media companies have sharply downsized their election integrity departments.

Erik Nisbet, professor at Northwestern University (left), and Geoffrey Cowan, chair of communication leadership at USC’s Annenberg Center (right). Photo by Jericho Casper.

CHICAGO, August 19, 2024 – The growing threat AI-driven disinformation poses to fair democratic processes dominated discussion on the first day of the Democratic National Convention, with experts warning of its potential to manipulate voter sentiment before and after the 2024 election.

During a panel hosted by the University of Southern California’s Annenberg School for Communication and Journalism, specialists emphasized how artificial intelligence and automation tools have made it alarmingly easy to create manipulative content that can shape public opinion and behavior.

Political misinformation has become more powerful, evocative, and manipulative, they said, with AI enabling the seamless splicing of clips, voices, and scripts to fabricate persuasive narratives. 

The panelists cautioned that AI-generated media and deepfakes were expected to surge in the lead-up to and aftermath of the 2024 election, a threat exacerbated by the fact that social media companies have significantly scaled back their election integrity and content moderation departments in the eight years since Russia’s infamous 2016 disinformation campaign.

In the past month alone, disinformation and influence operations targeting the U.S., many of them AI-driven, have included:

  • Iran’s Islamic Revolutionary Guard Corps breached the account of a former senior adviser to a presidential campaign, sending fake emails in an attempt to infiltrate the campaign’s accounts and database, reported Microsoft;
  • A network of fake social media accounts on Meta promoted a fictitious political advocacy group that recruited conservatives to run for office as independents. One such candidate won a primary election in Montana, reported The Washington Post;
  • A flood of anti-Trump TikTok videos, created using AI, was traced back to overseas accounts, reported The Wall Street Journal; and
  • Iranian groups used ChatGPT to generate and post articles and opinion pieces on divisive topics, including the conflict in Gaza and the Paris Olympic Games, in an effort to deepen political divisions in the U.S., reported OpenAI.

“This is only August – what’s going to happen in December?” asked Adam Clayton Powell III, executive director of the USC Election Cybersecurity Initiative.

Erik Nisbet, professor of policy analysis & communications at Northwestern University, underscored these concerns, saying, “We know in battleground states, it’s going to be 50/50. Whoever wins, there will be a surge of misinformation and disinformation about the election results.”

Reflecting on the 2020 election, Nisbet noted, “We didn’t see much disinformation prior to the election; it came afterward. Most voter disinformation came from ‘blue-checked’ accounts. It was mostly domestic, attacking election integrity.”

Nisbet said that “AI is supercharging this, making videos more emotional and provocative while lowering the barriers to entry, allowing more people to get involved in high-impact disinformation events.” 

He also pointed to the twin threats of misinformation and what he called a “moral panic” that can erode public confidence. “People may not believe they’re being misled by misinformation, but they often believe others are, and that taints their confidence in the electoral results and in actual democratic processes. ... We need to make sure that we don't inadvertently undermine people's confidence in elections and democratic governance by exaggerating the impact of mis- and disinformation.”

Michael Posner, professor of ethics and finance at New York University, criticized social media platforms for backtracking on commitments they made after 2016 to increase content moderation and election integrity efforts.

“I think the biggest risk is the extent to which traditional social media platforms are amplifying disinformation,” said Posner, claiming the companies have largely acted with impunity over the past eight years.

“We’re back to 2016,” Posner lamented. “In 2016, there was a sense that something had to be done – companies hired people for election integrity and content moderation. But now, platforms like X (formerly Twitter) have drastically cut these teams, with Elon Musk firing 80% of the staff. Meta’s election integrity department has shrunk from 300 to 60 individuals. Videos talking about a stolen election that were taken down from WhatsApp in 2020 are now being put back up.”

Jude Meche, chief information security officer at the Democratic Senatorial Campaign Committee, noted that the deteriorating relationship between social media platforms and election security experts has further compounded the challenges ahead.

“Following the 2016 election, we had calls with X and with Meta all the time. They were working with us,” said Meche. “That no longer exists, that all faded quickly. We don't have counterparts in these companies anymore.”

Posner also pointed out that addressing disinformation was more challenging than ever, given that “we have a candidate who revels in all of this.” 

“Just yesterday, Trump was trying to delegitimize Harris's legitimate claim to candidacy itself, by calling it a coup, which it is not,” said León Krauze, journalist for The Washington Post. Trump’s statement came after New York Times columnist Maureen Dowd had also controversially referred to it as a coup.

Krauze said that when Trump “feeds this machine that is disinformation, conspiracy theories, misinformation” and social media platforms choose to remain neutral, the result may be the kind of political violence seen after Trump lost the 2020 election.
