Senators Urge Tech Giants to Address Election Disinformation

Senators ask tech CEOs to detail by Oct. 1 how their platforms will address AI-driven election disinformation.

Photo of Sen. Amy Klobuchar, D-Minn., by Ethan Miller.

WASHINGTON, Sept. 23, 2024 – In the lead-up to the 2024 general election, at least two senators have intensified their push for major tech companies to address election-related misinformation and disinformation being supercharged by artificial intelligence.

In a letter sent last week to the CEOs of Meta, X, Alphabet, Twitch, and Discord, Sens. Amy Klobuchar, D-Minn., and Mark Warner, D-Va., urged tech leaders to detail the specific actions their platforms are taking to combat AI-generated disinformation. The senators requested thorough responses by Oct. 1, 2024.

Among their questions for the tech CEOs: how their companies are addressing media impersonation, collaborating with other platforms to curb disinformation, and staffing up to manage election misinformation.

“We write to express our persisting concerns about the spread of election-related disinformation on your platforms as the 2024 general election is quickly approaching and to call on your companies to prioritize taking decisive action, including bolstering content moderation resources, to combat deceptive content intended to mislead voters or sow violence,” the senators’ letter urged.

Klobuchar and Warner referenced AI-generated deepfakes already in circulation of Vice President Kamala Harris and former President Donald Trump that have reached millions, warning that this technology has made it harder for voters to distinguish real from fake content. 

“Your companies are on the frontlines of the risks to our democracy posed by online disinformation and technology-enabled election influence, and it is for these reasons that we urge you to prioritize taking action to ensure that you have the policies, procedures, and staff in place to counter and respond promptly to these threats,” the letter continued.

The urgency of the senators’ request was underscored by recent events. Just last week, Elon Musk, owner of X and one of the platform's most influential users, reposted an altered Kamala Harris campaign video that first circulated in June, reportedly in response to California Gov. Gavin Newsom signing legislation on Tuesday banning digitally altered political "deepfakes."

Beyond the call for tech companies to act, there has been growing pressure on the Federal Communications Commission to introduce AI disclosure rules for political ads. Last week, eight Democratic senators, led by Sen. Ben Ray Luján, D-N.M., pressed the FCC to adopt its proposal requiring TV and radio stations to disclose whether political ads contain AI-generated content.

However, the FCC’s authority is limited to traditional broadcast media, leaving platforms like X and Meta – where much of this AI-generated content circulates – outside of its regulatory reach.

With federal agencies grappling with regulatory gaps, states have taken matters into their own hands. At least 26 states have passed or are considering bills regulating the use of generative AI in election-related communications. 

There have already been instances of generative AI being "used to confuse — and even suppress — voters," Sen. Warner told Axios in an email. "I don't think genAI developers or platforms are taking the misuse potential serious enough," added Warner, who chairs the Senate Intelligence Committee.
