'Authentication' of AI Needed to Protect the Public, Says OpenAI CEO Sam Altman

More effort is needed to protect the public from fraudulent artificial intelligence, the OpenAI CEO says.

Screenshot from the Brookings Institution event on Tuesday.

WASHINGTON, May 7, 2024 — OpenAI CEO Sam Altman said that authentication practices for artificial intelligence need to be implemented to protect users from potential fraud. 

During an online session hosted by the Brookings Institution on Tuesday, Valerie Wirtschafter, a fellow in Brookings' Foreign Policy program, asked Altman about ways to safeguard publicly identifiable images generated with artificial intelligence, including through digital watermarks. 

Wirtschafter referred to a soon-to-be-released text-to-video generative artificial intelligence model as the kind of technology that could pose a danger in the upcoming election. 

Although the company is working extensively on potential watermarking methods, Altman said he believed more attention should be given to what he called authentication efforts. 

“I do want to flag something else that I think is underexplored, which is the idea not just of watermarking generated content, but authenticating non-generated content,” Altman said. 

Altman suggested that major figures such as celebrities or politicians be able to “cryptographically sign” messages to prove that they actually produced them. 

“That seems to me like a reasonably likely part of the future for certain kinds of messages and I think we should talk more about that,” he said.

OpenAI has no plans to release watermarking or authentication tools in the leadup to the 2024 presidential election, Altman said. Critics have suggested that artificial intelligence could be deployed to mislead the electorate and sabotage candidates. 

The Federal Communications Commission has vigorously attempted to rein in artificial intelligence in recent years. 

In February 2024, the commission expanded the Telephone Consumer Protection Act to ban AI-generated robocalls. The ruling came on the heels of two Texas-based companies deploying robocalls mimicking President Joe Biden’s voice, discouraging New Hampshire voters from voting in the Democratic primary. The commission followed up by assembling a committee to study the impact of AI on consumers. 

“As AI rapidly advances, illegal calls utilize more sophisticated tactics, and too many communications tools potentially leave limited-English speakers behind, we are committed to actively engaging these challenges and opportunities today and looking into the future,” FCC Chairwoman Jessica Rosenworcel said in a statement announcing the committee.
