U.S. Must Take Lead on Global AI Regulations: State Department Official

Call for leadership comes during pivotal time in AI development.

Photo of State Department official Jennifer Bachus in December 2014 by Ardian Nrecaj used with permission

WASHINGTON, May 31, 2023 – A State Department official is calling for a United States-led global coalition to set artificial intelligence regulations.

“This is the exact moment where the US needs to show leadership,” Jennifer Bachus, assistant secretary of state for Cyberspace and Digital Policy, said last week on a panel discussing international principles on responsible AI. “This is a shared problem and we need a shared solution.”

She opposed pitting the U.S. and China against one another in the AI race, saying it would “ultimately always lead to a problem.” Instead, Bachus called for an alliance of the United States, the European Union, and Japan to take the lead in creating a legal framework to govern artificial intelligence.

The introduction of OpenAI’s ChatGPT earlier this year sent tech companies rushing to create their own generative AI chatbots. Competition among tech giants has heated up with the recent releases of Google’s Bard and Microsoft’s Bing chatbot. Like ChatGPT, these chatbots are built on large language models and can access data from the internet to answer queries or carry out tasks.

Experts are concerned about the dangers posed by this unprecedented technology. On Tuesday, hundreds of tech experts and industry leaders, including OpenAI CEO Sam Altman, signed a one-sentence statement calling the existential threats posed by AI a “global priority” on par with “pandemics and nuclear conflicts.” In March, Elon Musk joined several AI experts in signing an open letter urging a pause on “giant AI experiments.”

Despite the pressing concerns about generative AI, there is rising criticism that policymakers have been slow to put forward adequate legislation for this nascent technology. Panelists argued this is partly because legislators have difficulty understanding technological innovations. Michelle Giuda, director of the Krach Institute for Tech Diplomacy, argued for a more proactive contribution from academia and tech firms.

“There is a risk of relying too much on the government to regulate ahead of where innovation is going and providing the clarity that’s needed,” said Giuda. “We all know that the government isn’t going to stay ahead of the innovation curve, but this is an ongoing dialogue between tech companies, governments and civil society.”

Microsoft’s chief responsible AI officer, Natasha Crampton, agreed that developers and experts in the field must play a central role in crafting and implementing legislation pertaining to artificial intelligence. She added, however, that businesses deploying AI technology should also share in that responsibility.

“It is our job to make sure that safety and responsibility is baked into these systems from the very beginning,” said Crampton. “Making sure that you are really holding developers to very high standards but also deployers of technology in some aspects as well.”

Earlier in May, Sens. Michael Bennet, D-Colo., and Peter Welch, D-Vt., introduced a bill to establish a federal agency to oversee artificial intelligence. The Biden administration also announced $140 million in funding to establish seven new National AI Research Institutes, bringing the nation’s total to 25.