Senate Witnesses Call For AI Transparency

Regulatory AI transparency will increase federal agency and company accountability to the public.

Photo of Richard Eppink of the American Civil Liberties Union of Idaho Foundation

WASHINGTON, May 16, 2023 – Congress should increase regulatory requirements for transparency in artificial intelligence while adopting the technology in federal agencies, said witnesses at a Senate Homeland Security and Governmental Affairs Committee hearing on Tuesday.

Many industry experts have urged more federal AI regulation, warning that widespread AI applications could lead to broad societal risks, including an uptick in online disinformation, technological displacement, algorithmic discrimination, and other harms.

The hearing addressed implementing AI in federal agencies. Congress is concerned about ensuring that the United States government is prepared to capitalize on the capabilities afforded by AI technology while also protecting the constitutional rights of citizens, said Sen. Gary Peters, D-Michigan.

The United States “is suffering from a lack of leadership and prioritization on these topics,” said Lynne Parker, director of the AI Tennessee Initiative at the University of Tennessee, in her comments.

In a separate hearing Tuesday, OpenAI CEO Sam Altman said it is “essential that powerful AI is developed with democratic values in mind, which means US leadership is critical.”

Applications of AI are immensely beneficial, said Altman. However, “we think that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models.”

To do so, Altman suggested that the U.S. government consider a combination of licensing and testing requirements for the development and release of AI models above a certain threshold of capability.

Companies like OpenAI can partner with governments to ensure AI models adhere to a set of safety requirements, facilitate efficient processes, and examine opportunities for global coordination, he said.

Building accountability into AI systems

Seizing this moment to modernize the government’s systems will strengthen the country, said Daniel Ho, professor at Stanford Law School, who encouraged Congress to lead by example in implementing accountable AI practices.

An accountable system ensures that agencies are answerable both to the public and to those directly affected by AI algorithms, added Richard Eppink of the American Civil Liberties Union of Idaho Foundation.

A serious risk of implementing AI is that it can conceal how the systems work, including the bad data they may be trained on, said Eppink. This can prevent accountability to the public and puts citizens’ constitutional rights at risk, he said.

To prevent this, the federal government should implement transparency requirements and governance standards, including transparency during the implementation process, said Eppink. Citizens have a right to the same information the government has so that accountability can be maintained, he concluded.

Parker suggested that Congress appoint a chief AI director at each agency to help develop agency-specific AI strategies, and establish an interagency chief AI council to govern the use of the technology in the federal government.

Getting technical talent into the government workforce is a prerequisite for addressing a range of issues facing the country today, agreed Ho, who said that less than two percent of AI personnel work in government. He urged Congress to establish pathways for technical agencies to attract AI talent to public service.

Congress considers AI regulation

Congress’s attention has been captured by growing AI regulatory concerns.

In April, Senator Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability by requiring companies to allow independent experts to review and test AI technologies and to make the results publicly available.

Later in April, Representative Yvette Clarke, D-N.Y., introduced a bill that would require the disclosure of AI-generated content in political ads.

The Biden administration announced on May 4 that it will invest $140 million to launch seven new National AI Research Institutes, bringing the total number of institutes across the country to 25.
