Senate Subcommittee Hears Testimony on Risks of AI Management
Panelists said that AI regulations are necessary to promote safety.
Ari Bertenthal
WASHINGTON, Sept. 17, 2024 - A group of AI experts told a Senate subcommittee in a Tuesday hearing that AI oversight is needed as development of the technology progresses rapidly.
In opening statements before the Senate Subcommittee on Privacy, Technology and the Law, the experts expressed concern about the dangers posed by the fast-paced development of AI, combined with what they characterized as a serious lack of regulation by the federal government.
“Security was not prioritized when I was at OpenAI,” said ex-OpenAI researcher William Saunders. “There were long periods of time where there were vulnerabilities that would have allowed me (...) to bypass access controls and steal the company's most advanced AI systems.”
Saunders was joined on the panel by Helen Toner, director of strategy at Georgetown University’s Center for Security and Emerging Technology; David Harris, University of California Berkeley Chancellor’s Public Scholar; and computer scientist Margaret Mitchell, formerly with Google AI.
Toner specifically cited the rise of artificial general intelligence, or AGI, as an issue of particular concern.
The term AGI is generally used to describe artificial intelligence that is roughly as smart, or smarter, than humans. A key concern raised by panelists was that AGI platforms have the capacity for intentional deception or withholding information.
Toner noted that AGI is not a matter of science fiction, saying that it is only one to three years away from becoming a reality, something that she said would be extraordinarily disruptive at a minimum, and outright dangerous at a maximum.
“[These systems] could lead to literal extinction,” said Toner. “[They] are affecting hundreds of millions of people, even in the absence of scientific consensus about how they work or what will be built next.”
Subcommittee Chairman Sen. Richard Blumenthal, D-Conn., raised another concern – that AI services are being used to create explicit content of nonconsenting individuals, including children.
Blumenthal, along with ranking member Sen. Josh Hawley, R-Mo., said federal regulation was key to preventing these uses of AI. He pointed to the bipartisan Kids Online Safety Act as a method of holding AI platform companies accountable.
Throughout their testimony, the panelists stressed that federal and state regulations were essential.
Harris said in his testimony that limited regulations were ineffective, and that "mays" must be replaced with "shalls" in federal regulatory language.