Should the Federal Government Regulate Artificial Intelligence?
Two experts were on opposite sides of the debate about how to mitigate the downsides of AI.
Riley Haight
WASHINGTON, July 12, 2022 – Representatives from academia and a nonprofit diverged at a Bipartisan Policy Center event Tuesday about whether the government should step in and minimize problems associated with artificial intelligence, including bias and discrimination in algorithms.
“We really do want actors to help us establish national and international guidelines,” said Miriam Vogel, president and CEO of EqualAI, a nonprofit that seeks to reduce bias in AI. “We are driving full speed without lanes, without speed limits to manage the expectations.”
While acknowledging the benefits of AI in society today, Vogel said its algorithms present risks that can lead to bias and discrimination. She shared the example of how voice and facial recognition systems can fail to recognize certain voices or skin tones.
AI is used across various sectors and powers algorithms that tailor services to individuals. Panelists referenced the use of AI algorithms in suspect identification for criminal justice, in disease diagnosis in health care, and in movie and employment recommendations.
Vogel said regulation will establish clear expectations for AI companies to minimize such risks.
Adam Thierer, a senior research fellow at the Mercatus Center at George Mason University, said he is “a little skeptical that we should create a regulatory AI structure” and instead proposed educating workers on how to set best practices for risk management. He called this an “educational institution approach.”
He said that because federal law takes so long to enact, he wants to reach AI workers directly, such as the computer programmers and AI innovators “of tomorrow,” to do a better job of “baking best practices” into AI.
“I think baking best practice principles in by design begins with an educational focus,” said Thierer.
Thierer said he wants to give this job to trusted third parties to suggest pathways forward, including ethical evaluations and consultations with AI companies. He said that when it comes to AI rules across different sectors, “we don’t need one overarching standard to rule them all.”
Thierer added that because of how fast AI is changing, “it can’t go through the same regulatory process.” He argued that if regulation is put in place, we will lose AI innovators.
Vogel disagreed with Thierer, saying she doesn’t believe there is a risk of losing innovators by regulating AI. Instead, she said, “I see regulation as the partner to innovation.”
She said that because there is no government regulation of AI, companies are left to regulate themselves if they choose, referencing EqualAI’s Badge Program, which seeks to help companies navigate AI risks.
“We need to have a governance system put in place to make sure continual testing is taking place,” said Vogel.