Government Needs to Step Up and Regulate AI Algorithms, Argue Authors at Brookings Institution
Screenshot of the moderator (left) with Michael Kearns and Aaron Roth at the Brookings Institution

WASHINGTON, January 14, 2020 – Artificial intelligence algorithms need to be regulated to combat racial and gender biases, said Michael Kearns and Aaron Roth in a Tuesday presentation at the Brookings Institution.

Kearns, co-author of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, said replacing bias-prone humans with computers does not solve racism. He gave the example of biased law enforcement; if police are engaging in racist profiling, then they are feeding racist data into the system.

Indeed, eliminating bias from an algorithm itself does not necessarily mean the model's outputs will be free of racial disparities, because the data the model learns from can still encode discrimination.

Roth, his co-author, said companies already have the tools they need to minimize racial and gender biases. For example, he described a health product that exhibited racial bias against African Americans because the company did not have the data it needed.

Roth said the company could have improved fairness by gathering better data and by choosing a proxy better suited to the quantity it actually needed to measure.
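The product itself was not detailed in the session; as a minimal, hypothetical sketch of why proxy choice matters, the snippet below assumes past health spending is used as a stand-in for true health need, and that one group historically spends less for the same level of need.

```python
# Hypothetical illustration of how a poorly chosen proxy can bias a model.
# Assumption (not from the talk): past spending stands in for true health need,
# and one group historically spends less for the same level of need.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
need = rng.normal(50, 10, n)             # true underlying health need

# Group B spends 30% less for the same need (e.g., unequal access to care),
# so spending is a systematically biased proxy for need.
spending = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

def enrollment_rates(score):
    """Enroll the top 20% by score; return each group's enrollment rate."""
    enrolled = score >= np.quantile(score, 0.8)
    return enrolled[group == 0].mean(), enrolled[group == 1].mean()

print("proxy = spending:", enrollment_rates(spending))
print("proxy = need:    ", enrollment_rates(need))
# Ranking on the spending proxy enrolls group B far less often despite equal
# need; ranking on a proxy closer to actual need roughly equalizes the rates.
```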

There is no single definition of fairness, and seeking to improve fairness comes at some cost. For example, Kearns said the health company in the example above would have to spend money collecting better data if those data were not readily available.
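The speakers did not enumerate fairness definitions, but a minimal sketch using two widely studied criteria, demographic parity and equal opportunity (illustrative choices, not ones attributed to the authors), shows why no single definition exists: the same predictions can satisfy one criterion and violate the other.

```python
# Two common fairness criteria applied to the same hypothetical predictions.
# The numbers are contrived to show that the criteria can disagree.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # true outcomes
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 1])  # model predictions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

def selection_rate(g):
    """Fraction of group g predicted positive (demographic parity)."""
    return y_pred[group == g].mean()

def true_positive_rate(g):
    """Fraction of truly positive members of group g predicted positive
    (equal opportunity)."""
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

print("selection rates:", selection_rate(0), selection_rate(1))          # 0.25 vs 0.75
print("true positive rates:", true_positive_rate(0), true_positive_rate(1))  # 0.5 vs 0.5
# Equal opportunity holds (equal true positive rates) while demographic
# parity fails -- satisfying one fairness definition does not satisfy another.
```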

Regulation of AI is needed in order to create a “level playing field” with big technology companies, he said. Regulators need to think more like the giants they oversee, and they need better skills to audit algorithms, both authors said.

In response to the moderator asking what data scientists should do to enforce fairness, Kearns said some small changes could yield improvement: data scientists can frame fairness as a constrained optimization problem, maximizing a model's accuracy subject to fairness constraints, as in the sketch below. Companies have the ability to make these adjustments, but it comes down to whether they have the will to do so.
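Kearns did not present code at the event; as a minimal sketch of the constrained-optimization framing, the example below (all names and data hypothetical) trains a logistic-regression model with a penalty on the demographic-parity gap, where the penalty weight trades accuracy against fairness.

```python
# Minimal sketch of fairness as constrained optimization, here via a penalized
# objective: logistic loss + lambda * |demographic-parity gap|. All data and
# names are hypothetical; no code was presented at the event.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
group = rng.integers(0, 2, n)                 # protected attribute (not a feature)
X = rng.normal(size=(n, d))
X[:, 0] += 1.5 * group                        # one feature correlates with group
y = (X @ rng.normal(size=d) + rng.normal(0, 1, n) > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def grads(w, lam):
    p = sigmoid(X @ w)
    g_loss = X.T @ (p - y) / n                # gradient of average logistic loss
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)                           # derivative of the sigmoid
    g_gap = X[group == 1].T @ s[group == 1] / (group == 1).sum() \
          - X[group == 0].T @ s[group == 0] / (group == 0).sum()
    return g_loss + lam * np.sign(gap) * g_gap

def train(lam, steps=500, lr=0.5):
    w = np.zeros(d)
    for _ in range(steps):
        w -= lr * grads(w, lam)
    return w

for lam in (0.0, 2.0):
    w = train(lam)
    p = sigmoid(X @ w)
    acc = ((p > 0.5) == y).mean()
    gap = p[group == 1].mean() - p[group == 0].mean()
    print(f"lambda={lam}: accuracy={acc:.3f}, parity gap={gap:+.3f}")
# Raising lambda shrinks the gap between the groups' positive-prediction
# rates, typically at some cost in accuracy -- the tradeoff described above.
```

A fixed penalty weight is the simplest stand-in here; in practice a hard fairness constraint is often handled with Lagrangian or game-theoretic methods instead.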

Asked by an audience member what can be done to correct inaccurate information about a user, Kearns and Roth agreed that the answer is greater transparency. Giving users the opportunity and capacity to view the information fed into algorithms would let them know what the companies see.