
Government Needs to Step Up and Regulate AI Algorithms, Argue Authors at Brookings Institution


Screenshot of the moderator (left) with Michael Kearns and Aaron Roth at the Brookings Institution

WASHINGTON, January 14, 2020 – Artificial intelligence algorithms need to be regulated to combat racial and gender biases, said Michael Kearns and Aaron Roth in a Tuesday presentation at the Brookings Institution.

Kearns, co-author of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, said replacing bias-prone humans with computers does not solve racism. He gave the example of biased law enforcement; if police are engaging in racist profiling, then they are feeding racist data into the system.

Indeed, eliminating explicit bias from an algorithm does not necessarily mean the model's outcomes will be free of racial disparities.

Roth, his co-author, said companies already have the tools they need to minimize racial and gender biases. For example, racial bias against African Americans in one health product arose because the company lacked the data it needed.

Roth said the company could have improved fairness by gathering better data and using a better-suited proxy for the data it needed.

There is no single definition of fairness, and improving fairness comes at a cost. For example, Kearns said the health company in the example above would have had to spend money collecting better data if it was not readily available.

Regulation of AI is needed to create a "level playing field" with big technology companies, Kearns said. Regulators need to think more like the giants they regulate and develop the skills to audit algorithms, both authors said.

In response to the moderator asking what data scientists should do to enforce fairness, Kearns said small changes could yield improvement: data scientists can treat fairness as a constrained optimization problem, maximizing a model's accuracy subject to explicit fairness constraints. Companies have the ability to make these adjustments; what is lacking is the will to do so.
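To make that idea concrete, here is a minimal sketch in Python of fairness as constrained optimization, assuming a logistic-regression classifier with a demographic-parity penalty added to its training loss. The synthetic data, group labels, and penalty weight are hypothetical illustrations, not anything presented at the event.

# A minimal sketch of fairness-constrained training: logistic regression
# whose loss is penalized by the squared gap between the average scores
# the model assigns to two groups. Data and penalty weight are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: features X, labels y, binary group attribute g.
n, d = 1000, 5
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)              # protected-group membership
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
lam = 5.0                                   # weight on the fairness penalty

for _ in range(2000):
    p = sigmoid(X @ w)
    # Gradient of the ordinary logistic loss (pushes for accuracy).
    grad_loss = X.T @ (p - y) / n
    # Demographic-parity gap: difference in mean predicted score by group.
    gap = p[g == 1].mean() - p[g == 0].mean()
    # Gradient of the gap with respect to w, using dp/dw = x * p * (1 - p).
    dgap = (X[g == 1].T @ (p[g == 1] * (1 - p[g == 1]))) / (g == 1).sum() \
         - (X[g == 0].T @ (p[g == 0] * (1 - p[g == 0]))) / (g == 0).sum()
    # Combined gradient: accuracy term plus penalty on the squared gap.
    w -= 0.1 * (grad_loss + lam * 2 * gap * dgap)

p = sigmoid(X @ w)
print("parity gap:", p[g == 1].mean() - p[g == 0].mean())

Raising the penalty weight shrinks the gap between groups but typically lowers accuracy, which is the trade-off Kearns described: fairness can be engineered in, at some cost.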

Asked by an audience member what can be done to correct inaccurate information about a user, Kearns and Roth agreed that more transparency is needed.

Being able to view the information fed into the algorithms would let users know what the companies see.

Adrienne Patton was a Reporter for Broadband Breakfast. She studied English rhetoric and writing at Brigham Young University in Provo, Utah. She grew up in a household of journalists in South Florida. Her father, the late Robes Patton, was a sports writer for the Sun-Sentinel who covered the Miami Heat and for whom the press lounge in the American Airlines Arena is named.
