Artificial Intelligence

Biased Artificial Intelligence has Sinister Consequences for Marginalized Communities, Argue Panelists

Adrienne Patton


Photo by Mike MacKenzie of VPNsrus.com, used with permission

WASHINGTON, February 13, 2020 – Biased artificial intelligence poses obstacles for marginalized communities when trying to access financial services like applying for a mortgage loan, said panelists speaking before the House Committee on Financial Services.

In a statement before the committee on Wednesday, privacy and AI advisor Bärí A. Williams wrote, “Data sets in financial services are used to determine home ownership and mortgage, savings and student loan rates; the outcomes of credit card and loan applications; credit scores and credit worthiness, and insurance policy terms.”

In practice, biased AI could mean that “black homeowners were confined to specific areas of a city and that their credit worthiness led to higher interest rates,” Williams said.

Rayid Ghani, of the Machine Learning Department at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, said that it is not enough to create an equitable AI. Rather, there needs to be “equity across the entire decision-making process.”

“Machine bias is not inevitable, nor is it final,” concurred Brookings Institution Fellow Makada Henry-Nickie.

“This bias though, is not benign. AI has enormous consequences for racial, gender, and sexual minorities,” said Henry-Nickie.

University of Pennsylvania Professor Michael Kearns said biased AI is “generally not the result of human malfeasance, such as racist or incompetent software developers.”

However, Williams argued that if AI is being fed "historical data," it's already biased.
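Williams's point can be made concrete with a toy example. The sketch below is not drawn from the hearing testimony; it uses invented data and assumed variable names to show how a simple scikit-learn classifier trained on hypothetical historical loan decisions, in which one group was approved less often at the same credit score, reproduces that disparity in its own predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)      # hypothetical protected attribute (0 or 1)
score = rng.normal(650, 50, n)     # identical credit-score distribution for both groups

# Hypothetical historical decisions: the same score cutoff for everyone,
# but group 1 applicants were additionally rejected 30 percent of the time.
approved = (score > 640) & ~((group == 1) & (rng.random(n) < 0.3))

X = np.column_stack([score, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# The trained model predicts lower approval probability for group 1, even
# though creditworthiness was generated identically for both groups.
probs = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted approval = {probs[group == g].mean():.2f}")
```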

To create an equitable AI system, Ghani outlined steps for building equity into the construction of the AI itself (a minimal sketch of the kind of bias check his first step describes follows this list). He suggested:

  • “Detecting biases in intermediate/iterative versions of the system.”
  • “Understanding the root causes of the biases.”
  • “Improving the system by reducing the biases (if possible) or selecting tradeoffs across competing objectives.”
  • “Mitigating the impact (and coming up with an overall mitigation plan) of the residual biases of the system.”
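As one hypothetical illustration of the first step, a team could compare an intermediate model's approval rates across demographic groups. The sketch below uses pandas with invented data; the column names, the numbers, and the 0.8 "four-fifths rule" threshold are illustrative assumptions, not anything presented at the hearing.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Ratio of each group's approval rate to the highest group's rate.

    A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical predictions from an intermediate version of a loan model.
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratios = disparate_impact(preds, "group", "approved")
flagged = ratios[ratios < 0.8]  # groups falling below the four-fifths rule
print(ratios)
print("Flagged groups:", list(flagged.index))
```

In practice, a check like this would run on each iterative version of the system, feeding the root-cause analysis Ghani describes in his second step.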

Adrienne Patton was a reporter for Broadband Breakfast. She studied English rhetoric and writing at Brigham Young University in Provo, Utah, and grew up in a household of journalists in South Florida. Her father, the late Robes Patton, was a sportswriter for the Sun-Sentinel who covered the Miami Heat and for whom the press lounge in the AmericanAirlines Arena is named.
