Biased Artificial Intelligence has Sinister Consequences for Marginalized Communities, Argue Panelists
Photo by Mike MacKenzie of VPNsrus.com used with permission

WASHINGTON, February 13, 2020 – Biased artificial intelligence poses obstacles for marginalized communities when trying to access financial services like applying for a mortgage loan, said panelists speaking before the House Committee on Financial Services.

In a statement before the committee on Wednesday, privacy and AI advisor Bärí A. Williams wrote, “Data sets in financial services are used to determine home ownership and mortgage, savings and student loan rates; the outcomes of credit card and loan applications; credit scores and credit worthiness, and insurance policy terms.”

In practice, biased AI could mean that “black homeowners were confined to specific areas of a city and that their credit worthiness led to higher interest rates,” Williams said.

Rayid Ghani, of the Machine Learning Department at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, said that it is not enough to create an equitable AI. Rather, there needs to be “equity across the entire decision-making process.”

“Machine bias is not inevitable, nor is it final,” concurred Brookings Institution Fellow Makada Henry-Nickie.

“This bias, though, is not benign. AI has enormous consequences for racial, gender, and sexual minorities,” said Henry-Nickie.

University of Pennsylvania Professor Michael Kearns said biased AI is “generally not the result of human malfeasance, such as racist or incompetent software developers.”

However, Williams argued that if an AI system is being fed “historical data,” it’s already biased.
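
A hypothetical sketch can make Williams’ point concrete: if the training labels encode past discrimination, a model learns to reproduce it even when the legitimate signal is identical across groups. Everything here, including the group labels, the income feature, and the approval rates, is invented for illustration and is not drawn from the testimony.

```python
# Illustrative sketch (not from the hearing): a model trained on
# historically biased loan decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = majority, 1 = marginalized (hypothetical)
income = rng.normal(50, 10, n)       # creditworthiness proxy, identical for both groups

# Historical approvals: same income distribution for both groups, but past
# lenders approved the marginalized group less often. The bias lives in the labels.
approve_prob = 1 / (1 + np.exp(-(income - 50) / 5)) * np.where(group == 1, 0.6, 1.0)
approved = rng.random(n) < approve_prob

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {preds[group == g].mean():.2f}")
# The learned model approves group 1 at a visibly lower rate, even though
# income, the only legitimate signal, is distributed identically.
```

Running the sketch shows the model approving the marginalized group at a markedly lower rate despite identical incomes; the bias was inherited from the historical labels, not the applicants.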

To create a more equitable AI system, Ghani argued, equity must be built into the actual construction of the AI (a rough sketch of the bias-detection step follows the list). He suggested:

  • “Detecting biases in intermediate/iterative versions of the system.”
  • “Understanding the root causes of the biases.”
  • “Improving the system by reducing the biases (if possible) or selecting tradeoffs across competing objectives.”
  • “Mitigating the impact (and coming up with an overall mitigation plan) of the residual biases of the system.”
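
As a rough illustration of the “detecting biases” step, the Python sketch below computes per-group approval rates and a disparate-impact ratio for a system’s predictions. The function name, the inputs, and the 80 percent “four-fifths rule” threshold are assumptions chosen for illustration, not methods prescribed in the testimony.

```python
# A minimal sketch of a bias-detection check, assuming the system under
# audit exposes its predictions and a protected-group attribute.
import numpy as np

def demographic_parity_report(preds: np.ndarray, group: np.ndarray) -> dict:
    """Approval rate per group, plus the disparate-impact ratio
    (lowest group rate divided by highest)."""
    rates = {g: preds[group == g].mean() for g in np.unique(group)}
    ratio = min(rates.values()) / max(rates.values())
    # The 0.8 cutoff is the common "four-fifths rule" heuristic.
    return {"rates": rates,
            "disparate_impact_ratio": ratio,
            "passes_four_fifths": ratio >= 0.8}

# Usage with hypothetical audit data:
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])  # 1 = approved
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute
print(demographic_parity_report(preds, group))
# Here group 0 is approved 60% of the time and group 1 only 40%,
# a ratio of 0.67, which fails the four-fifths heuristic.
```

A check like this would run against the “intermediate/iterative versions of the system” Ghani describes, so that disparities are caught and traced to root causes before deployment.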
