Brookings Panelists Emphasize Importance of Addressing Biases in Artificial Intelligence Technology
Jericho Casper
June 19, 2020 — “The potential for discrimination increases with each generation of technology,” said Nicol Turner Lee, a fellow at the Center for Technology Innovation, in a Friday webinar hosted by the Brookings Institution about the intersection of race, artificial intelligence and structural inequalities.
AI systems hold enormous power, panelists said, as they increasingly shape decisions about who eats and who starves, who has housing and who remains homeless, who receives healthcare and who is sent home, and which neighborhoods are policed.
Machine learning algorithms are now routinely used in decisions about housing, healthcare, employment, policing and the administration of public programs. But AI systems are not free of prejudice; they tend to replicate and entrench the discriminatory biases of their creators.
Panelists discussed the matrix of biases that AI applications could potentially perpetuate, as the technology continues to remain largely unregulated.
“When we think about algorithms, they have to come from somewhere,” said Rashawn Ray, a David M. Rubenstein Fellow. “People think that computers are free of biases, but humans created them.”
“It’s critical to ask who is at the table in the design of these models,” said Dariely Rodriguez, director of the Economic Justice Project.
If certain demographics are excluded from the initial algorithmic design process, that exclusion shapes the technology that is ultimately released.
Systemic discrimination has resulted in underrepresentation of Black individuals in supervisory and executive roles, Rodriguez said.
“Only 5 percent of PhDs awarded this year went to Black women and only 3.5 percent went to Black men,” said Fay Cobb Payton, a professor of information technology and business analytics at North Carolina State University.
Panelists agreed that greater representation of diverse voices in the construction of AI technology will improve the chances of an equitable end design.
“I think what often happens is developers approach creating algorithms in a color blind way, thinking if they don’t think about race, it won’t become an issue,” said Ray. “However, in order to create inclusive technologies, we have to center race in the models we create.”
According to Ray, Black men and women are 33 percent less likely than white individuals to trust facial recognition. That mistrust is well founded: one study found that AI facial recognition technology misidentified more than one-third of Black women, compared with 1 percent of white men.
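The gap Ray describes is the kind of disparity a per-group error audit is meant to surface. As a rough illustration only, and not the methodology of the study the panelists cited, the sketch below computes misidentification rates separately for each demographic group from a set of labeled predictions; the group names and sample data are hypothetical.

```python
from collections import defaultdict

def misidentification_rates(records):
    """Compute the share of misidentified faces per demographic group.

    `records` is a list of (group, correct) pairs, where `correct` is True
    when the face recognition system identified the person correctly.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit data, for illustration; not figures from the cited study.
sample = ([("Black women", False)] * 35 + [("Black women", True)] * 65
          + [("white men", False)] * 1 + [("white men", True)] * 99)
print(misidentification_rates(sample))
# {'Black women': 0.35, 'white men': 0.01}
```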
Not only are disparities built into AI technology, but the technology itself is disproportionately weaponized against Black individuals.
Ray highlighted an incident on the University of North Carolina's campus, where protests over the removal of a Confederate statue drew two opposing groups of demonstrators. Police used geofence warrants, which allow them to collect GPS information about devices in a specific area, only against the group calling for the statue's removal.
“All these technologies are being used on protesters right now,” said Ray. “Law enforcement is utilizing vast sources of AI technology to surveil.”
Looking forward, the panelists suggested certain steps to make AI a more democratic technology.
Payton called for the use of “small data” to train algorithms to better understand lived human experiences.
Ray argued that safeguards need to be put in place to ensure that technology is doing what it is programmed to do.
Finally, Rodriguez called for increased transparency and regulation in the development and implementation of such technologies.
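One concrete form the safeguards and transparency measures Ray and Rodriguez describe could take is an automated disparate-impact check run before a model is deployed. The sketch below applies the familiar "four-fifths" rule of thumb to a model's selection rates by group; the threshold, group labels, and data are illustrative assumptions, not a standard the panelists endorsed.

```python
def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns the selection rate per group."""
    counts, selected = {}, {}
    for group, was_selected in decisions:
        counts[group] = counts.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / counts[g] for g in counts}

def passes_four_fifths(decisions, threshold=0.8):
    """Flag disparate impact when any group's selection rate falls below
    `threshold` times the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values())

# Hypothetical outputs from a hiring model, for illustration only.
audit = ([("group_a", True)] * 50 + [("group_a", False)] * 50
         + [("group_b", True)] * 30 + [("group_b", False)] * 70)
print(passes_four_fifths(audit))  # False: 0.30 is below 0.8 * 0.50
```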
“AI can be harnessed for good — we need to create AI that is equitable, fair, and inclusive,” Lee concluded.