WASHINGTON, January 14, 2020 - Artificial intelligence algorithms need to be regulated to combat racial and gender biases, said Michael Kearns and Aaron Roth in a Tuesday presentation at the Brookings Institution.
Kearns, co-author of The Ethical Algorithm: The Science of Socially Aware Algorithm Design, said replacing bias-prone humans with computers does not solve racism. He gave the example of biased law enforcement; if police are engaging in racist profiling, then they are feeding racist data into the system.
Indeed, eliminating explicit bias from an algorithm does not guarantee that the model's outcomes will be free of racial disparities.
Roth, his co-author, said companies already have the tools they need to minimize racial and gender biases. For example, a racial bias against African Americans in a health product arose because the company did not have the data it needed.
Roth said the company could have improved fairness by gathering better data and using a better-suited proxy for the data it needed.
There is no single definition of fairness, and improving fairness comes at a cost. Kearns noted, for example, that the health company above would have had to spend money collecting better data if it was not readily available.
Regulation of AI is needed in order to create a “level playing field” with big technology companies, he said. Regulators need to think more like the giants they are regulating, and regulators need better skills to audit algorithms, both authors said.
Asked by the moderator what data scientists should do to enforce fairness, Kearns said small changes could yield improvement, such as focusing on constrained optimization problems. Companies have the ability to make these adjustments; it comes down to whether they have the will to do so.
Asked by an audience member what can be done to correct inaccurate information about a user, Kearns and Roth agreed on the need for more transparency. Giving users the opportunity and capacity to view the information fed into algorithms would let them know what the companies see.