WASHINGTON, February 20, 2020 – Measures designed to assess the likelihood of an individual's recidivism, and consequently to decide whether a defendant can be released before trial, are ineffective.
That was the conclusion of panelists at a Brookings Institution event titled “AI, Predictive Analytics, and Criminal Justice.” They generally agreed that “we have really poorly designed measures” of assessment.
Panelists uniformly criticized new technologies, designed by companies hired by governments, that assess recidivism probabilities.
These algorithms take into account personal factors such as the number and severity of prior arrests.
Sakira Cook, director of the Justice Reform Program at The Leadership Conference on Civil and Human Rights, attacked the fairness of these tools in three ways, disputing the claim that they aid law enforcement in making decisions regarding bail. First, she outlined the distinction between “crime data,” the term often used by industry insiders, and “arrest data,” the more accurate term, since the datasets include arrests by law enforcement that never lead to conviction.
Second, Cook criticized the lack of transparency surrounding these tools. “Certain government agreements” with licensors prevent researchers or activists from accessing the proprietary data without signing a non-disclosure agreement. “These tools are not transparent,” contended Cook.
Third, and more broadly, Cook criticized AI tools by attacking the foundation they rest on: the supposed legitimacy of U.S. criminal law. “We agree that we have to change the fundamental laws of this country,” said Cook, referencing the disproportionate number of Black men who are arrested and incarcerated in the U.S.
Faye Taxman, Ph.D., professor at George Mason University, agreed with the notion of a major judicial review. However, she made a point to defend the idea of these AI-based tools.
The tools’ biggest problems, Taxman asserted, are the variables being fed into the algorithms.
She gave the hypothetical example of a 30-year-old who is identified by the technology as having a “drug problem” for smoking marijuana in high school. These “lifetime variables” are among several flawed data points that need to be edited out of future generations of tools.
Taxman also emphasized the value of having tools at all, noting that the first generation of “tools” consisted of prison psychologists in the 1920s who judged the likelihood of inmate recidivism based on personal hunches and flimsy science.
“Do we need instruments at all? As a scientist, I say we need instruments,” said Taxman.