Criminal Justice Reform Advocates Agree that Current AI Assessment Tools are Garbage, But Differ on How to Proceed
David Jelke
WASHINGTON, February 20, 2020 – Measures designed to assess the likelihood of an individual's recidivism, and consequently to decide whether a defendant can be released before trial, are ineffective.
That was the conclusion of panelists at a Brookings Institution event titled “AI, Predictive Analytics, and Criminal Justice.” They generally agreed that “we have really poorly designed measures” of assessment.
All of the panelists took turns criticizing new technologies designed by companies hired by the government to assess recidivism probabilities.
These algorithms took into account personal factors such as the number and severity of prior arrests.
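For illustration only, here is a minimal sketch of how such a point-based risk score might weigh those factors. The factor names, weights, and thresholds are hypothetical assumptions, not any vendor's actual model:

```python
# Toy sketch of a point-based pretrial risk score.
# Factor names, weights, and thresholds are hypothetical illustrations,
# not any real vendor's model.

def risk_score(prior_arrests: int, most_severe_charge: str, age: int) -> int:
    """Return a toy recidivism risk score from weighted personal factors."""
    severity_points = {"misdemeanor": 1, "felony": 3, "violent felony": 5}
    score = min(prior_arrests, 10)                 # cap the arrest count's influence
    score += severity_points.get(most_severe_charge, 0)
    if age < 25:                                   # youth is often weighted as higher risk
        score += 2
    return score

def release_recommendation(score: int) -> str:
    """Map a score to a coarse pretrial recommendation."""
    if score <= 3:
        return "release"
    if score <= 8:
        return "release with supervision"
    return "detain pending hearing"

print(release_recommendation(risk_score(3, "felony", 30)))
# -> "release with supervision"
```

Even in a toy model like this, the choice of inputs and cutoffs drives the outcome, which is precisely where the panelists directed their criticism.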
Sakira Cook, director of the Justice Reform Program at The Leadership Conference on Civil and Human Rights, attacked the fairness of these tools on three fronts, disputing the claim that they help law enforcement make decisions about bail. First, she drew a distinction between "crime data," the term often used by industry insiders, and "arrest data," the more accurate term, since the datasets include arrests by law enforcement that never lead to conviction.
Second, Cook criticized the lack of transparency surrounding these tools. "Certain government agreements" with licensors prevent researchers and activists from accessing the proprietary data without signing a non-disclosure agreement. "These tools are not transparent," contended Cook.
Third, and more broadly, Cook challenged AI tools by attacking the foundation they rest on: the supposed legitimacy of U.S. criminal law. "We agree that we have to change the fundamental laws of this country," said Cook, referencing the disproportionate number of Black men who are arrested and incarcerated in the U.S.
Faye Taxman, Ph.D., a professor at George Mason University, agreed with the notion of a major review of the country's criminal laws. However, she made a point of defending the idea behind these AI-based tools.
The tools’ biggest problems, Taxman asserted, are the variables being fed into the algorithms.
She gave the hypothetical example of a 30-year-old who is flagged by the technology as having a "drug problem" for smoking marijuana in high school. These "lifetime variables" are among several flawed data points that need to be edited out of future generations of tools, as sketched below.
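A hypothetical sketch of how such stale "lifetime variables" might be filtered out of a defendant's record before scoring. The field names and the ten-year cutoff are illustrative assumptions:

```python
from datetime import date, timedelta

# Hypothetical sketch: drop "lifetime variables" (data points observed
# long ago, e.g. marijuana use in high school) before scoring.
# Field names and the 10-year cutoff are illustrative assumptions.

MAX_AGE_YEARS = 10  # keep only observations from roughly the last decade

def filter_stale_factors(factors: list[dict], today: date) -> list[dict]:
    """Keep only factors observed within MAX_AGE_YEARS of today."""
    cutoff = today - timedelta(days=365 * MAX_AGE_YEARS)
    return [f for f in factors if f["observed_on"] >= cutoff]

record = [
    {"name": "marijuana_use", "observed_on": date(2005, 5, 1)},   # high school; stale
    {"name": "prior_arrest",  "observed_on": date(2018, 3, 12)},  # recent; kept
]
print(filter_stale_factors(record, date(2020, 2, 20)))
# -> only the 2018 prior_arrest remains
```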
Taxman also emphasized the importance of having instruments at all, noting that the first generation of "tools" consisted of prison psychologists in the 1920s who judged the likelihood of inmate recidivism based on personal hunches and flimsy science.
“Do we need instruments at all? As a scientist, I say we need instruments,” said Taxman.