CCC Releases Opinion Piece in the Communications of the ACM on Responsible Use of AI in the Criminal Justice System


Artificial intelligence (AI) is rapidly becoming a part of our daily lives, from drafting emails to recommending your next binge-watch. But what happens when AI is used to make decisions that profoundly impact a person’s life and liberty? CCC’s new CACM opinion piece, “Concerning the Responsible Use of AI in the U.S. Criminal Justice System,” explores just that.


The piece grew out of CCC’s response to the National Institute of Justice’s 2024 Request for Information and was authored by a group of distinguished computer scientists and researchers, including many CCC council members and individuals involved with other CRA committees.

The opinion piece begins by emphasizing the many potential benefits of AI in the criminal justice system. AI could improve efficiency by analyzing large amounts of data (e.g., criminal records, evidence, and legal documents) and surfacing connections that humans might overlook because of the sheer volume of material. AI could also free members of law enforcement and the judiciary to focus on more strategic and interesting work. Finally, although it is well known that AI can perpetuate biases present in its training data, it also has the potential to reduce bias in bail and sentencing decisions if certain factors, such as race or income, are excluded from its inputs. However, the potential for benefit does not negate or outweigh the potential for harm.

The Dangers of Opaque Systems and a Call for Transparency and Accountability


The authors stress that an AI system must not be a “black box.” Opaque, proprietary AI systems might be acceptable for movie recommendations or speech translation, but they have no place in a system that values individual rights and due process. For defendants, judges, and attorneys to have confidence in the system, they must understand how the AI arrives at its conclusions. This means knowing what data it’s trained on, where that data comes from, and the logic it uses to produce recommendations. Without this transparency, the AI becomes an “accuser the defendant cannot face, a witness they cannot cross-examine.”


The authors also highlight the importance of educating legal professionals on how these systems work, so they can properly contest or support AI-generated findings. Judges who are unfamiliar with how AI systems make recommendations may interpret a system’s output as fact, rather than as an approximate evaluation of how individuals who share certain factors, such as the number of prior convictions or failures to appear, typically behave. Judges should remain open to arguments from the defense and prosecution, which often include valuable information not considered by an AI system.


The Need for Standardization and Regular Evaluation


The authors point out that many AI tools fail to distinguish between felony and misdemeanor convictions when assessing risk. Many tools also report only abstract labels, such as “high risk,” or sometimes just a color (e.g., green, yellow, red), rather than specific, quantitative risk estimates. The authors note that research has shown humans tend to overestimate the probability of negative events when given categorical labels, making quantitative estimates both more useful and more accountable. Labels and colors also fail to convey how certain the model is of its estimate: if a defendant has an unusual history and the model has very little relevant data to draw on, a rating of “high” or “low” risk does not tell a judge how reliable that prediction is.


The authors also strongly emphasize the importance of deploying these systems carefully in new jurisdictions and of testing and evaluating them regularly. Because of differing demographics and policing practices, a model that is fair and effective in one jurisdiction may produce biased results when deployed in another. A system’s performance can also change over time as new diversion programs take effect or demographics shift. The authors point to California law, which requires pretrial risk assessment algorithms to be audited every three years, and suggest that other states adopt similar regulations requiring periodic testing.

Looking Ahead


The article concludes with a pointed reminder: the law is a dynamic, human-centered system built on thousands of years of precedent. While AI can be a powerful tool, we must not sacrifice core values like accountability and fairness for the sake of technological advancement.


Read the full opinion column on the Communications of the ACM website.
