Tuesday, July 2, 2024

Can algorithms help judges make decisions?

A new paper argues algorithms would be more accurate than judges at predicting whether arrestees will show up for their first court appearance.

(CN) — Can algorithms improve human decision-making by taking out bias-prone factors like age, race and gender? It's not a particularly new debate, but it's one that's gained renewed attention lately amid a new generation of artificial intelligence and large language models.

Now, a new paper — published Tuesday in the Quarterly Journal of Economics — looks at New York City judges' decisions regarding pretrial detention in an attempt to ascertain whether machine learning could help them make better calls.

"At least 20% of judges make systematic prediction mistakes about misconduct risk given defendant characteristics," Ashesh Rambachan estimates in his paper. Those characteristics include factors like race that could lead to biased outcomes.

In other words: for at least one in five judges, a cold, hard algorithm would be more accurate at determining whether someone returns to court. Rambachan is careful to stress, though, that depending on the goals of the criminal justice system, such an outcome would not necessarily be better.

Rambachan is an assistant professor of economics at MIT. For his study, he analyzed 758,027 judicial decisions on pretrial detention in New York City.

"I find that the decisions of at least 32% of judges in New York City are inconsistent with expected utility maximization," Rambachan writes. Put another way, at least 32% of these judges might find it helpful to incorporate machine learning into their bond decisions.

The decision of whether to release an arrestee is often based on a variety of factors, including the risk that they may not show up for the next scheduled court appearance or may commit another crime.

Those risks are balanced against another set of risks — those faced by arrestees themselves. An arrestee may be innocent and could face serious harm from even a short stay in jail. For example, the arrestee might be fired, forced to stop taking medication or evicted for nonpayment of rent while they're locked up.

How much weight should be given to these different risks is the subject of continued debate all over the world. Rambachan's paper, however, focuses on one narrow metric: whether an arrestee who is released shows up for their next court appearance.
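The prediction behind that metric is a standard supervised-learning task. Below is a minimal sketch using synthetic data and scikit-learn; the features, data and model are illustrative assumptions, not what the paper or New York's courts actually use.

```python
# Illustrative failure-to-appear (FTA) classifier on synthetic data.
# Feature names and data are invented; this is not the paper's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic case features: prior failures to appear, prior arrests,
# and top-charge severity (0 = minor, 2 = serious).
X = np.column_stack([
    rng.poisson(0.5, n),    # prior FTAs
    rng.poisson(2.0, n),    # prior arrests
    rng.integers(0, 3, n),  # charge severity
])

# Synthetic outcome: probability of FTA rises with prior record.
p_fta = 1 / (1 + np.exp(-(-2.0 + 1.2 * X[:, 0] + 0.1 * X[:, 1])))
y = rng.random(n) < p_fta

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate how well the predicted risk scores rank defendants.
scores = model.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.2f}")
```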

Cathy O'Neil is a data scientist and author of "Weapons of Math Destruction." She says algorithms are designed to answer simple questions but often ignore the more important ones.

"The original hype around these algorithms is to reduce racism and bias," O'Neil said — but "the studies I’ve read do not make the case that that’s happened." Instead, when judges use algorithms as guides, "typically these algorithms tell judges to be more lenient," she added. Troublingly, "there are studies that show that judges only listen to algorithms for white defendants."

Ultimately, O'Neil says the debate over when arrestees should be jailed or released is a philosophical and political one. It's hard to write an algorithm for something when there isn't already consensus among humans.

"We need to have a conversation about these things," O'Neil says. "An algorithm is the opposite of a conversation."

In his paper, Rambachan admits that fully replacing judges with algorithm-based decision-making "has ambiguous effects that depend on the policy maker’s objective."

Rather than advocating for hyperintelligent machine judges, Rambachan said his goal was simply to develop "an econometric framework for testing whether a decision maker makes systematic prediction mistakes in high-stakes settings like pretrial release" hearings.
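As rough intuition for what such a test can look for, consider one crude observable implication (a simplification for illustration, not Rambachan's actual estimator): if a judge ranks risk correctly across groups of defendants, the groups the judge releases more readily should not show higher misconduct rates among those released. All numbers below are invented.

```python
# Crude misranking check across defendant groups for one judge.
# A simplification for intuition only -- not Rambachan's econometric
# test -- and all numbers are invented.
import pandas as pd

df = pd.DataFrame({
    "group":        ["A", "B", "C"],
    "release_rate": [0.90, 0.70, 0.40],  # share of group the judge released
    "fta_rate":     [0.25, 0.10, 0.05],  # FTA rate among those released
})

# If the judge ranks risk correctly, groups released more readily should
# be lower risk, so FTA rates should rise as release rates fall.
df = df.sort_values("release_rate", ascending=False)
misranked = not df["fta_rate"].is_monotonic_increasing
print(f"Possible misranking across groups: {misranked}")  # True here
```

Real data makes this far harder: outcomes are only observed for the defendants judges chose to release, a selection problem that is a large part of why the paper needs a formal econometric framework rather than a comparison like this one.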

The paper comes at a time when courts — and indeed, most professions — are bracing to be upended in some way by artificial intelligence and large language models. Earlier this month, the Judicial Council of California announced it would form a task force to consider the use of AI in the state's courts. One judge noted that AI might be used to improve court administration, enhance research and perhaps even "reduce subconscious human biases."

In his paper, Rambachan acknowledges the coming AI revolution. "These foundational questions have renewed policy relevance," he writes, "as machine learning–based models increasingly replace or inform decision makers in criminal justice, health care, labor markets, and consumer finance."

Follow @hillelaron