The Bail-Reform Tool That Activists Want Abolished

They worry that algorithms used to determine a person’s flight risk will only perpetuate racial discrimination.

A sign advertises a Los Angeles bail-bond company. (Mario Tama / Getty)

When Tyler Hubbard was arrested last year in Ocean Township, New Jersey, he was 22 years old. He, in his own words, “got into a big fight” with his stepfather, “and it went somewhere it shouldn’t have.” He was arrested around midnight, charged with second-degree aggravated assault, and taken to the county jail. Hubbard waited there for roughly a day before he, along with about 10 others, was introduced to a pretrial case worker. “[That was when] they told us all our scores,” Hubbard remembered. “I was like a 2 and a 3, I believe.”

These scores were supposed to predict what Hubbard would do next if he were released from jail. They are a key component of New Jersey’s new pretrial system, which was revamped in early 2017 to essentially eliminate cash bail. In the first year, the reforms cut the state’s pretrial jail population by some 20 percent.

Hubbard said his attorney told him that under the previous regime, his bail would have been set around $50,000, far above what Hubbard could afford. He would have sat in jail until his court date, costing him his job as an automotive technician, which he had only started two weeks before. Instead, Hubbard was evaluated by the Public Safety Assessment (PSA), an algorithm that generated two scores: The first predicted whether he would engage in new criminal activity; the second, whether he would fail to appear in court.

After consulting the scores in a pretrial hearing, a judge decided to release Hubbard with monitoring. He was able to keep his job. “Relieved would be an understatement,” he told me, describing how he’d felt. “It was fantastic.”

As calls to abolish cash bail grow louder across the country—from politicians, activists, and even some in corporate America—an increasing number of jurisdictions are transitioning away from a monetary system to one dependent on risk-assessment algorithms like the PSA. California, a state that has historically set some of the highest bail in the country, passed a bill doing so last month. Risk algorithms can be used at various junctures—from sentencing to parole hearings—but their use pretrial has garnered the most attention. Some have heralded these algorithms as agents of change, tools that help abolish a system that, in the words of Senator Bernie Sanders, creates “modern-day debtors’ prisons.” New Jersey’s reforms have seemed particularly successful: The Pretrial Justice Institute, a national nonprofit that studies bail practices, has given the state an A, a grade no other state received.

But there’s a problem: Even as the algorithms are praised for minimizing cash bail and the inequality it creates, an increasing number of civil-rights activists worry they perpetuate racial disparities within the criminal-justice system. In July, more than 100 civil-rights groups, including the ACLU, signed a statement of concern urging jurisdictions to stop using the tools. In the same missive, they outlined how to properly implement the algorithms if states do decide to use them. As more states turn a critical eye toward their own systems—and face public pressure to avoid certain reforms—they may have to decide between implementing a technology that could be corrupted, sticking with a (reformed) cash system, or pursuing a new form of justice that doesn’t depend on either.

Activists argue that the algorithms are fundamentally flawed because the data they use to predict a person’s risk could be influenced by structural racism: The number of times someone has been convicted of a crime, for example, or their failure to appear in court could both be affected by racial bias. As a result, they say, any bias that’s baked into the data is replicated by the algorithms, but with the veneer of scientific objectivity.

“My concern [about using the tools] is that what you could have is essentially racial profiling 2.0,” said Vincent Southerland, the executive director of the Center on Race, Inequality, and the Law at the New York University School of Law, which signed onto the statement. “We’re forecasting what some individuals may do based on what groups they’re associated with have done in the past.” Some activists also worry that even in jurisdictions that have adopted the tools in good faith, judges may not follow their suggestions in setting bail or other pretrial conditions, and the new systems may go unscrutinized because communities assume any problems have been fixed.

The push against using algorithms pretrial is relatively new, as bail reform has only recently gained steam. There are many different kinds of algorithms in use: Forty jurisdictions use the PSA, others use algorithms created by state governments, and still others employ systems developed by for-profit companies or nonprofits.

No matter the algorithms’ origin, activists have questioned the way scores are generated. The PSA, for one, uses a database of some 750,000 cases from more than 300 jurisdictions to identify risk factors. To determine a person’s score, it requires specific information about them: their age, the charge in question, any other pending charges, any prior misdemeanor or felony convictions, if any of those convictions were violent, if they’ve ever failed to appear in court, and any prior prison sentences. Notably, it doesn’t require a person’s race—or their gender, education level, economic status, or neighborhood.
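The article doesn’t reveal the PSA’s actual weights or cutoffs, but the factor list above suggests the general shape of a points-based tool: each answer adds points, and the total is mapped onto a small scale. The Python sketch below is purely illustrative—the input fields mirror the factors named in the preceding paragraph, while the point values, thresholds, and function names are hypothetical placeholders, not the PSA’s real formula, and only the new-criminal-activity score is shown.

```python
from dataclasses import dataclass


@dataclass
class Defendant:
    """Inputs the article says the PSA asks for; race, gender, income,
    education level, and neighborhood are deliberately excluded."""
    age: int
    pending_charge: bool              # another charge pending at arrest
    prior_misdemeanor_conviction: bool
    prior_felony_conviction: bool
    prior_violent_conviction: bool
    prior_failure_to_appear: bool
    prior_prison_sentence: bool


def new_criminal_activity_score(d: Defendant) -> int:
    """Return a risk score for new criminal activity on a small scale.

    The point values and cutoffs below are hypothetical, chosen only to
    illustrate a points-then-scale design; they are not the PSA's.
    """
    points = 0
    if d.age < 23:
        points += 2
    if d.pending_charge:
        points += 3
    if d.prior_misdemeanor_conviction:
        points += 1
    if d.prior_felony_conviction:
        points += 1
    if d.prior_violent_conviction:
        points += 2
    if d.prior_failure_to_appear:
        points += 1
    if d.prior_prison_sentence:
        points += 2
    # Map the raw point total onto an illustrative 1-6 scale.
    cutoffs = [0, 2, 4, 6, 8, 10]     # hypothetical thresholds
    return max(i + 1 for i, c in enumerate(cutoffs) if points >= c)


# A 22-year-old with no prior record lands at the low end of the scale,
# roughly in line with the "2 and a 3" Hubbard describes above.
hubbard_like = Defendant(
    age=22, pending_charge=False, prior_misdemeanor_conviction=False,
    prior_felony_conviction=False, prior_violent_conviction=False,
    prior_failure_to_appear=False, prior_prison_sentence=False)
print(new_criminal_activity_score(hubbard_like))  # -> 2
```

Even in a sketch like this, the critique described above is visible: every input is drawn from a person’s record of contact with the justice system, so any bias in how that record was produced flows directly into the score.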

“We think [the concerns expressed in the July statement] are misplaced,” said Jeremy Travis, the executive vice president of criminal justice at the Laura and John Arnold Foundation, which created the PSA. He said he agrees with the groups’ worries about structural racism, but noted that algorithms don’t preclude other improvements to the pretrial system. “Risk assessment is not an impediment to reform,” Travis said. “It can be an avenue to reform.”

The inner workings of other algorithms are less clear, though—even intentionally so. In response to university researchers’ requests for details, many jurisdictions have refused to turn them over, claiming the information is owned by private companies. Perhaps the best-known risk-assessment tool is COMPAS, which drew attention in 2016 after ProPublica published an article claiming its algorithm was biased against black defendants. The for-profit company that created COMPAS denied those allegations, but as The Washington Post wrote, it “refused to disclose the details of its proprietary algorithm, making it impossible to fully assess the extent to which [its technology] may be unfair, however inadvertently.”

The algorithms’ scores aren’t the only thing that concerns activists: They also worry about how actors within the criminal-justice system interpret them. “Predictions serve some sort of purpose, and the purpose they serve is to advise a judge on what’s supposed to be done,” said Logan Koepke, a senior policy analyst at the nonprofit Upturn who studies how scores are implemented.

He pointed to a 2017 study from George Mason University that examined Kentucky’s pretrial risk-assessment system, which was made mandatory in 2011. It found that the algorithm has led to significant changes in bail-setting practices, but only a small increase in actual pretrial release. Furthermore, the study showed that the changes eroded over time, as judges returned to their previous habits.

Koepke’s research has found that pretrial tools might overestimate risk if the data they use are flawed. He gave the example of New York City, which announced in 2017 that it would use data spanning from 2009 to 2015 to redesign its pretrial algorithm. A data set that large usually helps correct for erroneous information, but the city’s arrest practices over part of that period—specifically its stop-and-frisk program—were deemed unconstitutional, suggesting the numbers aren’t a reliable guide. In the July statement, the civil-rights groups outlined measures that jurisdictions adopting risk-assessment tools can take to help compensate for flawed data. Among them: prohibit the algorithms from ever recommending detention, give each defendant a hearing with robust due process, make the tools transparent by revealing how scores are generated, and include community input in their creation.

Activists and researchers alike note there are ways to reform a pretrial system without ever adopting a risk-assessment algorithm, such as ending money bail, restricting pretrial detention, and focusing resources on helping defendants return to court. Sakira Cook, the criminal-justice program director at the Leadership Conference on Civil and Human Rights—the organization that shepherded the July statement from civil-rights groups—gave the example of Philadelphia, which has decriminalized many offenses and focused its reforms on pretrial actors such as prosecutors, judges, and the police. New Jersey also underwent a number of reforms when it implemented its algorithm in 2017.

Other stakeholders in the pretrial world, such as Cherise Fanno Burdeen, the CEO of the Pretrial Justice Institute, don’t believe it’s realistic to completely abolish risk assessment. Burdeen pointed out that for decades, the Supreme Court has required courts to conduct these assessments to determine if someone should be released—they’ve just been doing so subjectively. Judges necessarily deem some people unfit for release, and Burdeen said risk-assessment tools can make those rulings more objective and easier to track for bias.

Civil-rights activists argue that one systemwide change could address many of their concerns about assessment tools and judicial discretion: the presumption of release. In jurisdictions that have adopted that mindset, such as Washington, D.C., prosecutors and judges have to clear a higher bar in order to keep a person in jail.

Attempting sweeping reforms “may serve to complicate the system that we currently have,” Southerland said. “But I think if we’re dealing with people’s liberty and lives, then that complication is really a low cost to pay.”

As for Tyler Hubbard in Ocean Township, the new pretrial system meant his life was minimally disrupted. He pleaded guilty to third-degree aggravated assault and was sentenced to two years’ probation with no jail time. “I’m very thankful for that,” he said.

This article is part of our project “The Presence of Justice,” which is supported by a grant from the John D. and Catherine T. MacArthur Foundation’s Safety and Justice Challenge.

Madeleine Carlisle is an editorial fellow at The Atlantic.