AI Experts Want to End 'Black Box' Algorithms in Government

Researchers at AI Now say algorithms increasingly used by government can be opaque and discriminatory.

The right to due process was inscribed into the US Constitution with a pen. A new report from leading researchers in artificial intelligence cautions that it is now being undermined by computer code.

Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over.

The AI Now report calls for agencies to refrain from using “black box” systems that are opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems that make decisions about them operate, and how those systems have been tested or validated. Such systems are expected to get more complex as technologies such as the machine learning used by tech companies become more widely available.

“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Crawford says. She says it can be possible to disclose information about systems and their performance without disclosing their code, which is sometimes protected intellectual property.

Governments increasingly lean on algorithms and software to make decisions and set priorities. Sometimes, as in the case of setting bail, doing so can make government more equitable. But other algorithms have been found to exhibit bias. ProPublica reported last year that a scoring system used by multiple states in sentencing and bail decisions was biased against black people.

Whatever the ultimate impact, citizens struggle to access information about algorithms with sway over their lives. In June, the Supreme Court declined to review a ruling from Wisconsin’s highest court that denied a defendant’s request to learn the workings of a tool called COMPAS used to set his criminal sentence. A project by legal scholars that used open-records laws to seek information about algorithms and scoring systems used in criminal justice and welfare in 23 states came back largely empty-handed. In some cases, governments had signed agreements with commercial providers restricting disclosure of any information about a system and how exactly it was being used.

AI Now’s call for a rethink of government use of algorithms is one of 10 recommendations in the 37-page report, which surveys recent research on the social consequences of advanced data analytics in areas such as the labor market, socioeconomic inequality, and privacy.

The group also recommends that companies work on tools and processes to identify biases in training data, which have been shown to create software with unsavory tendencies. And the report calls for research and policymaking to ensure the use of automated systems in hiring doesn’t discriminate against individuals or groups. Goldman Sachs and Unilever have used technology from startup HireVue that analyzes the facial expressions and voice of job candidates to advise hiring managers. The startup says its technology can be more objective than humans; Crawford says such technology should be subject to careful testing, with the results made public.

But changes in how governments use algorithms to shape citizens’ lives could be slow to arrive. Ellen Goodman, a law professor at Rutgers who has studied the subject, says many cities and state agencies lack the expertise needed to design their own systems, or properly analyze and explain those brought in from outside.

The AI Now report comes amid other calls for a more considered approach to using algorithms in public life.

On Sunday the UK government released a review that examined how to grow the country’s AI industry. It includes a recommendation that the UK’s data regulator develop a framework for explaining decisions made by AI systems.

On Monday New York’s City Council debated a bill that would require city agencies to publish the source code of algorithms used to target individuals with services, penalties, or police resources.

On Tuesday a European Commission working group on data protection released draft guidelines on automated decision making, including that people should have the right to challenge such decisions. The group’s report cautioned that “automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.” Its guidance will feed into a sweeping new data protection law due to come into force in 2018, known as the GDPR.

It appears unlikely that the US federal government will join these efforts to grapple with the use and effects of algorithms and AI in public life.

In 2016, the Obama administration held a series of workshops around the country on the benefits and risks of artificial intelligence. AI Now cohosted one of them with the White House’s Office of Science and Technology Policy and the National Economic Council. Neither office seems interested in the subject today. The OSTP now has a fraction of the staff it did under the Obama administration. “AI policy is not at the top of the current White House’s agenda,” says Crawford.