The right to due process was inscribed into the US Constitution with a pen. A new report from leading researchers in artificial intelligence cautions that it is now being undermined by computer code.

Public agencies responsible for areas such as criminal justice, health, and welfare increasingly use scoring systems and software to steer or make decisions on life-changing events like granting bail, sentencing, enforcement, and prioritizing services. The report from AI Now, a research institute at NYU that studies the social implications of artificial intelligence, says too many of those systems are opaque to the citizens they hold power over.

The AI Now report calls for agencies to refrain from using what it calls “black box” systems that are opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems that make decisions about them operate and how they have been tested or validated. Such systems are expected to get more complex as technologies such as machine learning, now used widely by tech companies, become more broadly available.

“We should have equivalent due-process protections for algorithmic decisions as for human decisions,” Crawford says. She says it can be possible to disclose information about systems and their performance without disclosing their code, which is sometimes protected intellectual property.

Governments increasingly lean on algorithms and software to make decisions and set priorities. Sometimes, as with tools for setting bail, doing so can make government more equitable. But other algorithms have been found to exhibit bias. ProPublica reported last year that a scoring system used in sentencing and bail by multiple states was biased against black people.

Whatever the ultimate impact, citizens struggle to access information about algorithms with sway over their lives. In June, the Supreme Court declined to review a ruling from Wisconsin’s highest court that denied a defendant’s request to learn the workings of a tool called COMPAS used to set his criminal sentence. A project by legal scholars that used open-records laws to seek information about algorithms and scoring systems used in criminal justice and welfare in 23 states came back largely empty-handed. In some cases, governments had signed agreements with commercial providers restricting disclosure of any information about a system and how exactly it was being used.

AI Now’s call for a rethink of government use of algorithms is one of 10 recommendations in the 37-page report, which surveys recent research on the social consequences of advanced data analytics in areas such as the labor market, socioeconomic inequality, and privacy.

The group also recommends that companies work on tools and processes to identify biases in training data, which have been shown to create software with unsavory tendencies. And the report calls for research and policymaking to ensure the use of automated systems in hiring doesn’t discriminate against individuals or groups. Goldman Sachs and Unilever have used technology from startup HireVue that analyzes the facial expressions and voice of job candidates to advise hiring managers. The startup says its technology can be more objective than humans; Crawford says such technology should be subject to careful testing, with the results made public.

But changes in how governments use algorithms to shape citizens’ lives could be slow to arrive. Ellen Goodman, a law professor at Rutgers who has studied the subject, says many city and state agencies lack the expertise needed to design their own systems, or to properly analyze and explain those brought in from outside.

The AI Now report comes amid other calls for a more considered approach to using algorithms in public life.