AI use in U.S. criminal justice reinforced racialized decision-making: Law Commission of Ontario

Technology’s development needs to include broad involvement by stakeholders, says report

Nye Thomas, executive director of the Law Commission of Ontario.

If Canada decides to implement AI and algorithmic decision-making in the criminal justice system, a study of the experience south of the border shows the development of these systems must involve a wide array of stakeholders, not a few developers working behind closed doors, says Nye Thomas, executive director of the Law Commission of Ontario.

The LCO just released the first of three issue papers on the use of AI and algorithms in the Canadian justice system, The Rise and Fall of AI and Algorithms in American Criminal Justice: Lessons for Canada.

“The experience in other jurisdictions has shown that the use of these tools, whether it's in criminal law, or in administrative law, is really, really, really controversial,” says Thomas.

AI, algorithms and automated decision-making are “expanding rapidly” in justice systems across the globe, the paper says. Both the U.S. and U.K. criminal justice systems are augmented with AI, and while Canada is exploring the use of this technology, it is not yet in operation, says Thomas.

The LCO developed a comprehensive framework for governing AI use in the criminal justice system, says Thomas. The LCO report looked at the U.S. experience to identify necessary preconditions for implementing the technology in Canada. These include certain disclosures and due-process protections, among other safeguards, he says.

“We essentially try to learn lessons from the United States,” Thomas says. “We try to avoid the mistakes that were made there, to give some advice to Canadian policy-makers thinking about these tools about how to do it properly. Or, if they should do it at all. It's not inevitable that these tools will be introduced. But if they are to be introduced, you have to meet certain preconditions, which I believe we lay out thoughtfully in the report.”

In the U.S. experience, AI technology has largely reinforced the problems it was introduced to solve.

AI-based risk-assessment tools were conceived as neither a law-and-order measure nor a cost-cutting tool, says Thomas. The technology was promoted as a way to reduce racism and racialized decision-making in U.S. criminal justice by identifying and unpacking subjective decision-making by police, courts, judges and prosecutors, which had resulted in African Americans being penalized much more heavily than whites, he says.

“The tools were an attempt to mitigate or lessen racial bias in American criminal justice. That was their intent,” says Thomas. “What people found once they began to use these tools and the use of them began to expand, quite broadly, is that there were a lot of problems – problems that they didn't anticipate when they introduced the tools in the first place.”

“There may be a temptation for Canadian policymakers to look at these tools the same way as Americans did, as potentially a way to reduce systemic bias in criminal justice. But if they do, they have to think about the policies, rules and regulations that we recommend in our report.”

The U.S. problems, generally, either related to data or due process, he says.

If the tools are trained with arrest, sentencing and other historical criminal justice data, the data will crystallize generations of racialized policing and racialized U.S. court decisions, says Thomas.
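The mechanism Thomas describes can be sketched with a toy example. The code below is hypothetical and uses invented numbers, not any real tool or dataset: it builds a naive "risk model" that simply learns each group's historical arrest rate, so if one group was historically policed more heavily, the model reproduces that enforcement pattern as "risk" even when underlying behaviour is identical.

```python
# Illustrative sketch only, with synthetic data: how bias baked into
# historical labels propagates into a predictive tool. Groups "A" and "B"
# have identical underlying behaviour, but group B was historically
# arrested at twice the rate, so the labels encode the enforcement
# pattern rather than the behaviour.

from collections import defaultdict

# Synthetic historical records: (group, was_arrested).
historical = (
    [("A", 1)] * 10 + [("A", 0)] * 90   # group A: 10% arrest rate
    + [("B", 1)] * 20 + [("B", 0)] * 80  # group B: 20% arrest rate
)

def train_base_rate_model(records):
    """A naive 'risk model': learn each group's historical arrest rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [arrests, total]
    for group, arrested in records:
        counts[group][0] += arrested
        counts[group][1] += 1
    return {g: arrests / total for g, (arrests, total) in counts.items()}

risk = train_base_rate_model(historical)
print(risk)  # group B scores double group A, purely from enforcement history
```

Real risk-assessment tools are far more elaborate, but the same dynamic applies: a model fit to racialized arrest and sentencing records will treat the resulting rate differences as signal.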

The other problem concerned due process. For example, in bail hearings the algorithmic predictions on a person’s probability of recidivism had the history of racialized decision-making baked in. And once the probabilities were calculated, these predictions would be “very, very difficult to challenge,” says Thomas.
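One way the baked-in bias surfaced in practice, in published analyses of U.S. risk scores, was as unequal error rates between groups: a higher share of one group was wrongly flagged "high risk". The sketch below uses invented numbers to show the computation of a false-positive rate per group; it is an illustration of the metric, not a reproduction of any real tool's figures.

```python
# Illustrative sketch, synthetic numbers: comparing false-positive rates
# -- people flagged "high risk" who did not in fact reoffend -- across
# two groups with the same actual reoffence rate.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    negatives = sum(1 for _, actual in records if not actual)
    return false_positives / negatives

# Hypothetical outcomes: both groups reoffend at the same 50% rate,
# but group B's non-reoffenders are flagged high-risk twice as often.
group_a = ([(True, False)] * 10 + [(False, False)] * 40
           + [(True, True)] * 25 + [(False, True)] * 25)
group_b = ([(True, False)] * 20 + [(False, False)] * 30
           + [(True, True)] * 25 + [(False, True)] * 25)

print(false_positive_rate(group_a))  # 0.2
print(false_positive_rate(group_b))  # 0.4
```

The due-process point follows directly: a defendant handed a score like this has no practical way to see, let alone contest, the group-level error pattern behind it.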

“So they implemented these systems and there weren’t the appropriate protection procedures in place to allow people to protect their rights and to challenge and appeal and to understand the recommendations that were being made by these algorithmic systems,” he says.

The LCO will follow the issue paper with two more. Regulating AI: An International Survey will look at current efforts to regulate AI and algorithms in government decision-making. AI, Algorithms and Government Decision-Making will cover AI and algorithm use in civil and administrative law decision-making, such as determining welfare entitlements, administrative proceedings and government investigations.
