Councils invest $1.3 million in legal-focused AI project OpenJustice

The project is being led by Queen’s Law professor Samuel Dahan

The Natural Sciences and Engineering Research Council and Social Sciences and Humanities Research Council have invested over $1.3 million in legal-focused no-code AI platform OpenJustice.

NSERC has injected $462,160 into the project through its Alliance Advantage and Mitacs Accelerate grants, while SSHRC provided $400,000 through its Insight Grant program. Project partners also contributed funds.

Queen’s Law professor Samuel Dahan leads the project, and the team will apply the investment to improve the reliability, customizability, and accessibility of legal AI models. OpenJustice was developed by the Queen’s Conflict Analytics Lab in 2023 and has been trained on legal systems from Canada, the US, and France.

CAL has teamed up with law schools, courthouse libraries, and self-help centers such as Pro Bono Canada to gather user feedback on improving the platform, while giving users open access to legal information. Dahan and his team have also collaborated with law firms and tech companies to trial the OpenJustice framework in legal practice and legal aid settings.

Miller Thomson LLP will work with CAL on the design and delivery of a customized AI model trained on proprietary data in line with the firm’s use cases, according to Queen’s Law. Compliance and employment law-focused tech company Deel will trial OpenJustice tools in predictive worker classification and in automating the process of retrieving relevant legal insights from case law.

Project collaborator Anton Ovchinnikov will work with the Scotiabank Centre at the Smith School of Business to advance applications in the financial sector. The OpenJustice project team will also work with academics at Stanford and McGill universities.

OpenJustice was made open source during a Legal AI Hackathon hosted by CAL and the Stanford Center for Legal Informatics in February; since then, users have created over 40 custom models.

“It’s one thing to develop AI models to support basic productivity tasks such as redrafting or creating summaries, but it is much more challenging to develop models that can address the essential components of being a lawyer: digging deep into the state of the law, understanding what courts say about a problem, investigating whether there is divergence of interpretation – this is not the type of data you can get from a textbook,” Dahan said in a statement originally published in Queen’s Gazette. “This unique co-creation functionality of OpenJustice will allow experts to build guardrails to ensure that AI models are reliable in terms of technical, ethical and legal boundaries.”

He added that OpenJustice is expected to reduce legal fees and research costs, as well as improve access to justice by increasing the share of litigants able to secure adequate representation.