Creative lawyers will complement artificial intelligence, but our profession’s structures inhibit creativity, and that inhibition, I suggest, will be to the profession’s peril.
I often criticize my students for seemingly striving to reach merely “Wikipedia-level” legal knowledge. In truth, I don’t blame them.
When I ask my students whether they’d prefer to leave law school with a deep understanding of contract law and receive a B on their transcript or receive an A but barely understand more about contract law than they did before entering law school, the honest ones always select the second option. Makes sense.
We’ve built a system that prioritizes knowing the answer over knowing how to get to the answer. The gap between those two skills may see our profession incur debts that will come due in the form of a bar populated by lawyers ill-equipped to handle foreseeable changes to our profession: AI, non-binary social issues and attacks on freedoms, to name only a few. The Law Society of Ontario, law firms and law schools are all to blame.
Transcripts drive hiring. Everyone knows that if you don’t meet a certain GPA, a law firm won’t offer you an interview, let alone a job.
Why? Law firms are populated with lawyers who know better than anyone that grades are hardly a measure of success or true legal knowledge.
Lawyers know that, in many cases, all a student has to do is regurgitate class notes (be they correct or not) to an all-too-willing professor. We’ve all known students who were great exam writers but hardly great legal thinkers, just as we’ve known great legal thinkers who couldn’t figure out the exam writing game. Why do lawyers forget this when they assess a candidate’s potential? Why rely on markers known to hardly reflect a candidate’s true capacity and potential?
Within a two- to three-hour block, law school exams require candidates to “spot” 15 to 20 issues and offer solutions. I’d like to meet a lawyer whose client has ever presented a set of facts containing 20 issues and then asked that lawyer to spot and solve them all within two to three hours.
It’s hard to imagine a less suitable method to test an individual’s capacity to become a great lawyer than the method the Law Society of Ontario uses — multiple-choice questioning.
Multiple-choice questions raise many concerns, but by far the most pernicious effect is that they reinforce an absolutist way of thinking by suggesting only one answer exists to a given problem. Absolutism is anathema to the legal profession. Or at least it should be.
Consider the mindset instilled by a multiple-choice question, the format used on the Law Society of Ontario’s licensing examinations. A basic fact pattern posits an alleged breach of contract for a delivery truck rental. Candidates must choose which option best describes the alleged breach: a breach of condition, a breach of warranty, a breach of good faith or, as a last choice, that contracts can never be breached.
For our purposes, assume that “a breach of condition” is the best answer. Without getting into the distinction between warranties and conditions as contractual terms, it is enough to say that when the correct answer is a breach of condition, the most plausible wrong answer is a breach of warranty.
Now suppose one candidate selects “breach of warranty,” while another selects the option that contracts can never be breached (an absurdly wrong answer). Under this scoring, both candidates are equally wrong and deemed equally unqualified. That is preposterous, and yet it is precisely how the law society assesses its candidates.
Recently, Dr. Neil deGrasse Tyson offered a piercing parable admonishing the state of science in the United States: the scientific world now prioritizes knowing the answer over knowing how to get to the answer, though it is the latter that defines intelligence and drives innovation. The legal profession should heed his warning.
Tyson’s parable is simple. A prospective employer asks two candidates vying for the same job a simple set of questions. The first few questions are typically mundane, biographical types until the decisive question is asked — the question determining which candidate lands the job.
“Did you notice the spire sitting atop our building?” the interviewer asks.
“Yes,” answers the first candidate, who is then asked how tall the spire is.
“It’s 25 feet,” says the candidate. “During my undergraduate studies in architecture, I memorized the heights of all the spires atop our city’s buildings.”
The interviewer is impressed.
The second candidate’s interview goes the same way until the fated question is asked. Then, she answers, “No, but give me five minutes.”
During those five minutes, she walks outside, notices the shadow the sun casts from the spire, and measures the length of that shadow against the length of her own. Knowing her own height, she scales the ratio up. She returns to the interview and declares, “I’m not positive, but I’d guess somewhere between 24 and 26 feet.” The job goes to the first candidate.
Skilled lawyers find original solutions to new and old problems alike. They succeed by knowing how to think, not by memorizing what to think. Yet the candidates who best memorize information are the ones who get jobs, even though AI can memorize information better than any human. What AI can’t yet do is think about that information.
So, who is to blame for this state of affairs? Law firms claim law schools don’t produce “lawyer-ready” graduates. Law schools point out that GPAs and interviews designed to elicit sycophantic responses don’t make candidates care about being lawyer-ready.
Both views are irrelevant in the face of AI. Who will blink first?
Professor Anthony Daimsis is director of the national program, which leads to a dual JD/LL.L degree, at the University of Ottawa Faculty of Law. He is also director of the common law section’s mooting program.