This story appears in the June issue of Utah Business.
The first major news story about the intersection of artificial intelligence and the practice of law was not the one proponents of that emerging technology had hoped for.
Last year, a brief filed by plaintiff’s counsel in Mata v. Avianca — an otherwise unremarkable civil suit — confidently cited multiple instances of case law that, when the defense tried to look them up, could not be found. The judge in the case ordered that the full text of the cited cases be presented to the court. Counsel promptly complied, attaching multiple pages-long, often complex legal opinions in the expected format, filled with the expected language and references to additional case law.
Normally, that’s where the story would end. Instead, it’s where this story just gets started. The counsel’s source was neither LexisNexis nor Westlaw, but ChatGPT — and every bit of it was fiction. The judge in the case used the opportunity to make an example of the attorneys, who walked away both sanctioned and scathingly rebuked.
“We learned that ChatGPT has the ability to hallucinate case law,” says Sarah Vaughn, a shareholder at Fabian VanCott specializing in civil litigation. “More than just bad practice of law, this is an ethical violation. When you file anything with a court, you are certifying that it has sufficient factual and legal background, which was not the case.”
When the story of AI is written, Mata v. Avianca might be to its development what Three Mile Island was to the development of nuclear power: a harsh wake-up call that revealed profoundly worrying chinks in the armor of a world-changing new technology without being nearly enough to stop it. Indeed, within the notoriously risk-averse legal profession, the accelerating pace of AI adoption is instructive.
The adoption curve
Lawyers have traditionally been among the slowest to adopt and integrate new technologies, according to Cain Elliott, head legal futurist at Filevine.
“It takes three to five years for new legal tech to achieve acceptance, which is considered very slow. Only medicine is slower,” Elliott says, adding that in the technology adoption cycle, acceptance remains several steps removed from adoption and eventual diffusion. “I tell people who want to innovate in the legal space that legal tech demands a hearty constitution. They’d better be ready to play the long game because anybody who thinks they’ll put a widget out and conquer the world in two years is going to be disappointed.”
Elliott says that such intense resistance to change has prevented many new legal technologies from surviving and discouraged the ongoing development of those that do achieve adoption.
“There is a lot of what I call ‘zombie’ legal tech, where a player will have some success, and they decide the safe route is just to carry on and maintain,” he says.
But the tenuous relationship between the legal profession and technology underwent a radical transformation with the recent arrival of generative AI, according to Josh Baxter, CEO at NetDocuments.
“People have been talking about AI and machine learning in the legal industry for about a decade, but the economics didn’t work because you had to look at thousands of examples to train on just one very simple use case. This made the cost of scaling and adoption very high and the probability of success incredibly low,” Baxter says. “LLMs [large language models] really became accessible and reached a level of performance and quality 18 months ago. For the first time, the legal industry is in the lead in adopting new technology, and that’s creating disruption like never before.”
Mentor/machine
The questions most consistently asked about generative AI concern which jobs each new development is likely to render redundant. In the legal tech space, a typical response is that these solutions are akin to having a smart junior attorney at one’s side — meaning AI does the tedious heavy lifting normally carried out by an associate, the product of which would never go out the door without a more senior attorney’s review and sign-off.
In other words, the junior attorneys’ jobs are at risk. What’s less clear is how one becomes the senior attorney without first being the junior.
Jonathan Bench, an attorney at Harris Sliwoski specializing in transactional law, answers that question by turning the premise on its head.
“There’s no prohibition against an attorney fresh out of law school hanging a shingle and starting a practice. What’s traditionally kept that from happening is the leap of faith any potential client would have to take in hiring them. That may change with AI playing the role of senior attorney, providing that training by reviewing and signing off on the junior’s work,” Bench says, noting that such a development would require fundamentally changing law school curricula to include practical training in AI and how its application would differ depending on whether it’s used in transactions or litigation.
The risks of generative AI in the law go further. Before release, these models undergo foundational training on enormous quantities of content, which teaches them to predict which words, in what order, are most likely to form an effective response to a future user’s query. After release, many models continue to evolve by assimilating the substance of user submissions, allowing that content to inform and possibly be incorporated into subsequent responses.
“If you’re giving the AI information that should be maintained confidential, you’re exposing that information to the world and possibly inadvertently waiving attorney-client privilege,” Vaughn explains.
Baxter says the message sent by the judge in Mata v. Avianca was received loudly and clearly at NetDocuments.
“There’s been an impulse to go and use these technologies, but we believe that first, the right security and governance policies need to be in place,” Baxter says, noting that the LLM partnership behind NetDocuments’ AI offerings stipulates that customer query content be deleted immediately rather than after the otherwise standard 30 days.
AI-powered efficiency
While he concedes that a lot needs to happen before AI assumes a true mentoring role, Bench sees the near future as one where legal practitioners can leverage these technologies to break free of the billable hour in favor of charging for the value provided. This, he says, will place quality legal services within reach of those who have long been denied them.
“There are areas where the law has a disproportionate impact on people of fewer means, and the cost of hiring counsel is keeping them in bad situations,” Bench says. “In areas like immigration, family law, lower-level criminal cases, sole proprietors and contract workers, AI isn’t going to displace lawyers in these fields, but it will allow them to accomplish more for more people and do it more efficiently.”
Vaughn believes the unpredictable nature of litigation makes it unlikely that AI will enable firms practicing in that area to replace the billable hour with a flat fee for service. But she does foresee billing fewer hours per client and making up for it by taking on more clients. She says the low-hanging fruit in litigation is the dramatic gain in efficiency and accuracy of AI-powered document review, analysis and retrieval — tasks that had come to consume inordinate amounts of resources due to the surging volume of discoverable records generated by everyday technologies like email and SMS.
“AI tools mean no more downtime spent looking for that one email you remember seeing two years ago,” Vaughn says. “You can describe the document you’re looking for, and the system finds it. It can even suggest others similar to it.”
Vaughn is referring to one of Premier Legal Technologies’ offerings. Martin Eyre, the company’s general manager, says AI is tackling the document overload problem from two directions.
“This flood of data needs a first-level review just to establish what’s relevant, and this process takes hours and hours when done manually,” Eyre says. “Letting AI take that first pass, the document count is cut significantly. At the same time, the technology can identify data you may not have even known you had.”
What’s next?
With AI having catapulted the traditionally risk-averse legal industry into the unfamiliar role of early and eager adopter, what other technologies might be waiting to be embraced? Answering that question is part of Elliott’s job description as Filevine’s head legal futurist. Specifically, he sees two very different approaches to legal tech in development, which can be defined by their approach to screens — be it maximalist or minimalist.
The maximalist camp wants screens everywhere.
“Imagine everybody in a law firm with an Apple Vision Pro-style headset on, using that technology to mediate their interactions with the devices, systems and tools they’re using. That’s one direction we could see,” Elliott says.
He adds, perhaps mercifully, that this is not the direction he is betting on. Instead, he’s a screen minimalist.
“I don’t believe that’s how attorneys want to work. I think they want fewer screens and instead want to incorporate voice in a more meaningful way,” Elliott says. “Not like the first round of voice-based assistants like Alexa and Siri, which are good for reading you the weather report but are not agents sophisticated enough to be part of your workday. I see that changing with AI-assisted dictation that fits very well with how attorneys want to review documents, which is by making verbal notes that AI transcribes and flows together with the references and citations it’s finding in the document.”
Elliott points to the Rabbit R1 and Humane AI Pin as examples of voice-interfacing gadgets hinting at a growing desire for more intuitive access to AI without adding more screens.
Reducing burnout
In the near term, Elliott sees AI increasing efficiency beyond the printed word.
“This is going to be the year of multi-modal AI, meaning moving past a pure text focus and onto video from depositions, audio from client calls and even images,” he says. “We’ll also see AI-assisted optical character recognition do a better job understanding documents that are in nonstandard formats, which have been very challenging to interpret so far.”
Ironically, the real opportunity for AI in legal tech appears to be one that makes the profession more human.
“Today, in order for an associate to reach their quota, they have to bill 2,600 hours per year, which means they’re actually putting in around 3,000. That leads to a burnout rate that is well above most any other industry,” Baxter says. “This means people are going to law school, starting to practice and leaving the industry. It’s not sustainable. These technologies will permit firms to bring in a comparable level of revenue in fewer hours, and thus avoid the mental health issues that result.”
Elliott takes Baxter’s thesis one step further, extending the cost of the status quo to society itself.
“How do we use technology to make the legal ecosystem a healthier one overall?” he asks. “High rates of churn and burnout are not good for anybody; they produce too much depression and substance abuse. We have a real opportunity to use technology to treat each other better so we get and keep the best talent and make legal a supercharged component of our economy — not just a cost tacked onto it.”