Moving faster: the emergence and implications of AI in the legal industry

18 May 2023 · Editorial team

AI is making waves across multiple industries, and the legal field is no exception. The emergence of advanced AI technology that can work magic with language is creating opportunities for speeding up routine tasks and reclaiming time for higher-level work, like building client relationships.

As Safelink’s CTO, Karl Anderson, puts it: “Anyone, in any industry, who primarily works with words and language will have the option to embrace better tools that supplement or supercharge their work, and lawyers, in particular, have a lot to gain.”

Becoming more productive through AI sounds appealing, but what will it look like in practice? And how do legal professionals themselves feel about this technological shift? Below, we offer some insights into the matter.


AI, GPT-powered AI, neural networks and LLMs

Before diving into AI and legal technology, let’s first clarify what we’re dealing with. We hear about AI, GPT-powered AI, neural networks and LLMs, but what are the differences, and why do they matter? Karl talks us through it.

“A remarkable set of abilities has emerged from ‘neural networks,’” Karl says. “These abilities are remarkable because they’re built from not much more than addition, subtraction, and exponentiation. Inspired by the interconnected neurons in the human brain, AI researchers combined these mathematical elements in increasingly elaborate ways and ever greater numbers, incrementally unlocking new abilities and surprising even themselves with the results.”

He continues: “State-of-the-art ‘large language models’ (LLMs) like OpenAI's GPT include blocks of mathematical operations that enable them to focus their ‘attention’ on parts of a given text, and other blocks to transform information in complicated ways. During a training phase, without any human guidance, the networks learn what to focus on and what transformations to apply, with the internal goal of predicting how a passage might be completed. In doing so, they unavoidably learn patterns: English grammar, story structures, clichés, writing styles, and now, common patterns of reasoning and logical deduction.”
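
To make that a little more concrete, here is a deliberately toy sketch of the ‘attention’ idea Karl describes. Everything in it (the example words, the random vectors, the single attention step) is illustrative only; real models like GPT stack enormous numbers of these operations, but the ingredients really are just multiplication, addition and exponentiation.

```python
# A toy, single step of 'attention': simple arithmetic deciding how much weight
# the last word gives to each word in the passage. Illustrative only.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())    # exponentiation makes every score positive
    return e / e.sum()                   # dividing turns the scores into weights

words = ["the", "witness", "signed", "the", "agreement"]
vectors = np.random.rand(len(words), 4)  # pretend each word is already a small vector

query = vectors[-1]                      # the word currently being processed
scores = vectors @ query                 # dot products: multiplication and addition
weights = softmax(scores)                # higher weight = more 'attention'

for word, weight in zip(words, weights):
    print(f"{word:>10}: {weight:.2f}")
```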

On top of this, we can apply human guidance to LLMs through a process called reinforcement learning from human feedback. They’re coaxed into producing the results we want to see, such as palatable answers to questions, accurate classifications or useful distillations of some input.

As a result, LLMs now ‘understand’ language to a sufficient degree that they can translate, summarise, categorise, write, and (with varying degrees of success) respond to instructions, explain themselves, and reason, all by applying patterns learned during training. Karl adds: “LLMs do have limitations, but when trained correctly and employed selectively, they let us create better tools that help us find and digest information. As a result, anyone who works with words and language will have tools that supercharge their work, and law falls squarely into this category.”
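
As a simple illustration of the kind of tool Karl has in mind, the sketch below asks a GPT model to summarise a document using OpenAI’s public Python library (as it looked at the time of writing). The prompt, model choice and placeholder document are ours for illustration, not a description of any particular product, and, as discussed later in this article, anything sent this way does leave your own systems.

```python
# A minimal sketch of LLM-powered summarisation using OpenAI's Python library
# (pre-1.0 interface, current at the time of writing). Illustrative only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; note that the text below is sent to OpenAI

document_text = """(imagine several pages of a shareholder agreement here)"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a legal assistant. Summarise documents in three concise bullet points."},
        {"role": "user", "content": document_text},
    ],
)

print(response["choices"][0]["message"]["content"])
```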


Promising LLM applications

There are quite a few AI applications that can help lawyers, paralegals and associates in their daily work. But how, exactly? Looking at the capabilities of GPT-powered AI today, there are four immediate applications:


  1. Summarisation, where advanced language models are proving capable of delivering "abstractive" summaries (written in fresh words) rather than "extractive" ones (stitched together from excerpts), concisely conveying the most important parts of the information.
  2. Conceptual search, which allows for a more comprehensive retrieval of information by finding results whose meanings closely match a given query, even when the wording differs (see the sketch after this list).
  3. Document construction, where the "generative" capabilities of LLMs can be used to create documents to order, with far more power than the template engines of the past. Through limited amounts of "fine-tuning", standard constructions can be interwoven with particulars, the firm's knowledge base and even a preferred tone of voice.
  4. Legal assistants that offer chat-like interfaces to drive other actions, including interrogating matter information, knowledge bases and legal resources, running searches, and constructing documents.
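
To illustrate the second item above, here is a minimal sketch of conceptual search: documents and queries are turned into vectors of numbers (‘embeddings’) and ranked by how close their meanings are, rather than by shared keywords. The clause texts, query and open-source embedding model named below are ours for illustration; a production system would be considerably more involved.

```python
# A minimal sketch of conceptual (semantic) search with open-source embeddings.
# The model, clauses and query are illustrative only.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

clauses = [
    "The employee shall keep all client information strictly confidential.",
    "Either party may terminate this agreement with 30 days' written notice.",
    "The contractor is liable for damages arising from gross negligence.",
]
query = "non-disclosure obligations"   # shares no keywords with the matching clause

clause_vecs = model.encode(clauses)    # one vector per clause
query_vec = model.encode(query)        # one vector for the query

# Cosine similarity: higher means the meanings are closer.
scores = clause_vecs @ query_vec / (
    np.linalg.norm(clause_vecs, axis=1) * np.linalg.norm(query_vec)
)
best = int(np.argmax(scores))
print(f"Best match: {clauses[best]!r} (similarity {scores[best]:.2f})")
```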


Karl says: “One of the powerful applications of LLMs is to help people to distil information and find answers faster. This will be particularly useful in litigation, where lawyers can use AI-enabled tools to comb through large amounts of data, such as emails and other electronic documents, to identify important information quickly. In turn, this helps speed up analysis, support early case assessment and reduce the time spent on manual review.”

In addition to litigation, LLMs will open the door to far more powerful corporate law contract review tools. Karl says: “Instead of manually reviewing each contract, corporate lawyers can use AI to quickly identify potential issues, such as non-standard clauses or missing provisions in shareholder agreements. This saves significant time and resources while reducing the risk of errors.” 

“I’ve spoken with many lawyers over the years, and they tell me that legal research can be time-consuming and arduous,” he says. “Applying AI-powered tools that make finding facts, statutes, and precedents faster will be compelling in legal research. Even tooling that makes it less likely that you miss an idea has an obvious benefit.”


How do lawyers feel about the rise of LLMs?

Many of the benefits of this advancing AI are yet to be seen. But how do lawyers feel about the much narrower application of LLMs that they do have access to, namely ChatGPT? That, we do know, thanks to the Thomson Reuters Institute.

In April 2023, they surveyed 443 mid-sized and large law firms in the U.S., U.K. and Canada. They found that 91% of respondents were aware of ChatGPT and generative AI, with 82% noting that the technology can be applied to legal work. Then again, only 52% of respondents said ChatGPT and generative AI should be used in legal work. The answers differed by role and seniority level. Those in leadership roles, in particular, said they were concerned about security risks: while 80% of partners or managing partners saw risks in using the technology at work, only 56% of lawyers and 44% of associates said the same.

It also turned out that, even though awareness of GPT-powered AI applications is high, the technology is still rarely used in law firms: the Thomson Reuters Institute found that only 3% of all respondents currently use it.


Will security concerns ruin the party?

Judging by the survey results, part of the hesitance in the legal sector around adopting GPT is grounded in security concerns. This is understandable, as there’s a lot of mystery surrounding how AI models handle and store the data they’re given.

Anyone using GPT-powered tools, of which there’s an ever-increasing number, should be aware that the information they enter is sent to OpenAI's servers in the United States, often via intermediaries, and that it could be retained along the way. This raises questions. Does the data include personally identifiable information (PII)? Does it include commercially sensitive information? Has your client agreed to their information being processed in this way? Do all of the providers in the chain offer terms and technical security measures that protect the information sufficiently?

Karl says: “One solution to the security challenges is to work with an experienced technology provider who can help apply LLM technologies while still respecting regulatory requirements and confidentiality obligations. We’ve been working with lawyers for over a decade and understand the concerns, so now we’re well-placed to find a path to unlocking these new technologies while over-delivering on the security standards that we all expect.”

He adds: “Right now, we’re in a Wild West phase, where many people have access to all sorts of new tools in a very raw and unfiltered way. It enables people to paste in any kind of information and interact with systems like ChatGPT in ways that could reveal confidential information. The immediate response to this should be to offer guidelines and train staff on the safe use of these tools, if they’re allowed at all. As applications mature, what we’ll end up doing is refining the interfaces so that people have narrower ways to interact with LLMs. This makes it less likely that confidentiality breaches will occur and more likely that the tools will be used correctly and that the results will be of high quality.”


Try out GPT-powered AI on the Safelink platform

Having read about neural networks, LLMs and their benefits for the legal sector, you may want to give AI a try. The good news is that you can now experience it with Safelink. We’ve enabled GPT-powered AI throughout our platform, subject to an opt-in, including within our Expero virtual data rooms and Lexiti eReview tools. The first iteration of our AI integration automatically provides a brief summary of every document you upload, giving you an 'at a glance' overview of the document content. No more scrolling, no more skimming.


We have many plans for LLMs and are ready to put this first stage into your hands. Want to join us on the journey as we harness this new kind of fire? Request access to our beta version here.
