AI has the potential to vastly change the way that humans interact, not only with the digital world, but also with each other, through their work and through other socioeconomic institutions – for better or for worse. Against this backdrop, some law firm partners see artificial intelligence as a double-edged sword: the rise of automation could lead to more mistakes by junior lawyers in the future.
Artificial intelligence (AI) is a rapidly advancing technology that may soon have significant impacts on our everyday lives.
AI traditionally refers to the artificial creation of human-like intelligence that can learn, reason, plan, perceive, or process natural language. As Hannah Roberts of Law.com International notes, combing through endless documents to find a small clause relevant to a transaction has been a rite of passage for lawyers since time immemorial.
In an industry where one small mistake or oversight in a document can be the difference between the success and failure of a multi-million-dollar transaction, lawyers have historically been suspicious of glitzy new technologies.
It is into this framework that artificial intelligence providers have long been trying to make their mark, offering promises of greater efficiency when it comes to drafting clauses and automating time-consuming—and dull—legal processes.
But despite a steady increase in the number of top law firms that use AI to redirect junior lawyers’ attention to less menial tasks, some in the industry remain skeptical of the technology for one key reason—they say it could lead to an increase in mistakes made by those junior lawyers further down the line.
This is because they might not have the same grounding in the law that their predecessors established through years of careful attention to detail.
An associate at a top transatlantic firm explains that AI can be very helpful in defining terms and referencing clauses. “You would normally give that work to a trainee and it would take a few hours to do it, and they wouldn’t do it particularly well. Now it can be done in a matter of minutes.”
“I think it’s going to keep being the case where software is going to be developed to get rid of mundane tasks, but there are always going to be slightly different aspects to individual deals, so what I typically tend to say to trainees is that they can use AI to check things, but they need to do the manual one too.”
The perk for trainees, he says, is that instead of proofreading perhaps 30 pages of documents, they can now hope to draft a couple of clauses relating to a transaction. Doing so, he says, means they still understand the basis of the law but take it one step further by applying their knowledge – something they might not have done so soon after joining a firm had AI not been available.
“There are pros and cons, of course, but I think the pros outweigh the cons,” adds the associate.
“If you look at the way people used to draft, where one person was doing it, they would have a call, agree what to do, then write it up and mark it… All of that has now gone, but all that means is that people may expect deals to be done quicker, as opposed to creating questions around whether lawyers will lose skills because of AI.”
Jason Brennan, acting CEO of the legal AI company Luminance, says clients often tell him that AI is “actually necessary in order to cope with exponential data and to retain young talent.”
“When some partners began their careers, the number of contracts to review in a transaction was reasonable, but the explosion of corporate documentation has meant that legal teams are now faced with ever-increasing volumes of documents to review in impossible timeframes.
“Expecting junior lawyers to manually read this number of near identical contracts is unrealistic and can lead to ineffective sampling and subpar advice for clients. By reading millions of documents instantly and highlighting areas of risk or opportunity, AI is able to expedite this process, leaving the all-important decisions to the lawyers.”
Many factors contribute to the challenges faced by stakeholders with the development of AI, including:
• Decision-making: transparency and “interpretability”
• Data quality and bias
• Safety and security
• Social and economic impact
The positive and negative impacts of AI and automation on the labor market and the geographical division of labor will not be without their own challenges. For instance, if AI becomes a concentrated industry among a small number of players or within a certain geography, it could lead to greater inequality within and between societies. Inequality may also lead to technological distrust, particularly of AI technologies and of the Internet, which may be blamed for this shift.
A financial services partner adds that teaching junior lawyers the practicalities of new technologies, and the risks of relying on them too heavily, is ultimately the responsibility of their supervisors.
The key thing, he says, is that lawyers across the board not only recognize the limitations of AI, but also impress upon junior lawyers the consequences of a mistake. That has been a feature of the legal industry for long enough, and it is unlikely to change any time soon.