
If We Achieve Legal AI Perfection – Everything Changes – Artificial Lawyer


Lawyers want genAI tools to give good results. But, what happens when they become almost perfect, i.e. 99.9% accurate, all of the time? That would mark the end of the legal world as we know it.

So You Want Perfection?

At present, as many studies have shown, genAI results across a range of tasks are not perfect. GenAI can also – without proper RAG support – include hallucinations, and even with RAG techniques can still give partial, or potentially misleading, answers that are ‘correct’ in factual terms, but miss key information.

Every legal tech company worth its salt is trying to address this. Meanwhile the foundation model builders from OpenAI to Anthropic are also trying to do better, as well as adding in reasoning and support for agentic flows that could be used to improve on ‘single shot’ answers. I.e. if you can spend more time on an answer, and/or go back to an LLM multiple times, you might get a more accurate output that meets the user’s specific needs.
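The multi-pass idea mentioned above – going back to the model to improve on a 'single shot' answer – can be sketched in a few lines. This is a minimal illustration only, not any vendor's actual method: `call_llm` is a stub standing in for a real LLM API call, and the "draft, critique, revise" loop is one simple form such an agentic flow could take.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned replies for the demo."""
    if prompt.startswith("CRITIQUE:"):
        return "Missing the governing-law clause."
    if prompt.startswith("REVISE:"):
        return "Revised draft (now covers governing law)."
    return "First draft of the clause summary."

def multi_pass_answer(question: str, passes: int = 2) -> str:
    answer = call_llm(question)  # the 'single shot' draft
    for _ in range(passes - 1):
        # Ask the model to check its own work, then revise using that critique.
        critique = call_llm("CRITIQUE: " + answer)
        answer = call_llm(f"REVISE: {answer}\nFix: {critique}")
    return answer

print(multi_pass_answer("Summarise the change-of-control clause."))
```

The trade-off is exactly the one described in the article: each extra pass costs time and compute, in exchange for a chance at a more accurate output.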

In short, the legal world wants more accuracy, and the providers are trying to give it. Nice, right? But….what happens next?

For now, lawyers use genAI tools knowing they need to be 'in the loop', and that any answer – whether a redline, a new version of a clause based on a playbook, a summary of a deposition, or a piece of case law research – has to be taken with a degree of scepticism. Not total scepticism. But, still, lawyers not having absolute faith in what appears 'magically' on their screen is the standard approach.

But, as David Wang, now head of innovation at Cooley, and others, have said: 'You can't have efficiency without accuracy.' I.e. if AI saves you 30 minutes on a contract review, but you then have to spend 30 minutes checking the result because you cannot feel confident of it, you are back to square one: no net improvement.

Meanwhile, if AI gets a case citation – which has to be objectively correct, with very little wiggle room – totally wrong, then you are even worse off, as you have to find another way to get the result. And worse still, what happens if you fail to spot that the answer is a bit 'off'?

Plus, looking at so many other tech tools, what if your car only drove where you steered it 80% of the time, emails only arrived with the right person ‘quite often’, and items you ordered on Amazon only ‘tended to be what you actually paid for’? Those technologies would be abandoned until they could be fixed.

But, lawyers – ironically for a profession famed for its attention to detail – accept a world where they work with imperfect genAI results. Why? Because the legal world is in fact highly attuned to error spotting and output improvement.

A law firm is not just an economic pyramid of legal labour, it's a pyramid of quality control. I.e. junior associate work product is checked as it advances up the structure: to more senior lawyers, then to the partners, and finally to the clients – who also then 'correct' the work. It's a long chain of quality control.

That is why imperfect genAI outputs are tolerated: perhaps surprisingly, this reality meshes well with how things have always been done. Imperfection is part of law firm life.

And thus – double irony here – if genAI finally becomes truly accurate one day, i.e. almost totally reliable and its work never needs to be checked because it’s ‘perfect’, then the legal world as we know it will radically change.

What Happens After We Achieve ‘Legal AI Perfection’?

Let’s say in 10 years, after many advances in LLM deployment – possibly not via more and more compute, but via better reasoning and agentic actions that provide much better outputs – we get to 99.9% accuracy for legal use cases.

Whether it’s drafting, reviewing, or research, legal AI is now – for want of a better word – perfect. Maybe that happens in a decade, maybe longer, but let’s work with the idea that it will happen at some point and within your career.

What happens then? Here are some thoughts.

A junior associate, who already does work that is supported by genAI, will truly see some tasks nearly fully automated. Why? Because those specific tasks can be achieved so perfectly that their human input is in effect ‘a waste of time’.

Lawyers in the loop will still be needed for those routine ‘fully automatable tasks’, but their input will be the ‘lightest of touches’. Plus, the clients and the partners at the law firm will still need someone to blame if things go wrong. So, human lawyers are needed still as ‘blame sponges’ even for basic things.

Even at that automatable level someone still has to press the keys to make things work, to carry out the orders from above, to achieve what the clients want. But, now a doc review project's genAI outputs can be trusted. Checking them is just a waste of time. Drafting – as long as the firm has clear precedents – is also made into something that simply needs a look-over by a senior lawyer. Legal research – as long as the questions sent to the junior lawyers are crystal clear – also doesn't need much oversight… at least in a world of perfect outputs.

The junior associate’s job now is simply to trigger the system and reap the results, like a sales assistant scanning tins of baked beans at the checkout.

More Complex Work As Saviour?

We often say that it ‘will be fine because junior lawyers will just do more complex work’. How many times have we heard this…? But will they?

Will a law firm that takes on 100 matters per month, and whose time needed per matter drops by 50% because nearly all the routine 'associate labour' has been automated by super-reliable genAI tools, really find loads of new work for those associates to do? Really? Will that actually happen?

An associate can be seen as a ‘role’ made up of many tasks. If 80% of those tasks are automated then you really don’t need so many associates. Plus, as noted, do those associates just magically take on more complex work? Some of the brightest ones do, for sure, but all of them?

And how far will the perfection go? In 10 years it doesn’t seem impossible for truly reliable negotiation tools to handle M&A deals, i.e. firm A and firm B both have genAI tools that ‘game out’ the wording of each and every clause, perhaps in a few seconds. We’d need senior lawyers there for sure – not just to steer things, and add in new aspects, but to take the blame.

And blame may be a lawyer’s saving grace. I.e. you can’t take a piece of software to court (if the law firm has brought it formally into its tech stack and vouches for it), but you can blame a lawyer who used it, as the final work product is the responsibility of the firm. Yep, blame, i.e. the need for someone to ‘eat the risk’, in this case an external law firm, is never going away.

Now, AL doesn't buy into 'the end of lawyers' thesis, in part because of the blame point, and because we'll need lawyers to be managers and also to handle all the human interpersonal elements of the job – which are many. Plus, who will own the law firms…? Er… lawyers. So, from the outside much will appear to be the same.

But….that said, if perfection does arrive across a range of legal skills then the old adage about ‘AI is there to support lawyers, not replace them’ will be consigned to the trash heap of history.

And perhaps the idea that AI must always only support, not replace, is just a temporary thought process designed not to scare off potential buyers? Plus, for now it’s also a statement of fact. Any law firm that believed it was getting 99.9% accuracy (when it’s more like 80% on many tasks) on genAI outputs and not checking them before sending to clients would be out of business very quickly.

Conclusion

We may never get to 99.9% accuracy across all legal AI skills. But, equally, it's not impossible that this happens within a decade, give or take. When it does, the traditional legal business model is gone. High leverage (i.e. having hundreds of very junior lawyers) is no longer a source of profit, it's a terrible loss-maker.

But, it will all be dependent on output quality. Will we get there?

It's not guaranteed. Maybe we do 'top out' on accuracy in a couple of years, and not even agentic systems and better reasoning get us to 'near perfection'. Maybe, even when quantum computing and better chips one day allow AI systems to perform exponentially more calculations, we still don't get there.

But, equally, maybe one day we do. You need an M&A due diligence project done. Tap. There it is. 99.9% perfect. You need to redraft a 200-page contract. Tap. Done. Perfect. You need to research a case and draft something for court. Tap. Done. Perfect. That’s a very different world to the one we live in now.

So, to conclude. Ironically, genAI’s inaccuracy fits very nicely into the legal world, which is designed with errors in mind. That is why when people say ‘this is a game changer’ when a new AI tool is brought in, this site doesn’t believe it. Why? Because the game has not changed.

But, if accuracy goes up to near perfect, we don’t just gain huge efficiency increases, we truly change the legal business model forever. In short, if you really want to change the legal world, then reaching the highest accuracy level possible should be the goal.

Richard Tromans, Founder, Artificial Lawyer, April 2025

Legal Innovators California Conference, San Francisco, June 11 + 12

If you’re interested in the cutting edge of legal AI and innovation, then come along to Legal Innovators California, in San Francisco, June 11 and 12, where speakers from the leading law firms, inhouse teams, and tech companies will be sharing their insights and experiences as to what is really happening and where we are all heading.

We already have an incredible roster of companies to hear from. This includes: Legora, Harvey, StructureFlow, Ivo, Flatiron Law Group, PointOne, Centari, eBrevia, Legatics, Knowable, Draftwise, newcode.AI, Riskaway, SimpleClosure and more.

See you all there!

More information and tickets here.

