When someone asks whether it is worth using ChatGPT to review or extract data from legal documents, the answer deserves more nuance than a simple yes or no. Here are my two cents after studying the question seriously.
1. ChatGPT is, first and foremost, an entertainment tool
Let us not forget: ChatGPT is, first and foremost, an entertainment tool, with some uses that can serve professional purposes but remain limited. That may sound provocative in 2024, but it is the most honest framing. ChatGPT is a connectionist, probabilistic tool, which is precisely what allows it to surprise users, whereas the legal profession is built on reasoning, that is, on symbolic and often deterministic knowledge. We should therefore know when to use it and when it is inappropriate, or even dangerous. This first distinction is critical: treating it by default as a legal tool is a mistake from the outset.
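The contrast between symbolic and probabilistic tools can be made concrete with a toy sketch. Everything in it is an illustrative assumption (the rule, the function names, the 90% accuracy figure): a deterministic rule gives the same answer on every run, while anything that samples from a distribution will, given enough runs, contradict itself.

```python
import random

# A symbolic, deterministic rule: same input, same output, every time.
# The five-year limit is an invented example, not real legal advice.
def statute_of_limitations_expired(years_elapsed: int, limit_years: int = 5) -> bool:
    return years_elapsed > limit_years

# A toy stand-in for a probabilistic generator: it samples from a
# distribution, so repeated runs can disagree with one another.
def probabilistic_answer(rng: random.Random) -> bool:
    return rng.random() < 0.9  # "right" 90% of the time, by assumption

# The rule never changes its mind across 1,000 calls.
assert all(statute_of_limitations_expired(7) for _ in range(1000))

# The sampler eventually does: both True and False show up.
rng = random.Random(42)
answers = {probabilistic_answer(rng) for _ in range(1000)}
print(answers)
```

The point is not that probabilistic tools are useless; it is that a profession built on the first kind of knowledge should not casually delegate it to the second.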
2. Extracting critical data with ChatGPT is premature or reckless
Using ChatGPT to extract critical data from legal documents is premature, reckless, or both. OpenAI's chatbot, used on its own, without orchestration frameworks such as LangChain or more robust entity-extraction techniques, is not reliable enough to interpret the complexity of legal documents. These tasks require contextual understanding and precision that go well beyond text generation and pattern recognition. Which brings me to my next point.
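For contrast, here is a minimal sketch of what a rule-based extractor looks like. The two patterns (a date and a euro amount) are illustrative assumptions, nowhere near a production grammar, but they show the property that matters: every hit can be traced back to the exact rule that produced it, which a generative model cannot offer.

```python
import re

# Illustrative rules only: a "day Month year" date and a euro amount.
DATE = re.compile(
    r"\b\d{1,2} (?:January|February|March|April|May|June|July|"
    r"August|September|October|November|December) \d{4}\b"
)
AMOUNT = re.compile(r"€\s?\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

def extract(text: str) -> dict:
    """Return every date and monetary amount matched by the rules.
    Each result is explainable: it exists because a specific pattern
    matched a specific span of characters."""
    return {"dates": DATE.findall(text), "amounts": AMOUNT.findall(text)}

clause = "Signed on 3 March 2023, for a consideration of €150,000."
print(extract(clause))
# {'dates': ['3 March 2023'], 'amounts': ['€150,000']}
```

Rules like these are brittle in their own way, of course, but when they fail they fail visibly, and a professional can audit and fix them; a chatbot's error has no such paper trail.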
3. Professional verification empties the promise of efficiency
Ultimately, the legal professional always has to verify the output, which significantly undermines the tool's appeal, assuming the problem it was brought in to solve was a real problem in the first place. The work still has to pass through the legal filter.
And if that were not the case, what is the real problem we are trying to solve? Freeing lawyers from these tasks? Because I can tell you that when a legal practitioner extracts information from a public document, that is not all they are doing. They are doing something fundamental: they are READING. And while reading, even quickly, they are simultaneously performing other tasks — sometimes entirely unconsciously — and can catch problems they were not looking for. If they are juniors, they are also being trained. I am therefore not convinced the problem is well-framed, nor that this is a clearly beneficial and risk-free use of AI.
4. Questioning companies that are not legal technology providers
It is essential to question whether companies that are NOT legal technology providers have the capacity to carry out this task reliably. And to measure the implications of trusting a tool that can neither guarantee accuracy nor explain its errors.
5. The right question before the tool
The digital transformation of a law firm does not begin with choosing the latest model, but with diagnosing which problems are genuinely worth automating, and under what conditions. I have developed this in more detail in “At some point, a law firm will truly understand how to leverage technology — and will change the game”: the real competitive differentiator will not be the firm that adopts ChatGPT, but the one that knows how to distinguish when it is the right tool and when it is not.