In my Digitalization of Law course at the Universidad Panamericana School of Law, I always begin with the same operational definition: an algorithm is an ordered sequence of operations that produces a predetermined kind of result. Not the exact result, but the type of result.
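To make that definition concrete, here is a minimal sketch in Python (a toy illustration of my own; any sorting routine would serve). The kind of result is predetermined — a list sorted in ascending order — while the exact result depends entirely on the input:

```python
def sort_grades(grades: list[float]) -> list[float]:
    """An ordered sequence of operations (insertion sort).

    The *kind* of result is fixed in advance: a sorted list.
    The *exact* result depends on whatever input arrives.
    """
    result: list[float] = []
    for grade in grades:
        # Find the position where this grade keeps the list ordered.
        i = 0
        while i < len(result) and result[i] <= grade:
            i += 1
        result.insert(i, grade)
    return result

# Different inputs, same predetermined kind of result: a sorted list.
print(sort_grades([8.5, 6.0, 9.2]))  # [6.0, 8.5, 9.2]
print(sort_grades([10.0, 7.3]))      # [7.3, 10.0]
```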
I then ask:
What is the objective of ChatGPT?
The answers are always the same: “produce high-quality text,” “answer questions,” sometimes — with an additional layer of anthropomorphism — “tell the truth.” All wrong. The actual objective is, plainly, to predict the next token. That is all.
There is no “understanding,” no “answering,” no “reasoning”: only a probability calculation over sequences of tokens. Everything else — coherence, usefulness, the appearance of intelligence — is what we project onto that mechanism.
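To see how little is going on mechanically, consider this deliberately naive sketch (a toy model of my own, not ChatGPT’s architecture: I hand-write a three-entry probability table, whereas a real model computes that distribution with a neural network over billions of parameters — but the objective is identical):

```python
import random

# Toy "language model": for each context token, a probability
# distribution over candidate next tokens. A real LLM computes
# this distribution with a trained network; the objective is the same.
NEXT_TOKEN_PROBS = {
    "the":   {"court": 0.5, "contract": 0.3, "party": 0.2},
    "court": {"ruled": 0.6, "held": 0.4},
    "ruled": {"that": 0.9, ".": 0.1},
}

def predict_next(context: str) -> str:
    """Sample the next token from the distribution for this context.

    No understanding, no answering, no reasoning: just a weighted
    draw over candidate tokens.
    """
    dist = NEXT_TOKEN_PROBS[context]
    return random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]

# Generate text by repeatedly predicting the next token.
token = "the"
output = [token]
while token in NEXT_TOKEN_PROBS:
    token = predict_next(token)
    output.append(token)
print(" ".join(output))  # e.g. "the court ruled that"
```

Whatever the resulting string appears to “mean” is supplied by the reader; the program only executed weighted draws.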
What my students repeat, law firms repeat as well
The students are not at fault. They are reproducing the dominant marketing narrative around AI. The problem begins immediately afterwards: the same narrative circulates — in suits and ties — inside law firms and in-house legal departments. At that point, we are no longer dealing with students. We are dealing with the people who sign contracts with AI vendors, approve tools for their teams, and submit positions in regulatory consultations.
The standard response to “lawyers and AI” is to schedule training sessions. That response treats the issue as a skills gap. The actual issue is different: lawyers are signing, regulating, and litigating systems they cannot describe. That is no longer something a course resolves. It requires a governance decision — about who, within the organization, is allowed to operate without understanding what they sign.
What separates a team that negotiates from a team that only comments
The strategic gap reduces to a single question: can the legal function describe, in its own words, the technical architecture, the data flows, and the model objectives of the systems it signs, audits, or uses?
When the answer is yes, three concrete things follow:
- Contracts allocate risk on the basis of how the system actually operates, rather than metaphors inherited from marketing material.
- The team contributes to drafting the regulatory rules, instead of inheriting rules drafted by other actors and limiting itself to commenting on them.
- Product and process opportunities emerge that remain invisible to anyone treating these tools as black boxes.
When the answer is no, the team is structurally reduced to the role of commentator. The rules of the game are written elsewhere — and then applied to the legal perimeter, almost always late and almost always at an asymmetric cost to the organization.
Demystification as a governance decision
Demystifying AI inside a legal team is therefore not a training budget line. It is a question about who, in the organization, is allowed to sign, audit, regulate, or professionally use systems they do not understand. Once the question is framed in those terms, the answer ceases to be obvious, and resource allocation moves out of the “internal training” drawer and onto the agenda of the executive committee or the managing partner.
That is the precise threshold at which a serious digital transformation of a law firm or legal department stops resembling a course and starts resembling an architectural decision. Departments that cross that threshold do not end up “mastering” AI; they recover the position of authority that the asymmetry was eroding, day after day, with every new tool signed without genuine understanding.
Before regulating, one must understand. Before understanding, the organization must decide, at its top level, that understanding is a priority.