Mexico’s Semanario Judicial de la Federación (SJF) has published two legal precedents — known as tesis in the Mexican federal system — on the use of artificial intelligence in justice. The underlying dispute looks purely technical: in a case concerning a security bond, the executors filed a recurso de queja (a procedural complaint) to challenge the amount, arguing it had been set without objective parameters. But reading the two precedents reveals far more than a dispute over a calculation: it shows how techno-enthusiast rhetoric is being absorbed into legal language, with no filters and no definitions.
What follows is a critical reading, precedent by precedent, argument by argument.
First precedent: four reasons that echo OpenAI’s marketing
The first precedent opens by calling AI an “innovative tool,” without ever defining what it means by “AI.” That is hardly surprising: the case in fact involves ChatGPT and similar models. The court then sets out four reasons that supposedly justify the use of “AI.” Let us take them one by one.
a) “It reduces human error in such calculations.”
I am still waiting to see an error that is not, ultimately, a “human” error. I am also still waiting for serious statistics showing that error rates fall when ChatGPT performs numerical calculations, compared with an expert or with specialized software. The premise is asserted with no empirical backing.
b) “It provides transparency and traceability by explaining step by step how the result is obtained.”
That depends on how it is used. Running a stochastic model with billions of parameters does not, by itself, provide more transparency or more traceability about the result. Mistaking a fluent natural-language output for a traceable explanation of the underlying reasoning is a fundamental technical error.
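To make the contrast concrete: genuine traceability means the logged steps are the computation itself, not a narrative generated after the fact. A minimal sketch in Python, with an entirely illustrative formula, function name, and parameters (nothing here reflects the court’s actual method):

```python
from decimal import Decimal

def bond_with_trace(principal: str, rate: str, months: int):
    """Compute a hypothetical security bond and return (amount, trace).

    Each trace line records an arithmetic operation that was actually
    executed, so the explanation IS the computation. The formula is
    purely illustrative: principal plus simple interest.
    """
    trace = []
    p, r = Decimal(principal), Decimal(rate)
    interest = p * r * months / Decimal(12)
    trace.append(f"interest = {p} * {r} * {months} / 12 = {interest}")
    total = p + interest
    trace.append(f"total = {p} + {interest} = {total}")
    return total, trace

amount, steps = bond_with_trace("100000", "0.11", 6)
```

Every line of the trace corresponds one-to-one to an operation the program performed, and exact decimal arithmetic makes the result reproducible — precisely the guarantee a fluent natural-language output from a sampled model does not carry.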
c) “It generates consistency and standardization in precedents and in setting the amounts of security bonds.”
Here the court conflates the tool with the process. AI does not generate consistency or standardization. What produces that result is the rigor with which the amounts of security bonds are determined. AI, in itself, is a heterogeneous set of tools. There is no guarantee whatsoever of consistency or standardization simply from “using AI.”
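The point can be restated as a verifiable property: consistency comes from fixing the rule, not from the tool that evaluates it. A deterministic rule, however simple, passes the check below by construction; a sampling-based language model offers no such guarantee. The formula is hypothetical, purely for illustration:

```python
def bond_amount(principal: float, rate: float, months: int) -> float:
    # Hypothetical fixed rule (not the court's method): principal
    # plus simple interest over the given number of months.
    return principal + principal * rate * months / 12

# Determinism is mechanically checkable: evaluating the same inputs
# a thousand times must yield exactly one distinct result.
results = {bond_amount(100_000.0, 0.11, 6) for _ in range(1_000)}
assert len(results) == 1
```

Standardization, in other words, is a property of a published, testable procedure — something any court could adopt with or without “AI.”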
d) “It improves procedural efficiency, freeing time for substantive analysis of the case and facilitating the reasoning of judicial decisions.”
Techno-enthusiast rhetoric, asserted on no basis other than the ad nauseam repetition of OpenAI’s marketing.
Second precedent: “ethical and responsible use” as empty signifiers
The second precedent seeks to establish the minimum elements for the “ethical and responsible” use of AI, with a human-rights perspective. But what does ethical and responsible use actually mean?
- “Ethical” depends on the moral theory one adopts — Kantian, utilitarian, virtue ethics, and so on — and these can yield contradictory conclusions.
- “Responsible” means nothing unless one specifies to whom and with what consequences.
These are empty signifiers that contaminate legal language with corporate and soft-law rhetoric. Inscribing such terms in an isolated precedent, without clear technical definitions, amounts to importing the vocabulary of technology marketing into judicial reasoning.
The rhetorical wildcard: proportionality and innocuousness
One further sentence from this second precedent deserves attention:
“Proportionality and innocuousness require that AI tools be used only to the extent necessary and adequate to achieve a legitimate purpose, such as facilitating numerical reasoning without reaching legal reasoning in the interpretation or application of the rules.”
I will say no more.
What comes next
I will soon publish an analysis of the ChatGPT-based method actually used to calculate the bond, with conclusions grounded in evidence — not in intuitions or marketing.
For lawyers who want to engage critically with these tools instead of adopting them blindly, I also recommend our analysis of ChatGPT guides for lawyers and the risks they pose to professional practice. And if your firm or in-house legal team is weighing how to integrate these tools with the methodological rigor they require, the six service lines at legis.digital are designed for exactly that: automation, training, AI governance and strategic consulting with measurable KPIs.
Note: The cited precedents do not come from the SCJN (Mexico’s Supreme Court) but from Collegiate Circuit Courts (TCC), even though they are published on the SJF portal administered by the SCJN.