Knowing the legal definition of AI is the starting point. But to use it in practice — to read a system, challenge a decision, or advise a client — you need one more layer of vocabulary. Three terms in particular appear in every AI case, in every jurisdiction, at every stage of litigation: the algorithm, machine learning, and the black box.
Think of it this way. Early in my career I represented a plaintiff in a medical malpractice case against a physician. Before I could argue anything in court, I had to learn enough medicine to understand what had gone wrong and why. Without that, knowing the statute was useless. AI law works the same way. You cannot challenge what you cannot read.
An algorithm, stripped of its technical mystique, is simply a set of instructions. If traditional law is built on “if-then” logic — if a person steals, then a penalty follows — an algorithm is the digital version of that same structure. It encodes a policy decision made by a human designer and executes it at scale. For the litigator, this matters because the algorithm represents the intent of the designer. It is the private system’s statute. Challenging it means asking not whether the machine made an error, but whether the rules it was given were fair and lawful in the first place.
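To make that concrete, consider a deliberately simplified sketch. The rule below is entirely hypothetical — the thresholds, the function name, everything — but it shows the key point: each number and condition is a policy choice made by a human designer, frozen into code.

```python
# Hypothetical, deliberately simplified eligibility rule.
# Every threshold below is a policy decision made by a person,
# not a fact discovered by the machine.
def assess_loan(income: float, missed_payments: int) -> str:
    if income >= 40_000 and missed_payments <= 1:
        return "approve"
    return "deny"
```

Challenging this system does not mean asking whether the computer executed the rule correctly. It almost certainly did. It means asking who decided that 40,000 was the line, and whether that choice was fair and lawful.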
Machine learning introduces a fundamental change to that picture. Instead of a human writing every rule, the system infers its own rules from data. Feed it millions of criminal sentences, asylum decisions, or loan applications, and it will identify patterns and begin making predictions that no human explicitly programmed. The legal consequence is significant: liability no longer lives in a specific line of code. It lives in the quality and composition of the training data. The lawyer’s duty of care shifts from the programmer’s hand to the provenance of the dataset.
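A toy illustration of the difference, again with invented data: here no human writes the threshold. The system derives it from historical examples, which means any bias in those examples becomes the rule.

```python
# Invented training data for illustration; a real system would use
# thousands of historical records — and inherit their biases.
training = [
    (25_000, "deny"), (32_000, "deny"),
    (48_000, "approve"), (61_000, "approve"),
]

# A crude "learned" rule: the midpoint between the highest denied
# income and the lowest approved income. No human chose this number;
# it is an artifact of the data.
highest_denied = max(x for x, label in training if label == "deny")
lowest_approved = min(x for x, label in training if label == "approve")
threshold = (highest_denied + lowest_approved) / 2

def predict(income: float) -> str:
    return "approve" if income >= threshold else "deny"
```

Swap the training data and the rule changes, even though not a single line of code was edited. That is why the duty of care migrates from the programmer's hand to the provenance of the dataset.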
Both of these concepts lead to the same place: the black box.
A black box is an AI system whose internal decision-making process cannot be explained in human-readable terms. You can see what goes in and what comes out. But the process in between — the millions of mathematical weights and correlations that connect input to output — is opaque. Not always deliberately. Often simply because the complexity exceeds human comprehension.
NIST, in Section 3.5 of its AI Risk Management Framework, gives lawyers the clearest vocabulary for this problem. It separates three concepts that courts and practitioners tend to conflate. Transparency answers what happened — the system ran and produced an output. Explainability answers how — what internal mechanism connected the input to the result. Interpretability answers why it matters — what that result means in the context of the specific decision being made.
The distinction is not theoretical. Only interpretability satisfies the right to a reasoned decision. Proving a system ran is not enough. Auditing its internal weights is not enough. Your client needs to know why they were denied bail, flagged for deportation, or classified as high-risk. That answer requires interpretability — and that is precisely what most black-box systems cannot provide.
The EU AI Act confronts this honestly. Rather than demanding explainability where the technology cannot deliver it, it mandates something the law can actually enforce: traceability. Article 12 requires high-risk AI systems to automatically record logs throughout their operational lifetime — inputs received, outputs produced, databases consulted, human verifiers involved. Think of it as the flight recorder of a commercial aircraft. We may not understand the physics of what happened, but the record allows the judiciary to reconstruct the sequence of events after the fact.
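What such a flight recorder might look like in practice can be sketched in a few lines. The field names below are my own illustration, not a schema mandated by the Act; the principle is simply that every decision appends one tamper-evident, reconstructable record.

```python
import json
import time

def log_decision(log_file, *, inputs, output, model_version, human_reviewer):
    """Append one traceability record, in the spirit of Article 12
    record-keeping. Field names are illustrative, not prescribed."""
    record = {
        "timestamp": time.time(),       # when the decision was made
        "model_version": model_version, # which system produced it
        "inputs": inputs,               # what the system received
        "output": output,               # what it decided
        "human_reviewer": human_reviewer,  # who, if anyone, verified it
    }
    # One JSON object per line: an append-only trail a court can replay.
    log_file.write(json.dumps(record) + "\n")
```

The log explains nothing about why the model decided as it did. But it lets a judge reconstruct what happened, when, on what inputs, and under whose supervision — which is exactly the standard the Act chooses to enforce.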
For the practitioner, those logs are primary evidence. They are what you request in discovery. Article 47 adds a second instrument: the EU Declaration of Conformity, a formal document the provider must sign affirming the system meets the Act’s safety requirements. It functions as an affidavit of compliance. When a system fails, the gap between what was declared and what actually occurred is where your case is built.
The principle underlying both provisions is the same: where explainability is technically impossible, traceability becomes the legal standard. A black box is no longer a shield. Under the AI Act, opacity is a breach of duty.