Chapter 13: The Right to Explanation — Your Primary Weapon Against the Blackbox

Start with the most practical question a lawyer in this field faces. Your client has received a decision — a visa denial, a detention order, a rejected job application, a loan refusal. An algorithm was involved. You know it was involved because someone told you, or because you found evidence of it, or because the decision arrived with no reasoning at all and the pattern is familiar. Your client wants to challenge it. Where do you begin?

You begin with explanation. Not because it is the most elegant legal argument, but because without it you have nothing to work with. You cannot challenge data you cannot see. You cannot expose a proxy variable you do not know the system used. You cannot demonstrate bias in a process you cannot examine. The right to explanation is not a secondary procedural courtesy. It is the access point to everything else.

In the previous chapter we established due process as the constitutional principle that makes algorithmic decisions legally contestable in both systems. This chapter is about the instrument that makes contestation real in practice. Due process tells you that you have a right to challenge. The right to explanation gives you something to challenge with.


The European framework — three layers that work together

The European legal framework for explanation in automated decision-making is built in three layers that interact and reinforce each other. Understanding how they fit together — and where the gaps are — is the first step toward using them effectively.

The first layer is the GDPR. Article 22 gives data subjects the right not to be subject to a decision based solely on automated processing where that decision produces legal effects or similarly significant consequences. Where an exception applies that permits the automated decision, the controller must implement safeguards including the right to obtain human intervention, express a point of view, and contest the decision. Articles 13, 14, and 15 require controllers to provide meaningful information about the logic involved in automated processing and the significance and envisaged consequences of that processing for the data subject.

The phrase “meaningful information about the logic involved” has generated significant academic debate about whether it creates a genuine right to explanation or merely an obligation to provide general transparency. That debate was partially resolved by the Court of Justice of the European Union in Case C-203/22, Dun & Bradstreet Austria, decided in February 2025. The CJEU distinguished between the abstract information required under Articles 13 and 14 — which covers categories of data and general logic — and the more specific, case-by-case explanation required under Article 15 when a data subject makes an access request. Under Article 15, the CJEU confirmed, the controller must provide information about the specific automated decision made about that individual — not just a description of how the system works in general. That confirmation matters for litigation. A controller who provides a generic description of their algorithm in response to a subject access request has not satisfied the GDPR’s Article 15 obligation. The individual is entitled to an explanation of the specific decision taken about them.

The second layer is the AI Act. Article 86 — titled “Right to explanation of individual decision-making” — introduces a targeted explanation right for individuals affected by decisions taken by deployers on the basis of the output of high-risk AI systems listed in Annex III, where those decisions produce legal effects or similarly significantly affect the person in a way they consider adverse to their health, safety, or fundamental rights. The right requires the deployer to provide clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.

Article 86 is both broader and narrower than GDPR Article 22 in ways that matter practically. It is broader because it applies to semi-automated decisions — situations where a human decision-maker relies on an AI output without the process being “solely” automated, which is the threshold that triggers Article 22. Many real-world deployments fall into this category: a human official who signs off on an algorithmic recommendation is formally making the final decision, so the process may fall outside Article 22’s “solely automated” threshold while still falling within Article 86’s scope. It is narrower because, unlike Article 22, Article 86 does not attach accompanying rights to contest the outcome — it provides explanation, not a right to have the decision reconsidered on that basis. Article 86(3) also clarifies that the right applies only to the extent it is not otherwise provided for under Union law — meaning that where GDPR Article 22 already applies, Article 86 is subsidiary.

The third layer is the AI Act’s proactive transparency architecture — Articles 13 and 14 — which we examined in Chapters 5 and 12. These provisions require high-risk systems to be designed so that deployers can understand and correctly interpret their outputs, and so that human overseers can exercise genuine oversight, detect anomalies, and avoid over-reliance. This is the layer that operates before any individual invokes their right to explanation. It requires that explanation be possible — that the system be built in a way that makes its outputs interpretable. A system that is technically compliant with Articles 13 and 14 has already created the documentation and transparency infrastructure that makes Article 86 requests answerable. A system that has not met those requirements is in breach regardless of whether anyone has yet invoked Article 86.
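
What that infrastructure looks like in practice can be quite mundane. Below is a minimal sketch, in Python and with invented field names (nothing in the AI Act prescribes this exact schema), of the kind of per-decision record a deployer might keep so that a later Article 86 request can be answered with case-specific information rather than a generic system description.

```python
# Hypothetical per-decision record kept by a deployer at decision time.
# Field names and values are illustrative assumptions, not a statutory schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    subject_id: str                 # pseudonymous identifier of the affected person
    model_version: str              # which version of the system produced the output
    inputs: dict                    # the personal data actually fed to the model
    output: float                   # the score or classification returned
    top_factors: list               # the factors that most influenced this output
    human_reviewer: str | None      # who reviewed the recommendation, if anyone
    final_decision: str             # the decision the deployer actually took
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# The record created at decision time is what makes a later,
# case-specific explanation possible at all.
record = DecisionRecord(
    subject_id="applicant-0042",
    model_version="credit-risk-2.3.1",
    inputs={"income": 28000, "debt_ratio": 0.61, "months_employed": 7},
    output=0.31,
    top_factors=["debt_ratio", "months_employed"],
    human_reviewer="case-officer-17",
    final_decision="refused",
)
print(record)
```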


What explanation is not

Before using the right to explanation as a litigation tool, a lawyer needs to understand its limits as precisely as its scope.

The right to explanation under European law does not require disclosure of source code. It does not require that proprietary model architecture be revealed. The CJEU in Dun & Bradstreet acknowledged that explanation does not necessarily mean the automated system itself generates the explanation — what matters is that the person affected receives information sufficient to understand the basis for the decision and exercise their rights. Explanation is a legal standard, not a technical one. It is calibrated to what the affected person needs to understand and contest, not to what an engineer would need to replicate the system.

This distinction is useful in litigation. A controller who argues that explanation is impossible because the system is too complex to explain is misunderstanding the legal obligation. The question is not whether the model’s internal weights can be rendered in human-readable form. The question is whether enough information can be provided about the inputs used, the outputs generated, and the factors that drove the outcome to allow the person to understand why the decision went against them. In many cases, that information exists. It is held by the deployer. Article 86 creates a legal obligation to disclose it.
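
To make that concrete, here is a minimal sketch using synthetic data, an invented linear scoring model, and hypothetical feature names. It shows how a deployer could identify which inputs pushed a specific applicant's score down without disclosing source code or model weights; a real deployment would use whatever attribution method fits its actual model.

```python
# Minimal illustration: per-feature contributions for one applicant's decision.
# All data, feature names, and the model itself are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "months_employed"]

# Synthetic training data standing in for the deployer's historical records.
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.2, -2.0, 0.8]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# One applicant's (standardised) inputs and the resulting score.
applicant = np.array([-0.4, 1.3, -0.9])
score = model.predict_proba(applicant.reshape(1, -1))[0, 1]

# Contribution of each feature relative to the average applicant: coefficient
# times the applicant's deviation from the mean. Negative values pulled the score down.
contributions = model.coef_[0] * (applicant - X.mean(axis=0))

for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>16}: {c:+.2f}")
print(f"score: {score:.2f}")
```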


The United States — building explanation rights indirectly

The United States does not have a general federal equivalent to GDPR Article 22 or AI Act Article 86. There is no statutory right to explanation for automated decisions that applies across sectors. But the concept is not absent — it is built indirectly through due process, sector-specific statutes, discovery mechanisms, and litigation strategy.

The most structurally complete US equivalent operates in the credit sector. Under the Equal Credit Opportunity Act, 15 U.S.C. § 1691 et seq., and its implementing Regulation B, 12 C.F.R. § 1002.9, creditors must provide specific reasons for adverse credit decisions. The Consumer Financial Protection Bureau has confirmed that a statement that an algorithmic model produced an adverse score is not a sufficient reason — creditors must identify the principal factors that the system used and that most significantly influenced the outcome. Applied to complex machine learning credit models, this creates an explanation obligation that functions similarly to what GDPR Article 15 requires: not a general description of the system, but an account of what drove this specific decision about this specific applicant.
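
One way the principal-reasons obligation might be operationalised on top of per-decision factor contributions like those sketched above is shown below. The reason texts and the choice to report the two most negative factors are illustrative assumptions, not Regulation B's own wording or a fixed legal threshold.

```python
# Hypothetical mapping from model features to human-readable adverse-action reasons.
# The texts and the "top two negative factors" rule are illustrative only.
REASON_TEXT = {
    "debt_ratio": "Proportion of income already committed to existing debt",
    "months_employed": "Length of time in current employment",
    "income": "Level of verified income",
}


def principal_reasons(contributions: dict[str, float], n: int = 2) -> list[str]:
    """Return human-readable statements for the factors that most reduced the score."""
    negative = [(name, c) for name, c in contributions.items() if c < 0]
    negative.sort(key=lambda t: t[1])  # most negative first
    return [REASON_TEXT.get(name, name) for name, _ in negative[:n]]


print(principal_reasons({"income": -0.5, "debt_ratio": -2.6, "months_employed": -0.7}))
# ['Proportion of income already committed to existing debt',
#  'Length of time in current employment']
```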

New York City’s Local Law 144 provides a more limited transparency mechanism — not a right to explanation of the specific decision made about you, but a right to know that an automated tool was used and to access the published bias audit results for that tool. As we established in Chapter 10, those audit results are publicly available evidence about the system’s known behavior before it was applied to your client. They do not explain the individual decision, but they provide external evidence about whether the system was producing the kind of disparate impacts that may have contributed to it.

In constitutional litigation, the explanation requirement emerges from the second Mathews factor: when an individual cannot understand the basis for a government decision, their ability to contest it is structurally impaired, which means the risk of erroneous deprivation is irreducibly high. A government actor who relies on an algorithmic output they cannot explain — and provides that unexplained output as the basis for detention, deportation, or other liberty-affecting decisions — faces a strong procedural due process argument under Mathews v. Eldridge, 424 U.S. 319 (1976), that additional procedural safeguards are constitutionally required.

In criminal proceedings, Brady v. Maryland, 373 U.S. 83 (1963), provides a route to explanation through disclosure obligations. Information about how an algorithmic tool works, what error rates it produces, and what limitations affect its reliability in the specific context of use is potentially exculpatory material if it undermines the reliability of evidence used against the defendant. Motions seeking disclosure of algorithmic methodology under Brady have produced mixed results in courts — some have ordered disclosure, others have accepted trade secret objections. We will examine those cases in detail in Chapters 29 and 32.


Explanation as litigation strategy

For a lawyer, the right to explanation is a tactical instrument before it is a doctrinal argument. The goal is not explanation for its own sake. The goal is to force open the decision path so that the real legal arguments become possible.

Once you obtain meaningful explanation of an algorithmic decision, several distinct lines of challenge open. You can examine whether the system relied on incorrect or outdated data about your specific client — the rectification argument that connects to GDPR Article 16 and the Privacy Act of 1974 in the American context. You can test whether the human review of the automated recommendation was genuine rather than nominal — the Article 14 argument about whether the security guard was actually making a decision or just opening the gate. You can identify whether the system used proxy variables that correlate with protected characteristics — the bias argument that connects to EU Charter Article 21, Title VII, and the equal protection analysis of Chapter 14. You can assess whether the deployer understood what the system’s outputs meant and used them within their intended scope — a negligence or breach of duty argument available in both systems.
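
The proxy-variable argument, for example, turns into a concrete test once the explanation reveals which inputs the system used. The sketch below uses invented data; in practice the protected-attribute information would come from the client, from discovery, or from published audit results.

```python
# Minimal proxy-variable check: does a model input track a protected characteristic?
# All data here is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Suppose the explanation shows the model used a "postcode risk band" as an input.
protected_group = rng.integers(0, 2, size=1000)           # e.g. membership of a protected class
postcode_risk_band = protected_group * 2.0 + rng.normal(size=1000)

# A high correlation suggests the feature may function as a proxy for the
# protected characteristic, which is the opening for the discrimination argument.
corr = np.corrcoef(protected_group, postcode_risk_band)[0, 1]
print(f"correlation between protected attribute and feature: {corr:.2f}")
```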

None of those arguments is available without explanation. Explanation is not the endpoint. It is the access point.

This is also why the right to explanation connects directly to rectification. You cannot correct an error in an algorithmic decision if you do not know what the algorithm used. Explanation reveals the inputs. Rectification corrects the ones that are wrong. Due process ensures a new decision is made on the corrected basis. The three rights are not separate tools — they are sequential steps in a single process.


The deeper point — explanation and accountability

At its foundation, the right to explanation is about the relationship between power and accountability. Law has always required those who exercise authority to justify their decisions. Judges give reasons. Administrative agencies give reasons. Creditors give reasons. The obligation to reason is what distinguishes legitimate authority from arbitrary power.

The rise of algorithmic decision-making creates pressure on that tradition because complexity can function as a shield against accountability. A system that cannot be explained cannot be reviewed. A decision that cannot be reviewed cannot be corrected. And a decision that cannot be corrected but produces serious consequences for the person it affects is not the exercise of legitimate authority — it is arbitrary power operating behind a technological facade.

The right to explanation is the legal answer to that facade. It is the instrument that prevents complexity from becoming an excuse. And in the hands of a lawyer who understands how to use it — how to make the request, what information to demand, how to connect the explanation to the substantive legal arguments that follow — it is one of the most powerful tools in AI governance practice.


Next: Chapter 14 — Algorithmic discrimination and equal protection. How AI inherits and amplifies historical bias.

