Chapter 3: The Ratio Legis — The Harm That Came Before the Law

Every major technological revolution follows the same legal trajectory. First comes innovation. Then mass adoption. Then harm. Only after the harm becomes undeniable does regulation arrive.

Artificial intelligence followed that pattern with precision. Before the EU AI Act entered into force — and before governance frameworks such as the NIST AI Risk Management Framework were available as a baseline for risk controls — algorithmic systems were already embedded in some of the most sensitive domains of public life: courts, policing, immigration enforcement, hiring, credit scoring. They were deployed under a persuasive narrative: mathematical systems are objective, neutral, and efficient.

In practice, many deployments proved something different. A system can be mathematically consistent and still be legally illegitimate — because it can produce consequential outcomes without transparency, contestability, or accountable human oversight.

Think of it like a car without brakes. The engine works perfectly. The car moves. The problem only becomes visible when it needs to stop.

The cases in this chapter are not speculative. They are documented incidents, investigated through journalism, litigation, and public records. Some ended in court decisions. Others in settlements or policy changes. Together they explain the ratio legis — the reason behind the law. Not what the law says, but why it was written the way it was.


Predictive justice and the illusion of neutrality

The COMPAS controversy has already appeared in this book, and it will appear again. Not because it is the only case that matters — but because it remains the clearest public demonstration of what happens when a predictive model enters criminal sentencing without safeguards designed for opacity.

In 2016, ProPublica published Machine Bias, a data-driven investigation of COMPAS risk scores. The report concluded that the system produced materially different error patterns across racial groups — higher false positive rates for Black defendants who did not go on to reoffend, and higher false negative rates for white defendants who did. Whatever one thinks of the statistical debates that followed, the legal consequence is straightforward: a risk score is not neutral simply because race is not an explicit input. Models can produce disparate outcomes through variables that correlate with protected characteristics or reflect historically unequal enforcement patterns.
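
To make the statistical point concrete, here is a minimal sketch of the error-rate comparison at issue. The counts are invented for illustration, not ProPublica's data: two groups scored by the same model, identical overall accuracy, and a very different distribution of who bears each kind of error.

```python
# Illustrative only: synthetic counts, not COMPAS or ProPublica data.
# Risk labels and reoffense outcomes are invented to show the definitions.

def error_rates(tp, fp, tn, fn):
    """Return (false positive rate, false negative rate) from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # non-reoffenders wrongly labeled high-risk
    fnr = fn / (fn + tp)  # reoffenders wrongly labeled low-risk
    return fpr, fnr

# Hypothetical counts for two groups scored by the same model.
groups = {
    "group_a": {"tp": 300, "fp": 200, "tn": 300, "fn": 100},
    "group_b": {"tp": 200, "fp": 100, "tn": 400, "fn": 200},
}

for name, counts in groups.items():
    accuracy = (counts["tp"] + counts["tn"]) / sum(counts.values())
    fpr, fnr = error_rates(**counts)
    print(f"{name}: accuracy {accuracy:.0%}, "
          f"false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

# Both groups show the same overall accuracy (67%), yet group_a bears twice
# the false positive rate while group_b bears twice the false negative rate.
# Nothing in the inputs names a group; the disparity lives in the errors.
```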

The analogy that helps here is a mirror in a funhouse. The mirror does not distort reality on purpose. It simply reflects the shape it was given. If the shape was already distorted — by decades of racially unequal policing and prosecution — the mirror will faithfully reproduce that distortion and present it as an accurate image. COMPAS did not invent racial disparity in the criminal justice system. It automated it, accelerated it, and gave it the authority of a number.

This is where Loomis becomes more than a criminal case. It becomes a governance template — and a warning. Because Loomis did not answer the most important question: if evidence influencing a sentence is produced by a system that generates disparate error rates, and that system is protected as a trade secret, how does a defendant challenge the result in any meaningful way?

Modern governance frameworks are built as a direct response to that gap. Under the EU AI Act, high-risk systems are subject to obligations aimed precisely at preventing problems from being discovered only after deployment. Article 9 requires providers to identify and mitigate foreseeable risks to fundamental rights across the system lifecycle. Article 10 requires data governance practices intended to reduce errors and address bias risks in training, validation, and testing data — the funhouse mirror problem addressed at the source. Article 12 mandates logging to enable traceability. Articles 13 and 14 require transparency and human oversight designed to allow correct interpretation of outputs before they become sentences.

NIST reaches the same conclusion through a different instrument: Section 3.7 of its AI Risk Management Framework treats fairness and harmful bias as governance risks that must be mapped, measured, and managed — not explained away after harm occurs.

The point is not that regulation guarantees fairness. The point is that the legal system learned, the hard way, that fairness cannot be outsourced to the mathematics.


Facial recognition and the erosion of liberty

While predictive sentencing tools were raising concerns in courtrooms, facial recognition technology was expanding quietly through law enforcement. In theory, it increases investigative efficiency. In practice, the documented failures share a recurring pattern: a probabilistic match is treated as identification, and the system’s uncertainty is converted into state power.

Think of facial recognition as a witness who has seen millions of faces but cannot be cross-examined, cannot be asked to explain their reasoning, and cannot be held accountable for being wrong. When that witness points at someone and says “that’s the person,” the courtroom treats it as evidence. The problem is that this witness has a documented tendency to be wrong more often when the face belongs to a darker-skinned person.

In January 2020, Robert Williams was arrested in Detroit after police relied on a facial recognition match generated from a poor-quality surveillance still. He was handcuffed in his driveway in front of his wife and two young daughters — ages two and five. He was detained for approximately thirty hours before police acknowledged the identification was wrong. The system had returned him as the ninth-best match. Detectives did not investigate his whereabouts before proceeding to arrest.

Williams sued the Detroit Police Department. In 2024, the case settled for $300,000 and was accompanied by policy changes: facial recognition results alone cannot be used as the basis for an arrest warrant, and police cannot proceed directly from a facial recognition result to a photo lineup. The settlement also required an audit of every case since 2017 in which facial recognition had been used to obtain an arrest warrant.

Williams was not the last. Detroit police subsequently wrongfully arrested Porcha Woodruff — eight months pregnant — in a carjacking case based on a facial recognition hit. Across the United States, at least seven documented cases of wrongful arrest following facial recognition matches have been publicly recorded.

The legal significance of these cases is not limited to individual cities. They illustrate the core governance problem: facial recognition systems produce probabilistic outputs that are vulnerable to dataset limitations and differential error rates — and when those outputs are treated as dispositive, they can generate wrongful deprivation of liberty at scale.

The EU AI Act classifies real-time remote biometric identification by law enforcement in publicly accessible spaces as a prohibited practice under Article 5(1)(h), subject to narrow exceptions and the conditions set out in Article 5(2). It also prohibits untargeted scraping practices used to build facial image databases under Article 5(1)(e). For biometric systems that fall outside the prohibition but remain high-risk, Article 13 requires transparency and Article 14 mandates human oversight capable of detecting errors before they become arrests. The regulatory judgment is not theoretical. It is a reaction to documented reality: the harm to liberty and privacy can outweigh the marginal gains in investigative efficiency.


Algorithmic detention and immigration enforcement

Algorithmic decision-making also entered immigration enforcement — with consequences that received less public attention but raise equally serious legal questions.

In the United States, ICE adopted tools designed to support custody and bond determinations. The stated rationale was standardization and risk-informed decision-making. Investigative reporting and litigation-related disclosures later suggested that the system’s configuration pushed recommendations toward detention outcomes at extremely high rates — effectively converting a discretionary administrative process into a mechanized presumption of detention.

The analogy is a scale that has been pre-loaded on one side. The scale still moves. The process still looks like weighing. But the outcome was determined before the measurement began.

If that is the function of the system, the legal problem is not merely opacity. It is institutional design. An algorithmic interface can be used to mask policy choices as technical outputs and reduce accountability for individual decisions affecting liberty. For the affected individual, the harm is intensified by the procedural asymmetry: a person is detained based on a mechanism they cannot inspect, contest, or meaningfully audit. Under the Fifth Amendment’s due process clause, federal courts have held that individuals subject to government decisions affecting liberty must receive meaningful notice and an opportunity to be heard — a guarantee that is effectively nullified when the basis for detention is an algorithm whose logic cannot be examined.

This is precisely why the EU AI Act treats such systems as high-risk and requires human oversight that is not symbolic. Article 14 is drafted around operational capabilities: the overseer must be able to interpret outputs, detect anomalies, and intervene. A system designed to produce predetermined results is the opposite of oversight-compatible design.


iBorderCtrl — when experimentation meets fundamental rights

The iBorderCtrl project attempted to detect deception at EU border crossings through analysis of travelers’ facial micro-expressions. Funded under the EU’s Horizon 2020 research program and piloted in Hungary, Latvia, and Greece between 2018 and 2019, the system asked travelers to answer questions while a camera recorded subtle facial movements. An algorithm would then estimate whether the individual was being truthful.

The analogy that best captures the legal problem is a lie detector test administered at passport control. Courts have excluded polygraph evidence for decades because the science is contested and the consequences of error are too serious. iBorderCtrl attempted to automate the same discredited premise — this time at scale, without consent, and with direct consequences for a traveler’s ability to enter a country.

The scientific foundation was contested from the outset. Reliable deception detection through micro-expression analysis is not supported by scientific consensus. A substantial body of psychological research questions whether universal micro-expression indicators of deception exist and whether automated systems can detect them with any meaningful accuracy. The project’s own reported accuracy rate of approximately 76 percent means roughly one in four assessments would be wrong, whether by flagging a truthful traveler or clearing a deceptive one — a misclassification rate that would be unacceptable in any legal context where the output triggers state scrutiny.
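
As a back-of-envelope illustration of what that error rate means at border scale, the snippet below applies the reported figure to a hypothetical crossing volume; the volume is an assumption chosen purely for illustration, not a number from the project.

```python
# Back-of-envelope arithmetic only; the crossing volume is a hypothetical assumption.
reported_accuracy = 0.76            # approximate accuracy reported for the iBorderCtrl pilot
error_rate = 1 - reported_accuracy  # share of assessments expected to be wrong

annual_crossings = 1_000_000        # hypothetical volume for one busy border crossing point

misclassified = annual_crossings * error_rate
print(f"Expected wrong assessments: {misclassified:,.0f} out of {annual_crossings:,} crossings")
# Roughly 240,000 wrong assessments per million crossings, split between
# truthful travelers flagged as deceptive and deceptive travelers cleared.
```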

Under the EU AI Act as enacted, border control systems used to assess the reliability of evidence or credibility fall within the high-risk category defined in Annex III, point 7. That triggers the full compliance architecture: Articles 9 through 15 — risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy, robustness, and cybersecurity. Article 13(1) requires that the system be designed to enable deployers to interpret its outputs correctly — a requirement that a binary truthful/deceptive classification would struggle to satisfy. Under EU Charter Article 47 — the right to effective judicial remedy — and Article 41 — the right to good administration — the absence of meaningful recourse for a traveler incorrectly flagged by the system raises fundamental rights concerns that the project never adequately addressed.

The deeper lesson is institutional: in sensitive domains, experimentation itself becomes a fundamental rights issue when systems are deployed before accountability structures exist.


Why these cases matter for regulation

These cases share a common architecture. A system is deployed to improve efficiency. Oversight is minimal or absent. Opacity is treated as acceptable. Harm occurs. Only then do institutions react.

The pattern is not unique to AI. It mirrors the history of pharmaceutical regulation after thalidomide, of financial regulation after the 2008 crisis, of aviation safety after documented accidents. Regulation does not anticipate harm in a vacuum. It responds to harm that has already been paid for — by people who had no choice but to be the first victims.

The EU AI Act’s prohibitions, high-risk obligations, and governance requirements did not appear as pre-emptive theory. They are responses to a documented pattern: real people were harmed by systems that operated without the procedural guarantees the rule of law requires. GDPR Article 22 existed before the AI Act but primarily empowers individuals after the fact. The AI Act changes the logic: it imposes affirmative obligations on providers and deployers before harm occurs, converting rights-based litigation into compliance-based prevention.

That is the ratio legis. And it matters for how you use these laws in practice. Every obligation in the AI Act is traceable to a category of harm the legislator was trying to prevent from recurring. When you invoke Article 10 in a bias case, or Article 14 in a wrongful detention case, or Article 5 in a facial recognition case, you are not applying abstract regulation. You are applying the legal system’s response to something that already happened — and that the law decided cannot be allowed to happen again.

