Algorithmic Bias
-

For nearly two years, companies across Europe and beyond treated 2 August 2026 as the make-or-break deadline for the EU AI Act’s high-risk obligations. That assumption is no longer stable. In the early hours of 7 May 2026, after a failed trilogue on 28 April, European Parliament and Council negotiators reached a provisional…

---

The preceding chapters have moved through the criminal justice and immigration contexts, applying two parallel regulatory frameworks — the EU AI Act and the US constitutional and statutory structure — to each system encountered. This chapter steps back to map the architecture of those two frameworks directly against each other. The comparison is not academic:…

---

This chapter is a synthesis. Chapters 25 through 31 have analyzed each algorithmic system in depth — the legal frameworks that govern it, the constitutional arguments it generates, and the specific deficiencies that make its outputs contestable. This chapter assembles those tools into a practical sequence for the lawyer who walks into court knowing that…

---

The chapters in this module have documented how algorithmic systems enter criminal proceedings: COMPAS at sentencing, the PSA at bail, Clearview AI at identification, predictive policing in patrol, mass surveillance in the investigative data stream, and algorithmic outputs at the evidentiary threshold. Those systems share one feature: all of them generate outputs…

---

The previous chapter examined how algorithmic outputs enter the courtroom and what legal standards govern their admissibility. This chapter steps back to the earlier stage — before the courtroom, before the investigation, before any identified suspect — to examine the legal architecture of mass surveillance: the collection and automated analysis of population-scale data streams in…

---

Every system examined in the previous four chapters — COMPAS at sentencing, the PSA at bail, Clearview AI at identification, predictive policing in patrol — eventually converges on the same procedural threshold. Before algorithmic output can influence what a jury decides, it must survive the courtroom. That moment is the procedural bottleneck of algorithmic criminal…

---

The previous three chapters examined algorithms that operate at defined procedural moments: COMPAS at sentencing, the PSA at bail, Clearview AI at the identification stage. Predictive policing systems operate earlier still — not at a point where a crime has been committed and a suspect identified, but before any of that, at the stage where…

---

The previous chapter examined how the PSA can displace individualized judicial judgment at the bail stage — a tool that is transparent by design but dangerous in practice because of automation bias. Clearview AI presents a structurally different threat. The PSA operates within the formal legal proceeding. Clearview operates before it — in the investigative…

---

The previous chapter examined COMPAS as the defining case of proprietary opacity in criminal justice AI: a tool whose internal methodology was protected as a trade secret, leaving defendants to challenge a score they could not examine. The Public Safety Assessment presents a different legal problem — one that is in some ways more revealing…

---

COMPAS has appeared in this book before — introduced in Chapter 2 as the case that made the black-box problem impossible for courts to ignore, and examined in Chapter 14 as the central exhibit in the algorithmic discrimination debate. This chapter does not repeat that foundation. It builds on it, moving from the general to…