  • Chapter 31: Rectification in Criminal Proceedings — Correcting Algorithmic Errors Before They Become Convictions

    The chapters in this module have documented how algorithmic systems enter criminal proceedings: COMPAS at sentencing, the PSA at bail, Clearview AI at identification, predictive policing in patrol, mass surveillance in the investigative data stream, and algorithmic outputs at the evidentiary threshold. Those systems have one feature in common: they all generate outputs…

  • Chapter 30: Mass Surveillance — From Telephone Tapping to Algorithmic Monitoring

    The previous chapter examined how algorithmic outputs enter the courtroom and what legal standards govern their admissibility. This chapter steps back to the earlier stage — before the courtroom, before the investigation, before any identified suspect — to examine the legal architecture of mass surveillance: the collection and automated analysis of population-scale data streams in…

  • Chapter 29: AI Evidence in Court — Admissible or Not?

    Every system examined in the previous four chapters — COMPAS at sentencing, the PSA at bail, Clearview AI at identification, predictive policing in patrol — eventually converges on the same procedural threshold. Before algorithmic output can influence what a jury decides, it must survive the courtroom. That moment is the procedural bottleneck of algorithmic criminal…

  • Chapter 28: Predictive Policing — Arresting People Before They Act

    The previous three chapters examined algorithms that operate at defined procedural moments: COMPAS at sentencing, the PSA at bail, Clearview AI at the identification stage. Predictive policing systems operate earlier still — not at a point where a crime has been committed and a suspect identified, but before any of that, at the stage where…

  • Chapter 27: Clearview AI — When Facial Recognition Points at the Wrong Person

    The previous chapter examined how the PSA can displace individualized judicial judgment at the bail stage — a tool that is transparent by design but dangerous in practice because of automation bias. Clearview AI presents a structurally different threat. The PSA operates within the formal legal proceeding. Clearview operates before it — in the investigative…

  • Chapter 26: PSA — The Algorithm That Shapes Bail

    The previous chapter examined COMPAS as the defining case of proprietary opacity in criminal justice AI: a tool whose internal methodology was protected as a trade secret, leaving defendants to challenge a score they could not examine. The Public Safety Assessment presents a different legal problem — one that is in some ways more revealing…

  • Chapter 25: COMPAS — The Algorithm That Sentences

    COMPAS has appeared in this book before — introduced in Chapter 2 as the case that made the black-box problem impossible for courts to ignore, and examined in Chapter 14 as the central exhibit in the algorithmic discrimination debate. This chapter does not repeat that foundation. It builds on it, moving from the general to…

  • Chapter 24: The Legal Framework for AI in Criminal Justice

    The immigration modules showed how automated systems can shape a case before a judge fully sees it. In criminal justice, the architecture of harm is the same but the stakes escalate. A visa denial can be reconsidered. A detention recommendation can be revisited at a bond hearing. But when an algorithm enters bail, sentencing, parole,…

  • Chapter 23: Challenging an Algorithmic Immigration Decision — The Complete Framework

    The previous chapters mapped the systems: data fusion, detention classification, supervision scoring, text analytics, biometric matching. Each operates differently. Each creates a different evidentiary problem. But the legal challenge in each case follows the same analytical structure, because the legal defects — opacity, inaccuracy, absence of meaningful review, failure of individualized judgment — are the…

  • Chapter 22: Rectification Rights in Immigration — The Practical Framework

    The previous chapters examined the systems that shape immigration outcomes before a judge ever sees the case: data-fusion platforms, detention classifiers, supervision scores, text-analytics tools, and biometric matching systems. Different architectures, same structural vulnerability. In automated immigration systems, the database often becomes the first version of the factual record. If the data is wrong, every…
