The preceding chapters have moved through the criminal justice and immigration contexts, applying two parallel regulatory frameworks — the EU AI Act and the US constitutional and statutory structure — to each system encountered. This chapter steps back to map the architecture of those two frameworks directly against each other. The comparison is not academic: for a lawyer whose client operates transnationally, or whose case involves a system built in one jurisdiction and deployed in another, the interaction between the two systems is a live compliance and litigation question.
The central thesis of this chapter is that the EU and US frameworks differ not primarily in what they regulate but in when and how they regulate it. Understanding that structural difference is the prerequisite for everything that follows in this module.
I. The philosophical divide: ex ante versus ex post
EU AI Act, Regulation (EU) 2024/1689, Article 5 and Annex III
U.S. Constitution, Amendments IV, V, VI, and XIV
The European Union has adopted a prevention-first model. The EU AI Act classifies AI systems by risk category before they enter the market, prohibits the highest-risk categories outright, and requires compliance with detailed technical and governance obligations as a precondition for deployment of high-risk systems. The relevant analogy from product regulation is the pharmaceutical approval model: a system that has not demonstrated compliance cannot be deployed, regardless of whether it has yet caused harm.
The United States has no equivalent pre-market approval structure for AI systems. Regulation operates primarily through existing sector-specific statutes — Title VII of the Civil Rights Act, the Fair Housing Act, the Equal Credit Opportunity Act, the Americans with Disabilities Act — and constitutional frameworks that are invoked through litigation after harm has occurred. The Federal Trade Commission’s authority under Section 5 of the FTC Act, 15 U.S.C. § 45, to prohibit unfair or deceptive practices provides a general-purpose regulatory hook, but it is enforced case-by-case through agency action rather than through pre-deployment classification. The NIST AI Risk Management Framework, published in January 2023, provides voluntary technical guidance widely used by industry but carries no binding legal authority.
The practical consequence is not that one system protects better than the other in all cases. It is that they distribute risk and burden differently. Under the EU model, the provider and deployer of a high-risk AI system bear the compliance burden before the system is used, and the regulatory authority imposes liability for non-compliance without requiring proof of individual harm. Under the US model, the burden falls on the affected individual or class to demonstrate harm, navigate the relevant statutory framework, and litigate or settle. The EU system is structurally more burdensome for developers; the US system is structurally more costly for those harmed by systems that were deployed without adequate oversight.
For the criminal justice and immigration contexts this book addresses, that structural difference produces the pattern that has appeared throughout the preceding chapters: in the EU, a system like COMPAS or the PSA or Clearview AI faces pre-deployment regulatory classification and compliance obligations; in the US, its legal status is determined through constitutional litigation in the cases where it causes identifiable harm.
II. What the EU prohibits that the US generally permits
EU AI Act, Regulation (EU) 2024/1689, Article 5(1)(a)-(h)
Article 5(1) of Regulation (EU) 2024/1689 establishes eight categories of AI practice that are prohibited outright, in points (a) through (h). Each prohibition was analyzed in the specific system chapters; this section maps the prohibitions most relevant to the criminal justice and immigration contexts to their US regulatory equivalents for comparative purposes.
Social scoring. Article 5(1)(c) prohibits AI systems used by public authorities to evaluate or classify individuals based on their social behavior or characteristics in ways that produce detrimental or disproportionate treatment across unrelated social contexts. No equivalent federal prohibition exists in the United States. Related concepts appear in the Fair Credit Reporting Act, 15 U.S.C. § 1681 et seq., which regulates the use of consumer reports in credit, employment, and housing decisions, and in state-level consumer protection frameworks, but the categorical prohibition that Article 5(1)(c) imposes on government social scoring has no US parallel.
Emotion recognition in workplaces and educational institutions. Article 5(1)(f) prohibits AI systems that infer emotions of natural persons in the context of workplaces or educational institutions, with limited exceptions for medical or safety purposes. These systems continue to be marketed and deployed in the United States without a federal categorical prohibition. The EEOC has issued technical assistance indicating that AI-driven hiring and assessment tools remain subject to Title VII's disparate impact framework, but that guidance does not create the categorical prohibition that Article 5(1)(f) establishes.
Untargeted facial image scraping. Article 5(1)(e), which became applicable on February 2, 2025, prohibits AI systems used for the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases. As established in Chapter 27, this is a direct legal repudiation of the Clearview AI model. The United States has no federal equivalent. The Illinois Biometric Information Privacy Act, 740 ILCS 14, provides the strongest state-level constraint, but it is limited to Illinois and its law enforcement application is time-limited under the 2022 ACLU settlement.
Predictive criminal profiling. Article 5(1)(d) prohibits AI systems used to assess the risk of a natural person committing a criminal offence based solely on profiling or personality traits, unless the system supports a human assessment already grounded in objective and verifiable facts directly linked to criminal activity. As established in Chapters 25 through 28, predictive policing, recidivism risk scoring, and bail risk tools continue to operate at the federal and state levels in the United States without a categorical prohibition, with legal challenges limited to constitutional litigation in specific cases.
Real-time remote biometric identification. Article 5(1)(h) prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, creating a presumptive prohibition on real-time facial recognition by police. The exceptions are narrowly defined (targeted searches for victims of specified crimes, prevention of specific and imminent threats, and location of suspects of serious offences) and are subject to prior judicial or independent administrative authorization under Article 5(2) and (3). US law imposes no equivalent federal constraint. Law enforcement use of real-time facial recognition is governed by departmental policy, some state statutes and municipal ordinances, and constitutional doctrine that remains unsettled.
III. Where the two systems converge
NIST AI Risk Management Framework (January 2023)
EU AI Act, Regulation (EU) 2024/1689, Articles 13 and 50
Despite the structural differences, several areas of genuine convergence have emerged since 2022.
Both systems increasingly address transparency for synthetic media. The EU AI Act's Article 50 imposes transparency obligations on AI-generated content: deepfakes must be disclosed as artificially generated or manipulated, and AI-generated text published to inform the public on matters of public interest must likewise be disclosed. In the United States, federal legislation on synthetic media remains limited to research-oriented measures such as the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act, while disclosure rules for election-related deepfakes have emerged at the state level; the direction of regulatory attention, however, is parallel.
Both systems address algorithmic discrimination, though through different mechanisms. The EU AI Act’s Annex III classification of employment, credit, education, and law enforcement AI systems as high-risk, combined with the data governance and bias-testing obligations of Article 10, produces a pre-deployment discrimination prevention framework. The US framework produces a post-harm litigation framework through Title VII, the FHA, the ECOA, and FTC enforcement, but the substantive prohibition — that algorithmic systems must not produce discriminatory outcomes — is recognized in both.
Both systems emphasize technical robustness and accountability, again through different instruments. The NIST AI Risk Management Framework's four functions — Govern, Map, Measure, Manage — overlap substantially with the AI Act's Article 9 risk management, Article 10 data governance, Article 11 technical documentation, and Article 12 logging requirements. The NIST framework is voluntary; the AI Act obligations are binding. But the technical standards they reference are increasingly shared, particularly through ISO/IEC 42001:2023, which this book examined in Chapter 5 as the common international management system framework.
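The overlap can be made concrete as a rough crosswalk. The mapping below is a schematic sketch for orientation only; the pairings are interpretive, and neither NIST nor the European Commission has published an official correspondence of this kind:

```python
# Schematic crosswalk between the four NIST AI RMF functions and the EU AI
# Act obligations discussed above. The pairings are interpretive and for
# orientation only; neither NIST nor the European Commission has published
# an official mapping of this kind.

RMF_TO_AI_ACT = {
    "Govern": ["Art. 9 risk management system (as an organizational process)"],
    "Map": ["Art. 9 risk identification", "Annex III high-risk classification"],
    "Measure": ["Art. 10 data governance and bias testing", "Art. 12 logging"],
    "Manage": ["Art. 9 risk mitigation measures", "Art. 11 technical documentation"],
}

for rmf_function, obligations in RMF_TO_AI_ACT.items():
    print(f"{rmf_function:>7} -> {'; '.join(obligations)}")
```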
IV. The enforcement gap: who actually imposes liability
EU AI Act, Regulation (EU) 2024/1689, Articles 70-99
FTC Act, 15 U.S.C. § 45
The enforcement architecture is where the practical difference between the two systems is most significant for lawyers advising deploying organizations.
Under the EU AI Act, Article 99 establishes a three-tier penalty structure. Violations of the Article 5 prohibitions, the highest-risk categories, carry administrative fines of up to €35 million or 7 percent of total worldwide annual turnover for the preceding financial year, whichever is higher. Violations of obligations applicable to high-risk system providers, deployers, and notified bodies carry fines of up to €15 million or 3 percent of worldwide annual turnover. Supplying incorrect, incomplete, or misleading information to competent authorities carries fines of up to €7.5 million or 1 percent of worldwide annual turnover. For SMEs and startups, each tier is capped at whichever of the two figures is lower rather than higher. Enforcement is conducted by national market surveillance authorities designated by each member state, coordinated by the European AI Office established at the Commission level. Critically, the compliance obligations for high-risk systems (Articles 9 through 15, covering risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity) do not become enforceable until August 2, 2026. Organizations deploying high-risk systems in the EU have a compliance window that closes in less than five months from the date this chapter was written.
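Because each tier pairs a fixed euro amount with a turnover percentage, and because the SME rule inverts the comparison, the arithmetic is easy to misstate in advice memos. A minimal sketch of the Article 99 logic follows; the tier values are taken from the Regulation, but the tier labels and the helper function are illustrative constructs, not official guidance:

```python
# Minimal sketch of the Article 99 fine arithmetic. The tier values are
# taken from Regulation (EU) 2024/1689, Article 99; the tier labels and
# the function itself are illustrative constructs, not official guidance.

TIERS = {
    "article_5_prohibition": (35_000_000, 0.07),  # EUR cap, share of worldwide turnover
    "high_risk_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum administrative fine for a violation tier.

    For most undertakings the cap is the HIGHER of the fixed amount and the
    turnover percentage; for SMEs and startups, Article 99(6) inverts the
    comparison and the cap is the LOWER of the two.
    """
    fixed_cap, turnover_share = TIERS[violation]
    turnover_cap = turnover_share * worldwide_turnover_eur
    if is_sme:
        return min(fixed_cap, turnover_cap)
    return max(fixed_cap, turnover_cap)

# An Article 5 violation by a firm with EUR 1 billion in worldwide turnover:
# 7% of turnover (EUR 70m) exceeds the EUR 35m fixed amount, so it controls.
print(max_fine("article_5_prohibition", 1_000_000_000))               # 70000000.0
print(max_fine("article_5_prohibition", 1_000_000_000, is_sme=True))  # 35000000.0
```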
In the United States, the enforcement architecture is fragmented by agency and by sector. The FTC has authority under Section 5 of the FTC Act to bring enforcement actions against unfair or deceptive AI practices, and it has used that authority against deceptive AI marketing claims and unlawfully deployed automated systems, but the remedies imposed through consent orders are typically behavioral (use restrictions, deletion of data and models) rather than fines structured as fixed multiples of turnover. The Department of Justice and state attorneys general can bring civil rights actions under existing anti-discrimination statutes. Private litigation through class actions under Title VII, the FHA, BIPA, and state privacy laws represents the most significant financial exposure for many US-deployed AI systems. The damages exposure from private class action litigation can rival EU regulatory fines for systems deployed at scale — the Clearview AI BIPA class action resolved with a 23 percent equity stake in the company rather than a cash settlement precisely because cash exposure would have been catastrophic — but the litigation timeline and outcome uncertainty are substantially greater.
V. The extraterritorial dimension
EU AI Act, Regulation (EU) 2024/1689, Article 2
GDPR, Regulation (EU) 2016/679, Article 3
The EU AI Act follows the GDPR’s extraterritorial model. Under Article 2, the regulation applies to providers placing AI systems on the EU market or putting them into service in the EU, regardless of where the provider is established, and to deployers located in the EU. It also applies to providers and deployers located outside the EU where the output of an AI system is used in the EU. That scope mirrors the GDPR’s Article 3(2) targeting principle, which has been the basis for enforcement against Clearview AI across multiple European jurisdictions despite Clearview’s argument that it has no EU establishment.
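For screening purposes, those triggers reduce to a short checklist. The sketch below is a deliberate simplification with hypothetical field and function names; it models only the three territorial triggers just described and ignores the Regulation's exclusions (military and national security uses, pre-market research and testing, purely personal non-professional use), which require separate analysis:

```python
from dataclasses import dataclass

@dataclass
class AISystemFacts:
    """Hypothetical fact pattern for a first-pass Article 2 screen."""
    placed_on_eu_market: bool      # provider places system on the EU market
    deployer_located_in_eu: bool   # deployer is located in the EU
    output_used_in_eu: bool        # system output is used in the EU

def ai_act_prima_facie_applies(facts: AISystemFacts) -> bool:
    """Rough screen for the territorial triggers of Article 2(1),
    Regulation (EU) 2024/1689.

    A True result means at least one trigger is met regardless of where
    the provider is established. It does NOT model the Regulation's
    exclusions (military and national security uses, pre-market research
    and testing, purely personal use), which require separate analysis.
    """
    return (
        facts.placed_on_eu_market
        or facts.deployer_located_in_eu
        or facts.output_used_in_eu
    )

# A US-built, US-operated system whose output is relied on in the EU:
facts = AISystemFacts(placed_on_eu_market=False,
                      deployer_located_in_eu=False,
                      output_used_in_eu=True)
print(ai_act_prima_facie_applies(facts))  # True
```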
US law has no equivalent extraterritorial framework for AI regulation. Constitutional protections generally apply to persons within US jurisdiction or to US persons abroad in specific contexts. Statutory frameworks like BIPA apply within the relevant state’s jurisdiction. The result is that a system built and operated from the United States can be subject to EU AI Act obligations if it processes data of EU residents or produces outputs used in the EU — an exposure that many US-based AI developers have underestimated.
Next: Chapter 34 — The GPAI problem: regulating general-purpose AI models across both frameworks.
