Chapter 16: The Legal Framework for AI in Immigration

In the previous chapters we built the general architecture: transparency, explanation, equality, due process, and the limits imposed by national security. Immigration is where all of those threads tighten at once.

This is not accidental.

Immigration law is one of the clearest examples of how states use data, risk scoring, identity verification, and surveillance to sort human beings before any courtroom appears. At the border, in visa screening, in asylum systems, and in detention decisions, the algorithm often arrives before the lawyer. That is why immigration is not a side chapter in AI governance. It is one of its most revealing laboratories.

The legal task in these cases is not simply to say that automated systems are unfair. That is too vague. The real task is to identify which legal framework applies, which right is being impaired, and which procedural hook allows the decision to be challenged. In both Europe and the United States, immigration-related AI sits at the intersection of administrative power, fundamental rights, and exceptional state discretion. The details matter.


I. The European Union: immigration AI is expressly high-risk

EU AI Act, Regulation (EU) 2024/1689, Annex III and Article 26

The AI Act does not treat immigration systems as peripheral. It places them near the center of the high-risk regime.

Annex III of the Regulation lists the categories of AI systems that are to be treated as high-risk. It expressly identifies AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the field of migration, asylum and border control management — for purposes including risk assessment applied to persons arriving at borders, examination of applications for asylum, visas and residence permits, and the detection, recognition, or identification of persons in the context of border management. The same Annex adds that such systems must not be used in a way that circumvents international obligations under refugee law, including the 1951 Refugee Convention.

That classification matters because once a system falls into the high-risk category, the AI Act’s full compliance architecture becomes relevant. Providers of high-risk AI systems must establish a quality management system under Article 17. Deployers must comply with obligations linked to use, oversight, and documentation, and — where applicable — carry out a fundamental-rights impact assessment before deployment. Article 13 requires high-risk systems to be sufficiently transparent for deployers to interpret outputs and use them appropriately. Article 14 requires effective human oversight, meaning a person capable of understanding and overseeing the system’s behavior, not simply a rubber-stamp reviewer whose function is nominal.

For immigration lawyers, this means something concrete: when an AI tool is used in visa processing, asylum assessment, or border control, the system is not operating in a legal void. It is operating inside a framework that recognizes in advance that these contexts are dangerous enough to justify heightened controls.


II. Fundamental-rights impact assessment and the public-sector problem

EU AI Act, Regulation (EU) 2024/1689, Article 27

One of the most important provisions of the AI Act for immigration practice is the fundamental-rights impact assessment mechanism.

Article 27 requires deployers of certain high-risk AI systems — especially public bodies, entities providing public services, and deployers in sensitive areas including migration and border control — to carry out a fundamental-rights impact assessment before putting the system into use. The assessment must identify the specific risks to the rights of individuals or groups likely to be affected and must identify measures to address those risks. It is to be completed before deployment and updated where relevant factors change.

This matters because immigration administration is exactly the kind of environment where rights are vulnerable to being reframed as operational variables. A state may describe a tool as merely helping to allocate attention or detect anomalies. But if the result is detention, denial of entry, refusal of protection, or intensified surveillance, then the effect is not technical. It is legal.

The impact-assessment logic is therefore a pre-litigation instrument. It forces the state to admit, at least on paper, that immigration AI can interfere with dignity, equality, privacy, and effective remedy. That admission can later become valuable in court or administrative review — not as a concession of liability, but as the baseline against which the adequacy of the deployed system’s safeguards can be measured.


III. GDPR transparency and the limits of automated immigration decisions

GDPR, Regulation (EU) 2016/679, Articles 13–14 and 22 — Law Enforcement Directive (EU) 2016/680

The GDPR remains central wherever immigration-related systems process personal data outside the narrower law-enforcement framework.

Articles 13 and 14 require controllers to inform data subjects about the processing of their personal data. Where automated decision-making falling within Article 22 is involved, those provisions require meaningful information about the logic involved and the significance and foreseeable consequences of the processing. Article 22 itself provides that the data subject has the right not to be subject to a decision based solely on automated processing — including profiling — that produces legal effects or similarly significant effects on them.

For immigration law, that combination is potentially powerful. A visa refusal, a denial of entry, or a detention decision that materially affects liberty or legal status may well qualify as a decision with legal effects or similarly significant effects. If the process is solely automated, Article 22 is directly engaged and the right not to be subject to that decision applies. If it is not solely automated, the transparency provisions still matter because they help expose the role that automation played and provide the basis for challenging whether meaningful human review actually occurred.

But the European landscape in immigration is complicated by a key structural distinction. Where personal-data processing is carried out by competent authorities for the purposes of prevention, investigation, detection, or prosecution of criminal offences, or the prevention of threats to public security, Directive 2016/680 — the Law Enforcement Directive — applies instead of the GDPR. The individual rights under the Directive are narrower and the exceptions are broader.

That distinction matters because governments often move strategically between the categories of immigration control, public order, and security. Once a border-related system is reframed as operating for public-security or criminal-enforcement purposes, the legal route for transparency and rectification changes. The rights do not necessarily disappear, but the procedural path becomes more difficult — a dynamic we examined in depth in Chapter 15.


IV. The national-security pressure point in Europe

EU AI Act, Article 2(3) — Treaty on European Union, Article 4(2)

The AI Act excludes systems used exclusively for military, defence, or national-security purposes. In immigration practice, the danger is not always that a system is clearly military or intelligence-based. The danger is that ordinary administrative tools used at borders or in migration control may be reclassified as serving national-security objectives.

Once that reclassification occurs, the deployer may argue that the AI Act’s high-risk safeguards do not apply. And as the analysis in Chapter 15 showed, that argument is available because Article 2(3) is a real carve-out, not a theoretical one.

So in European immigration cases, one of the first legal questions is often classificatory: is this system really being used exclusively for national security, or is it fundamentally an administrative migration tool that should remain within the ordinary AI Act framework? That is not a semantic question. It determines whether the person affected can invoke the Act’s transparency, oversight, and impact-assessment structure — or whether they must rely instead on broader Charter and ECHR arguments, whose procedural channels are narrower and slower.


V. The United States: due process is the central hook

U.S. Constitution, Amendment V — Zadvydas v. Davis, 533 U.S. 678 (2001)

In the United States, the immigration-AI framework is more fragmented and more reactive. There is no federal statute that classifies immigration AI as high-risk. Instead, the central constitutional hook remains the Due Process Clause of the Fifth Amendment, which prohibits the federal government from depriving any person of liberty without due process of law.

The Supreme Court has repeatedly confirmed that due process constrains immigration detention and removal processes, even in a field marked by broad sovereign discretion. The key case is Zadvydas v. Davis, 533 U.S. 678 (2001), decided June 28, 2001, by Justice Breyer for a 5-4 majority. The case arose from the indefinite post-removal detention of two resident aliens whose countries of origin refused to accept them. Breyer applied the canon of constitutional avoidance to read an implicit temporal limitation — a reasonable time, presumptively six months — into the post-removal-period detention statute. He did not need to strike down the statute on constitutional grounds directly. Instead, he read it in light of the Constitution’s demands, holding that a statute permitting indefinite civil detention would raise serious constitutional concerns, and that freedom from imprisonment lies at the heart of the liberty protected by the Due Process Clause.

That case matters here because it confirms something fundamental: immigration is not a constitutional vacuum. The government has wide authority over admission and removal, but liberty-depriving decisions still trigger due process protection. When an automated system influences detention, release, screening, or removal without meaningful explanation or individualized review, Zadvydas is one foundation from which to argue that indefinite algorithmic confinement — without adequate procedural safeguards — violates the Fifth Amendment.

The modern doctrinal framework for analyzing what those safeguards must include comes from Mathews v. Eldridge, examined in Chapter 12: the private interest affected, the risk of erroneous deprivation under existing procedures and the value of additional safeguards, and the government’s interest. That test maps directly onto immigration AI. If a system increases the risk of error — because it is opaque, because it uses flawed data, because it overflags certain populations, or because it is rubber-stamped without meaningful human review — then the argument for additional procedural safeguards becomes stronger. Notice, explanation, access to the underlying record, and meaningful human reconsideration are not technical luxuries. Under Mathews, in cases involving liberty, they can be constitutionally required.


VI. The plenary power doctrine is real, but not absolute

Chae Chan Ping v. United States, 130 U.S. 581 (1889) — Administrative Procedure Act, 5 U.S.C. § 706

Any serious discussion of immigration law in the United States has to confront the plenary power doctrine. Chae Chan Ping v. United States, 130 U.S. 581 (1889), decided May 13, 1889, by Justice Field for a unanimous Court, held that the power to exclude foreigners is an incident of national sovereignty entrusted to the political branches, and that congressional judgments on immigration exclusion are conclusive upon the judiciary. That foundational position has been reiterated across more than a century of immigration jurisprudence and continues to operate as a structural constraint on judicial review in immigration matters.

But it would be a mistake to read plenary power as placing algorithmic opacity, mass data processing, or unexplained automated detention tools beyond all constitutional review. Zadvydas itself shows that constitutional avoidance and due process concerns can impose limits even in immigration. The Administrative Procedure Act also remains relevant for many immigration-related agency decisions. Under 5 U.S.C. § 706, reviewing courts must hold unlawful and set aside agency action that is arbitrary, capricious, contrary to constitutional right, or taken without observance of procedure required by law.

That combination is powerful. If an immigration agency relies on an inscrutable automated process, does not disclose the basis of an adverse outcome, or cannot show that the decision was meaningfully reviewable, the challenge is not only constitutional. It can be administrative-law based: the action may be arbitrary, procedurally defective, or unsupported in the record. Plenary power constrains the scope of judicial review. It does not immunize unexplained agency action from the APA’s arbitrary-and-capricious standard.


VII. The transatlantic convergence: different systems, same core problem

At first glance, the EU and the United States seem to regulate immigration AI in completely different ways. Europe uses ex ante classification, transparency requirements, human oversight, and impact assessment for high-risk systems. The United States relies on due process, habeas, administrative review, and piecemeal constitutional litigation. One builds obligations in before the harm; the other responds after it.

But beneath those structural differences, the same legal anxiety is visible in both systems: the state cannot lawfully subject people to life-altering immigration decisions through processes that are opaque, unreviewable, and insulated from correction. In Europe, that anxiety is expressed through the AI Act, GDPR, the Charter, and the ECHR. In the United States, it emerges through due process, the APA, and constitutional resistance to arbitrary detention and unreasoned action.

That is the real framework for AI in immigration. Not a single code. Not a single statute. But a layered set of legal tools that allow lawyers to move from technological output to legal violation — in courts on both sides of the Atlantic — even without a dedicated AI law.


Next: Chapter 17 — AI in criminal sentencing and pretrial detention: bail algorithms, risk scores, and the automation of liberty.

