Chapter 4: GDPR — The Cornerstone of Algorithmic Regulation

In the previous chapter we ended with a mention of GDPR Article 22 — the right not to be subject to solely automated decisions — as a provision that existed before the AI Act but primarily empowered individuals after harm had already occurred. That observation was deliberate: before we can understand what the AI Act adds, we need to understand what was already there. And what was already there is more powerful than most lawyers working in AI governance realize.

When lawyers begin studying AI regulation, they usually start with the AI Act. That is understandable — it is the first European law designed specifically for AI systems. But legally speaking, that is not where the story begins. The real starting point is the General Data Protection Regulation. Long before legislators started speaking about artificial intelligence, the GDPR had already built a framework governing how personal data can be collected, processed, and used to make decisions about people. And because AI systems depend entirely on data, many of them were already regulated under the GDPR years before the AI Act was drafted.

A simple comparison clarifies the relationship between the two laws. The AI Act is the engineering manual of an aircraft — it regulates how the machine must be designed, tested, and deployed safely. The GDPR is the passenger’s bill of rights — it regulates what can be done to the person inside. One law governs the technology. The other protects the individual. Most modern AI systems operate under both frameworks simultaneously, and understanding that interaction is not optional for a lawyer working in this field.


The GDPR as a de facto AI law

Artificial intelligence systems learn from data. When those datasets contain information that identifies — or can identify — a natural person, that information becomes personal data under European law. At that moment, the entire system enters the scope of the GDPR.

This means that algorithms evaluating credit applications, automated hiring filters screening job candidates, fraud detection systems in banking, and risk-scoring tools used in immigration or law enforcement were already subject to strict legal obligations before any AI-specific legislation existed. The AI Act did not replace this framework. It added another regulatory layer on top of it — one focused on the system itself, while the GDPR remained focused on the individual affected by it.

For the practicing lawyer, this layering is operationally significant. A client harmed by an AI system in a hiring decision has two distinct legal frameworks to invoke: the AI Act’s obligations on the provider and deployer, and the GDPR’s individual rights against the data controller. They are not alternatives. They are cumulative. Knowing which rights attach under which framework — and in what sequence to invoke them — is one of the practical skills this book is designed to build.


Article 22 — the right not to be judged solely by a machine

Within the GDPR, the most important provision for algorithmic decision-making is Article 22. Its principle is deceptively simple: a person has the right not to be subject to a decision based solely on automated processing, including profiling, if that decision produces legal effects or similarly significant consequences.

In practice, this means a machine cannot be the sole judge in decisions that significantly affect someone’s life. The provision contains two elements that every lawyer working with algorithmic systems needs to internalize.

The first is the “solely automated” threshold. The prohibition applies when a decision is made without meaningful human involvement. This raises one of the most debated questions in European data protection law: what counts as meaningful? Most data protection authorities have converged on a consistent answer. If an employee receives an algorithmic recommendation and approves it without any genuine critical review — without the ability to question the output, investigate the underlying data, or change the outcome — that does not qualify as meaningful human involvement. The human role becomes purely mechanical. It is the equivalent of a security guard who opens a gate whenever a computer tells him to, without any authority or capacity to refuse. The guard is physically present. The machine is making the decision.

The second element is the “legal or similarly significant effects” threshold. Article 22 does not apply to every automated process. It applies when the decision carries serious consequences — denial of a loan, rejection in an automated hiring process, termination of social benefits, a risk score used in law enforcement or immigration control. In these contexts, the algorithm is no longer a tool. It is a decision-making authority. And that is precisely where the GDPR draws a legal boundary.
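
For readers who think in flowcharts as well as statutes, the two thresholds can be expressed as a simple triage check. The following Python sketch is purely illustrative: the field names, the list of significant effects, and the three-part test for meaningful involvement are assumptions invented for this example, not a statutory or regulatory standard.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    """Facts about one automated decision, as a compliance team might record them."""
    human_reviews_output: bool        # a person looks at the recommendation
    human_can_override: bool          # that person has real authority to change it
    human_sees_underlying_data: bool  # they can investigate, not just approve
    effect: str                       # e.g. "loan_denial", "ad_targeting", ...

# Effects generally treated as "legal or similarly significant" in regulatory
# guidance -- an illustrative, non-exhaustive list, not a statutory one.
SIGNIFICANT_EFFECTS = {
    "loan_denial", "hiring_rejection", "benefit_termination",
    "law_enforcement_risk_score", "immigration_risk_score",
}

def article_22_applies(ctx: DecisionContext) -> bool:
    """Rough triage: does this decision fall within Article 22's prohibition?

    Element 1: is the decision "solely automated"? Mechanical rubber-stamping
    does not count as meaningful human involvement.
    Element 2: does it produce legal or similarly significant effects?
    """
    meaningful_human_involvement = (
        ctx.human_reviews_output
        and ctx.human_can_override
        and ctx.human_sees_underlying_data
    )
    significant_effect = ctx.effect in SIGNIFICANT_EFFECTS
    return (not meaningful_human_involvement) and significant_effect

# The security-guard scenario from the text: a human is present but has no
# authority to refuse, and the decision denies a loan.
rubber_stamp = DecisionContext(
    human_reviews_output=True,
    human_can_override=False,
    human_sees_underlying_data=False,
    effect="loan_denial",
)
print(article_22_applies(rubber_stamp))  # True -- Article 22 is engaged
```

On these invented facts, the guard-at-the-gate scenario is solely automated decision-making with a significant effect, and the prohibition is engaged.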

For lawyers who have read the previous chapters, the connection to Loomis is immediate. A COMPAS risk score influencing a criminal sentence is exactly the kind of decision Article 22 was designed to address. The fact that Loomis arose under US constitutional law, where no equivalent provision exists, is one of the clearest illustrations of the regulatory gap between the two systems — a gap we will examine in detail in the chapters on US law.


The right to explanation — what the GDPR actually provides

One of the most widely discussed concepts in AI regulation is the right to explanation. Interestingly, that exact phrase does not appear in the GDPR. What the regulation provides is a set of transparency obligations that together produce something functionally similar — but with important limits that lawyers need to understand precisely.

Articles 13, 14, and 15 require data controllers to inform individuals that automated decision-making exists, to provide meaningful information about the logic involved, and to explain the significance and envisaged consequences of that processing for the individual. This does not require companies to disclose source code or reveal proprietary methodology. But it does require that the reasoning behind the decision be explained in terms a person can understand and act upon.

The analogy from judicial practice is useful here. A judge is not required to disclose every thought that passed through their mind during deliberation. But the judge must explain the reasoning behind the decision in a form that the losing party can read, challenge, and appeal. The goal is not to expose the entire internal process. The goal is to make the decision reviewable. The same principle applies to algorithmic decisions under the GDPR — with the important qualification that the explainability the regulation requires is the explainability needed to exercise rights, not the full technical transparency that engineers might demand.

This distinction matters in practice. When you request an explanation of an automated decision on behalf of a client, the data controller cannot respond by saying the system is too complex to explain. They must provide enough information to allow your client to understand the basis for the decision and to contest it. How much is enough is context-dependent — but the obligation exists, and non-compliance is enforceable.
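
What such an explanation might look like in practice can be sketched with a deliberately transparent toy model. Everything here is invented for illustration (the feature names, the weights, the threshold); the point is that even a disclosure this simple tells an applicant what drove the decision and what they could contest, without exposing source code or proprietary methodology.

```python
import math

# A deliberately transparent toy credit model -- illustrative weights only,
# not any real scoring methodology.
WEIGHTS = {
    "years_at_current_job": 0.40,
    "debt_to_income_ratio": -3.00,
    "missed_payments_last_year": -0.90,
    "income_thousands": 0.02,
}
BIAS = -0.50

def score(applicant: dict) -> float:
    """Logistic score in [0, 1]; higher means more likely to be approved."""
    z = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict, top_n: int = 3) -> list[str]:
    """Plain-language account of which factors most affected the score:
    the kind of actionable disclosure Articles 13-15 point toward."""
    contributions = sorted(
        ((f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS),
        key=lambda pair: abs(pair[1]),
        reverse=True,
    )
    return [
        f"{feature} {'raised' if impact > 0 else 'lowered'} "
        f"your score (impact {impact:+.2f})"
        for feature, impact in contributions[:top_n]
    ]

applicant = {
    "years_at_current_job": 2,
    "debt_to_income_ratio": 0.55,
    "missed_payments_last_year": 2,
    "income_thousands": 48,
}
print(f"approval probability: {score(applicant):.2f}")
for line in explain(applicant):
    print("-", line)
```

Whether a real controller's disclosure must go this far, or may stop short of it, is context-dependent; the sketch simply shows that explanation and trade secrecy are not mutually exclusive.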


Sensitive data and algorithmic inference

The GDPR provides additional protection for categories of data considered particularly sensitive. Article 9 identifies racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic data, biometric data, health data, and information about a person's sex life or sexual orientation as special categories whose processing is generally prohibited unless specific legal conditions are met.

AI systems introduce a complication that Article 9 was not originally drafted to address but which data protection authorities have increasingly recognized: algorithmic inference. Modern machine learning systems are extremely effective at inferring sensitive information from data that appears, on its surface, to be entirely harmless.

Purchasing patterns can reveal health conditions. Browsing behavior can indicate political preferences. Facial recognition systems, by their nature, process biometric identifiers. The black box — whose opacity we established in Chapter 1 as an architectural feature rather than a correctable defect — makes this inference problem particularly acute, because the sensitive characteristic may never appear explicitly in the system’s inputs or outputs. It emerges from correlations that nobody designed and nobody can fully read.
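
A short simulation makes the point concrete. The sketch below uses entirely synthetic, invented data and the scikit-learn library: the classifier is never shown the sensitive attribute as an input, yet it reconstructs that attribute from superficially harmless shopping features alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic, invented data: a health condition the subjects never disclosed.
has_condition = rng.random(n) < 0.2

# "Harmless" shopping features that individually reveal little but correlate
# with the condition (e.g., pharmacy visits, specialty food purchases).
pharmacy_visits = rng.poisson(lam=np.where(has_condition, 6, 2))
specialty_food = rng.poisson(lam=np.where(has_condition, 4, 1))
grocery_spend = rng.normal(loc=100, scale=20, size=n)  # pure noise

X = np.column_stack([pharmacy_visits, specialty_food, grocery_spend])
X_train, X_test, y_train, y_test = train_test_split(
    X, has_condition, random_state=0)

# The model never receives the sensitive attribute as an input feature --
# it reconstructs it from the proxies alone.
clf = LogisticRegression().fit(X_train, y_train)
print(f"accuracy recovering the undisclosed condition: "
      f"{clf.score(X_test, y_test):.2f}")
```

In a simulation like this, the classifier typically recovers the undisclosed condition far more often than chance would allow.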

Think of it as reconstructing a photograph from scattered puzzle pieces. Each piece individually appears harmless. Combined, they reveal something the subject never consented to disclose. Under the GDPR, if a system ultimately produces or relies on sensitive inferences about a person, the stricter protections of Article 9 may apply regardless of whether sensitive data was explicitly used as an input. This is why many AI systems require a Data Protection Impact Assessment under Article 35 before deployment — a mandatory analysis of the risks the system may create for individuals before those risks materialize.


AI in criminal law and immigration

A common misconception among lawyers entering this field is that data protection rules do not apply in areas such as policing or border control. The reality is more nuanced — and for criminal and immigration practitioners, more important.

Law enforcement authorities in the EU operate under a distinct instrument: Directive (EU) 2016/680, the Law Enforcement Directive, designed specifically for the processing of personal data in the prevention, investigation, and prosecution of criminal offences. The core principles, however, are substantively similar to the GDPR: proportionality, purpose limitation, and safeguards against discrimination. These principles become directly relevant when authorities use algorithmic tools for predictive policing, criminal risk assessment, or migration risk scoring.

When an algorithm classifies someone as high-risk, that classification triggers real consequences — more scrutiny, more questioning, in some cases denial of entry or liberty. For lawyers representing individuals affected by such systems, data protection rights are procedural tools with immediate practical value. A subject access request under Article 15 of the GDPR — or its equivalent under Directive 2016/680 — can produce the data used, the risk factors applied, and the logic behind the classification. That disclosure is often the first step in building a challenge to an unfair algorithmic decision. We will examine how this works in practice in the chapters on immigration AI systems, where tools like ImmigrationOS and the Hurricane Score have generated documented legal disputes.


Rectification — the right that breaks the algorithmic loop

One of the most underused rights in the GDPR is Article 16 — the right to rectification. It allows individuals to demand that inaccurate personal data be corrected without undue delay. On its face, this appears to be a simple administrative safeguard. In the context of AI systems, it becomes something more significant.

AI systems are products of their data. If the underlying data is wrong, the predictions will be wrong — not because the algorithm is malfunctioning, but because it is functioning exactly as designed on a flawed foundation. The GPS analogy is precise here: the most sophisticated navigation system in the world will route you to the wrong destination if the map is incorrect. The technology is not the problem. The data is.

Article 16 allows individuals to correct that map. Once the data used by the algorithm has been corrected, the person can request a new evaluation based on accurate information. A seemingly technical right becomes a mechanism for breaking the loop in which an erroneous data point generates a harmful prediction that generates further adverse decisions that compound the original error. In criminal and immigration contexts, where algorithmic classifications can accumulate and self-reinforce across multiple institutional interactions, this right is not administrative housekeeping. It is a substantive legal remedy.
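
A stylized example shows why the right matters. The scoring rule and weights below are invented for illustration; the mechanism they demonstrate is the one described above, in which a single inaccurate data point drives an adverse classification until it is corrected.

```python
# A hypothetical risk-scoring rule: invented weights, for illustration only.
def risk_score(record: dict) -> float:
    return (0.5 * record["prior_flags"]
            + 0.3 * record["missed_appointments"]
            - 0.1 * record["years_documented_residence"])

HIGH_RISK_THRESHOLD = 1.0

record = {
    "prior_flags": 3,              # erroneous: two flags belong to a namesake
    "missed_appointments": 1,
    "years_documented_residence": 5,
}
print(risk_score(record))  # 1.3 -> high-risk, triggering further scrutiny

# Article 16 rectification: the inaccurate data point is corrected, and the
# individual can then demand a fresh evaluation on the accurate record.
record["prior_flags"] = 1
print(risk_score(record))  # 0.3 -> the corrected map routes to a different outcome
```

The corrected record does not just change one output; it stops the erroneous score from propagating into subsequent decisions.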


What the GDPR established before the AI Act existed

The GDPR was not drafted as an AI law. It was drafted as a data protection law. But in practice, it became the first major European framework capable of constraining algorithmic power — because it recognized, before AI-specific regulation existed, that automated decisions affecting individuals require transparency, human oversight, and the ability to contest and correct.

Three principles established by the GDPR remain foundational even in the age of the AI Act. Individuals should not be judged solely by machines. Automated decisions must be explainable in terms sufficient to allow challenge. And the data feeding those systems must be accurate and correctable. The AI Act built on those principles and extended them — adding obligations on providers and deployers, creating high-risk categories, mandating conformity assessments and logging. But it did not replace the GDPR. It sits on top of it.

If the AI Act is the technical safety manual for intelligent systems, the GDPR remains the constitutional shield protecting the individual against them. For lawyers challenging algorithmic decisions in finance, employment, security, or immigration, Article 22 of the GDPR is still one of the most powerful tools available — precisely because it predates the AI Act, has an established body of regulatory guidance behind it, and creates individual rights that can be enforced today, in any EU jurisdiction, against any data controller using automated decision-making to affect someone’s life.

