From the courtroom to AI governance — a lawyer's journey, structured as a book.


  • Chapter 1: What is AI — The Only Technical Chapter You Need

    What AI is in legal terms — not technical terms

    Everybody is trying to define AI. Software engineers summarize it, with some irreverence, as “statistical pattern matching at scale.” Philosophers debate whether it thinks. Regulators argue about whether it decides. And lawyers — who need a definition they can actually use in court — are still catching up.

    In 2024, the European Union ended the ambiguity, at least for legal purposes. Article 3(1) of the AI Act provides the definition that every lawyer working in this field must know:

    “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

    This is now the legal baseline. Not a technical definition — a legal one. And like every good legal definition, each word carries weight.


    Breaking down the definition — phrase by phrase

    “Machine-based system”

    Legal translation: This establishes the subject of liability.

    This phrase does something deceptively simple — it tells you what kind of entity you are dealing with. Not a human being. Not a corporation. An artifact: hardware, software, or a combination of both. This matters enormously for liability, because all of our traditional legal frameworks — negligence, due process, product liability — were built around human actors or legal persons. AI fits neither category cleanly. By opening with “machine-based system,” the AI Act signals that we are in new legal territory.


    “Varying levels of autonomy”

    Legal translation: This addresses the delegation of agency.

    Autonomy is the litigation trigger. The moment a system acts without direct human intervention, the traditional frameworks of “master-servant” or “principal-agent” start to break down. Who is responsible when no human made the decision? The developer? The deployer? The user? This phrase marks precisely the gap where foreseeability and proximate cause become difficult to establish — and where your client’s case will be won or lost.


    “Adaptiveness after deployment”

    Legal translation: This addresses post-market monitoring and continuous liability.

    A toaster stays a toaster. An AI system can change its own behavior based on new data it encounters after it has been sold, deployed, or put into service. This single phrase shifts the entire legal burden from “was it safe when it left the factory?” to “is it safe throughout its entire lifecycle?” For contract lawyers, this means indemnity clauses must cover self-learning updates. For litigators, it means the system you are challenging in court today may not be the same system that harmed your client six months ago.
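    The point can be made concrete with a deliberately tiny sketch (all names are invented for illustration, and this is far simpler than any real AI system): a scorer keeps updating its internal baseline from data it encounters after deployment, so the “same” system can return a different verdict on the same input months later.

    ```python
    # Illustrative toy only: a scorer whose behavior drifts after deployment
    # as it averages in new observations. Names and thresholds are invented.
    class AdaptiveScorer:
        """Keeps a running mean of observed values and flags outliers."""

        def __init__(self) -> None:
            self.mean = 0.0
            self.count = 0

        def observe(self, value: float) -> None:
            # Each post-deployment observation shifts the model's internal state.
            self.count += 1
            self.mean += (value - self.mean) / self.count

        def is_anomalous(self, value: float) -> bool:
            # The same input can be judged differently months apart,
            # because the learned baseline has moved in the meantime.
            return abs(value - self.mean) > 10.0

    scorer = AdaptiveScorer()
    for v in [100.0, 102.0, 98.0]:          # behavior at time of deployment
        scorer.observe(v)
    early_verdict = scorer.is_anomalous(50.0)  # True: far from the ~100 baseline

    for v in [50.0] * 50:                   # the data distribution drifts
        scorer.observe(v)
    late_verdict = scorer.is_anomalous(50.0)   # False: baseline has shifted
    ```

    This is exactly the litigation problem the phrase creates: the snapshot you inspect in discovery is not the system that produced the harm, because its internal state kept changing after it left the factory.
    
    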


    “Infers… how to generate outputs”

    Legal translation: This is the regulatory filter — the jurisdictional line.

    This is the most important technical-legal distinction in the entire definition. If a program follows rules written by a human — “if X, then Y” — it is just software. The AI Act does not apply. But if a system infers its own logic from data — if it derives patterns that no human explicitly programmed — it is AI, and the full weight of the regulation applies. Before invoking the AI Act in any case, ask this question first: does this system infer, or does it just execute?
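    The infer-versus-execute line can be shown in a deliberately toy contrast (all names are invented; real AI systems are vastly more complex): the first function executes a rule a human wrote down, while the second derives its decision threshold from historical examples that no human hand-coded.

    ```python
    # Illustrative toy only: "executes" versus "infers". Names are invented.

    # 1) Rule-based: a human wrote the threshold. "If X, then Y."
    #    The logic is explicit and auditable: a lawyer can read the rule.
    def rule_based_credit_check(income: float) -> str:
        return "approve" if income >= 30_000 else "deny"

    # 2) Inferring: the decision boundary is *derived* from past data.
    #    No human wrote it; it emerges from the examples the system saw.
    def fit_threshold(examples: list[tuple[float, str]]) -> float:
        """Learn the lowest income that was ever approved in the data."""
        approved = [income for income, label in examples if label == "approve"]
        return min(approved)

    def inferred_credit_check(income: float, threshold: float) -> str:
        return "approve" if income >= threshold else "deny"

    history = [(25_000.0, "deny"), (28_000.0, "approve"), (40_000.0, "approve")]
    learned = fit_threshold(history)  # 28_000.0 — inferred from data, not hand-written
    ```

    Under the author’s test, only the second pattern brings a system within the AI Act’s scope: its logic was inferred from input data, not executed from a human-written rule.
    
    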


    “Predictions, content, recommendations, or decisions”

    Legal translation: The scope of harm.

    This list defines what AI actually produces — its legal outputs. Each type carries a different set of legal consequences:

    • Predictions and decisions raise issues of due process and discrimination — credit scoring, risk assessments, bail recommendations.
    • Content raises intellectual property and defamation issues — generated text, images, deepfakes.
    • Recommendations raise consumer protection and antitrust issues — algorithmic feeds, search results, pricing systems.

    Know which output type your case involves. The legal theory follows from there.


    “Influence physical or virtual environments”

    Legal translation: The causal link.

    This phrase does something legally significant: it confirms that virtual harm is real harm. A drop in your client’s credit score, a wrongful deportation order generated by an algorithm, a social media ban based on automated content moderation — these are legally equivalent to physical harm for the purposes of this regulation. The AI Act does not require a broken bone. It requires a real impact on a real environment, physical or digital.


    What the definition means for a lawyer

    Stripped of its technical language, Article 3(1) tells us this:

    AI is a non-human system that — through its own data-driven logic — operates with enough independence to impact the real world, creating unique challenges for traditional notions of fault, negligence, and oversight.

    We are dealing with something that is not a human being but acts with increasing autonomy. Something that learns, adapts, and produces outputs that affect people’s lives — their liberty, their immigration status, their criminal record — without anyone being able to fully explain how it reached its conclusions.

    The AI Act is the natural evolution of law. We moved from regulating human behavior, to corporate behavior, and now to the behavior of autonomous systems. Understanding this definition is not optional. It is the foundation of everything that follows in this book.