The Illusion of Privacy: What a Federal Courtroom Just Told Us About AI and Secrecy

A 2026 federal court ruling in United States v. Heppner held that legal strategy developed with a public AI tool carries no attorney-client privilege or work-product protection: AI is not a lawyer, and communications with it are not confidential. The case draws a critical distinction between feeling private and being legally protected, and it urges caution for individuals and corporations using consumer AI in sensitive legal matters. Enterprise AI tools operate under different terms, but they do not guarantee privilege either.

There is a moment, familiar to almost anyone who has used a modern AI system, where something quietly shifts. You stop treating the conversation like a search engine query. You start treating it like a confession.

You paste in the messy facts. You test the weak spots in your argument. You ask the question you are too embarrassed to ask a real person. The interface is clean, the responses are calm, and nothing you type feels like it is going anywhere. It feels, unmistakably, private.

That feeling is one of the most powerful and most dangerous illusions in modern technology.

A federal judge in New York recently made that danger explicit — not as a warning about hacking or data breaches, but as a ruling about the law itself. And if you read the decision carefully, the real shock is not what the court said about AI. It is what the court revealed about the gap between how people experience these tools and what those tools actually are.


The Case That Changed the Conversation

In early 2026, Judge Jed Rakoff of the Southern District of New York issued a decision in United States v. Heppner that may become one of the most-cited cases of this decade, not because it broke dramatic new legal ground, but because it applied old rules to a world that had been quietly pretending those rules did not apply.

Here is what happened. Bradley Heppner, aware that he was the target of a federal criminal investigation, turned to Anthropic’s publicly available Claude AI system. He used it to build out a defense strategy — possible arguments, factual angles, legal theories. He then shared those AI-generated materials with his lawyers. When federal agents seized his devices, the government sought access to those materials. Heppner’s team argued they were protected: either by attorney-client privilege, the legal shield that keeps communications between a client and lawyer confidential, or by the work-product doctrine, the rule that protects a lawyer’s internal strategy from being handed to the opposing side.

Judge Rakoff disagreed on both counts. The materials were not privileged. They were not protected work product. They could be read by prosecutors.

The ruling was stark. But its logic was, in a sense, almost inevitable — once you understand what privilege actually is and what AI actually does.


What Privilege Is Really Protecting

To understand why this matters, you have to understand what attorney-client privilege is actually for.

It is not a technicality. It is not a loophole. It is a foundational idea in the American legal system: that people can only be honest with their lawyers if they are certain that honesty will not be used against them. The moment a client fears their candid admissions might end up in a prosecutor’s hands, they stop being candid. And the moment they stop being candid, the legal system loses something essential: the ability to produce just outcomes built on truthful, fully informed legal counsel.

So the law creates a protected space. What you say to your lawyer, for the purpose of getting legal advice, stays between you and your lawyer. That space has boundaries. It requires a real attorney. It requires actual confidentiality. It requires that the communication not be voluntarily shared with outside parties. Violate any of those conditions, and the protection collapses.

Now ask yourself: when Heppner typed his defense strategy into Claude, did he meet those conditions?

Not even close.


Three Reasons the Privilege Vanished

Judge Rakoff’s opinion identified three compounding failures, each one significant on its own, devastating in combination.

First, Claude is not a lawyer. This sounds obvious. It is also legally decisive. Attorney-client privilege only protects communications with an actual attorney. Not a consultant. Not a friend who went to law school. Not a sophisticated AI model that reasons fluently about legal concepts. The relationship that creates privilege — the formal, professional, regulated relationship between a licensed attorney and a client seeking legal advice — simply does not exist between a user and an AI platform.

Second, the communications were not confidential. Here is where the story gets sharper. The court examined Anthropic’s own privacy policy and found that users had been explicitly put on notice: their inputs and outputs could be collected, used for training, and disclosed to third parties, including, under certain circumstances, governmental authorities. In legal terms, Heppner had no reasonable expectation of confidentiality in what he typed into Claude. The terms of the service he agreed to made that clear, whether or not he read them carefully.

This is the part that should make every regular AI user pause. The feeling of privacy is not the legal fact of privacy. A journal locked in your desk creates confidentiality. A journal typed into a publicly accessible platform with a data retention policy does not.

Third, Heppner used Claude without his lawyers’ direction. This may seem like a minor procedural detail. It is not. The court acknowledged that there is at least a theoretical argument — though it did not rule on it — that if an attorney had directed a client to use an AI tool as part of the legal workflow, the tool might function more like a professional agent assisting the lawyer. Courts have sometimes extended privilege to third parties who are genuinely necessary to the legal representation, under a doctrine from a case called Kovel. But that was not this situation. Heppner acted independently, on his own initiative. The AI he consulted expressly disclaimed giving legal advice. And so the question became: did he seek legal advice from Claude? The answer was obviously yes — and obviously, legally, that could not create privilege.

Then came the final blow. The court noted that a document that is not privileged when it is created cannot become privileged simply by being forwarded to a lawyer later. Sending the outputs to his attorneys did not retroactively sanitize the process. Whatever Heppner was doing when he prompted Claude, he was not yet inside the protected attorney-client relationship. Emailing the results to his legal team afterward did not change that.


The Work-Product Doctrine Falls Too

The work-product doctrine is a close cousin to attorney-client privilege, but it operates differently. It protects materials that lawyers — or people working for lawyers — create in anticipation of litigation. Its deepest purpose is to guard a lawyer’s mental processes, strategic judgments, and legal theories from being weaponized by the opposing side. Courts have long recognized that forcing attorneys to hand over their private strategic thinking would fundamentally distort the adversarial system.

But here, too, the argument failed. Heppner’s own attorneys conceded that he had created the Claude materials independently, without their direction. Those materials did not reflect counsel’s strategy at the time they were created — they reflected Heppner’s own thinking, which may have later influenced counsel, but that is a very different thing.

The work-product doctrine protects the lawyer’s mind. It was never designed to protect a client’s independent research project that later gets funneled upward to the legal team. Judge Rakoff placed the AI documents firmly in that latter category.


The Deeper Insight: This Is Not an AI Exception

One of the most intellectually honest parts of the Heppner opinion is what it refuses to claim.

Judge Rakoff does not announce a new anti-AI rule. He does not say that generative AI is uniquely dangerous or legally toxic. He says something more fundamental and, in some ways, more unsettling: the novelty of the tool does not exempt it from established legal doctrine. The law of privilege and work product did not need to evolve to reach this result. The existing framework, applied honestly, got there on its own.

That is why this case has such wide implications. It is not a warning about AI specifically. It is a warning about a mental model — the intuition that typing something into a clean, responsive interface creates a kind of private cognitive space that the outside world cannot access. That intuition is deeply human. It is also, in many legal contexts, completely wrong.

Think of it this way. If Heppner had written his defense strategy on a whiteboard in a coffee shop, he would not expect privilege to attach. If he had dictated it into a voice recorder and then mailed the tape to a transcription service with no confidentiality agreement, no court would call that privileged. The fact that he did it in a chat interface that felt intimate and responsive does not change the underlying structure of what he did: he shared sensitive legal analysis with a third-party platform he did not control, under terms that explicitly preserved that platform’s right to retain and disclose his data.

Seen that way, the result is not surprising. It is almost obvious.

What is surprising is how many people — including, apparently, a criminal defendant under active federal investigation — did not see it coming.


Why Corporations Should Be Paying Very Close Attention

Criminal cases have a certain dramatic clarity. But the long-term consequences of Heppner will likely be felt most powerfully in corporate legal departments and compliance functions, not in criminal courtrooms.

Consider what happens in a normal large organization on a normal Tuesday. Employees summarize ongoing disputes in AI tools to make sense of the facts. Analysts paste confidential memos into chatbots to get cleaner drafts. Managers pressure-test their legal exposure by asking AI systems to play devil’s advocate. Teams share sensitive investigative findings through consumer AI platforms to generate quick summaries before a call with outside counsel.

None of that is malicious. Much of it is genuinely useful. But once a court has articulated — clearly, in writing — that a public AI platform may function as a third party for confidentiality purposes, every one of those interactions becomes a potential liability. Not because the AI is untrustworthy in some personal sense, but because the legal infrastructure surrounding public AI platforms is simply not the same as the legal infrastructure surrounding a confidential attorney-client relationship.

The questions that compliance and legal teams now need to answer are uncomfortable but unavoidable:

Are employees using consumer AI tools to process information that is subject to litigation holds or privilege protections? Are draft legal theories being developed in systems whose terms do not guarantee confidentiality? Is there a clear, enforceable distinction between tools that employees can use freely and tools that require attorney supervision? And critically — are employees being trained to understand the difference between feeling private and being legally protected?

These are not philosophical questions. In the wake of Heppner, they are discovery questions.


The Enterprise AI Nuance — Important, But Not a Safety Net

Here is where the picture gets more textured, and where some of the early commentary on this case has oversimplified.

Not all AI is equal under the law. Anthropic has publicly distinguished its consumer product from its enterprise and API offerings, noting that Claude for Work and API deployments are governed by commercial terms with different data policies than the consumer-facing product at issue in Heppner. OpenAI has made similar distinctions, stating that data from enterprise and API environments is not used for model training by default. Those differences are real and legally meaningful.

If a company operates Claude through a private enterprise deployment with robust confidentiality terms, data isolation, and clear attorney supervision of the workflow, the analysis may look very different from what happened in Heppner. That distinction could matter significantly in a privilege dispute.

But this is where precision matters most: enterprise does not automatically mean privileged. Privilege is not a feature you purchase with a software subscription. It is a legal conclusion that courts reach after examining the full constellation of facts — the nature of the communication, the existence of an attorney-client relationship, the confidentiality of the channel, the purpose of the communication, and whether the material reflects attorney strategy or independent client action. Enterprise tools may reduce risk by creating a more defensible confidentiality structure. They do not guarantee privilege. No tool does.

The law here is genuinely unsettled. Heppner answered some questions. It left many others open. Anyone who tells you that enterprise AI is automatically safe for privileged work is telling you something the courts have not yet confirmed.


The Cultural Problem Underneath the Legal One

Strip away the legal doctrine for a moment and look at what this case is really about.

It is about a fundamental mismatch between experience and reality — between the way a technology feels and what it actually is. AI systems are extraordinarily good at creating the sensation of a private, intimate, responsive relationship. They mirror your reasoning back to you. They engage with your specific situation. They feel, in a way that is genuinely unusual for technology, like they are with you rather than processing you.

That experience is powerful. It is also, from a legal standpoint, an illusion.

When you use a public AI system to explore sensitive legal strategy, you are not entering a private cognitive space. You are externalizing your most sensitive thinking into infrastructure controlled by a third party, governed by terms you agreed to but probably did not read, subject to retention and disclosure policies you may not fully understand. The interface is intimate. The underlying reality is not.

This does not mean AI is bad or that people should stop using it. It means something more targeted and more actionable: stop using public AI as if it were a privileged relationship. The tool can be extraordinary. The relationship it creates is not protected.


What the Ruling Does Not Say — And Why That Matters

It is worth being careful here, because Heppner is a single district court decision, not a Supreme Court ruling. It does not establish that all AI use in legal contexts destroys privilege. It does not settle the question of enterprise deployments. It does not foreclose the possibility that attorney-directed AI use might, in the right circumstances, be treated differently.

The opinion actually leaves one door explicitly open. Judge Rakoff wrote that if counsel had directed Heppner to use Claude, the system might “arguably” function more like a lawyer’s agent under existing precedent. That is not a holding. But it is an invitation — a signal that the analysis is fact-specific and that thoughtful, attorney-supervised integration of AI tools into legal workflows may carry a different legal character than a client independently querying a chatbot.

The better-calibrated takeaway is this: consumer AI use in a legal context can destroy privilege when it resembles voluntary disclosure to a third party outside a confidential, attorney-directed workflow. That is a narrower claim than “AI kills privilege.” It is also, given the scale of casual AI use in legal contexts today, a consequential enough claim to change behavior.


The Bottom Line

United States v. Heppner will be studied and cited for years. Not because it is radical, but because it is clarifying. It takes an old idea — that privilege requires a real attorney, actual confidentiality, and no voluntary disclosure to outside parties — and shows precisely how that idea applies to the way millions of people are now interacting with AI systems every day.

The ruling does not say AI is dangerous. It says that usefulness is not the same as confidentiality, and that convenience is not the same as privilege. Those two sentences, applied honestly to the current landscape, have enormous practical implications.

If you are dealing with something legal, regulated, sensitive, or potentially discoverable, the safest and clearest rule remains what it has always been: talk to your actual lawyer before you talk to your chatbot. Not because the AI cannot help — it may help enormously — but because the moment you type your legal strategy into a public platform without attorney direction, you have potentially walked your most sensitive thinking out of the protected space and into the open.

The feeling of privacy is not nothing. It shapes behavior, it enables candor, and it makes these tools genuinely useful. But a feeling, however powerful, has never been a legal defense. And in courtrooms, what matters is not how something felt — but what it actually was.
