What Happens When You Ask AI for Legal Advice And Then Get Arrested
A landmark federal ruling out of the Southern District of New York has a clear message for anyone using AI tools in connection with legal matters: your conversations are probably not protected.
Imagine you’re under federal investigation. Your lawyers are expensive. The clock is ticking. So you open up a free AI chatbot, type out your version of events, and start strategizing — fully intending to share everything with your attorney later. Sounds reasonable, right?
That is essentially what Bradley Heppner did. And on February 10, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York ruled that every word of those AI conversations could be used against him.
This is the first ruling of its kind in the country, and it has serious implications — not just for criminal defendants, but for anyone who blends AI tools into their legal affairs.
Key takeaways
- Conversations with a consumer AI chatbot are not protected by attorney-client privilege, even if you later share them with your lawyer.
- Claude's privacy policy, which permits data sharing and model training, defeated any reasonable expectation of confidentiality.
- The work product doctrine did not apply because the documents were created on the defendant's own initiative, not at counsel's direction.
- Enterprise AI tools with contractual confidentiality commitments may fare differently; the court expressly left that question open.
- If you are facing litigation or regulatory scrutiny, involve counsel before using AI tools to develop legal strategy.
The Case: Securities Fraud, an FBI Raid, and 31 AI Chat Logs
Heppner was indicted in October 2025 on charges of securities fraud, wire fraud, and falsifying corporate records stemming from his time as an executive at GWG Holdings. When FBI agents executed a search warrant at his home, they walked out with 31 printed documents: transcripts of conversations Heppner had with Claude, the AI platform built by Anthropic.
According to Heppner’s lawyers, those conversations were deliberate. Heppner had used Claude — on his own, without being directed by counsel — to draft what amounted to defense strategy memos: outlines of factual arguments, potential legal theories, his own interpretation of the evidence. He later shared the AI-generated documents with his attorneys.
His legal team argued those documents deserved protection under attorney-client privilege and the work product doctrine. The government disagreed. And so did Judge Rakoff.
Why the Court Said No — Three Times Over
Attorney-client privilege has three essential requirements: a communication between client and attorney, kept confidential, for the purpose of legal advice. Heppner’s AI conversations failed on every single one.
1. Claude Is Not a Lawyer
This one might seem obvious, but Heppner’s team argued it shouldn’t matter — that Claude functioned like a glorified word processor, a tool rather than a party to the conversation. Judge Rakoff rejected that framing entirely. He emphasized that every recognized legal privilege requires a genuine human relationship built on trust, with a licensed professional bound by fiduciary duties and professional discipline. An AI platform, however sophisticated, is none of those things.
2. There Was No Confidentiality — The Privacy Policy Said So
This is the part that should make every business executive, entrepreneur, and individual user sit up straight. Claude’s privacy policy — the one users agree to before they ever type a single word — expressly reserves Anthropic’s right to share user data with third parties, including government regulatory authorities, and to use user inputs to train the AI. By agreeing to those terms, the court held, Heppner had effectively waived any reasonable expectation of confidentiality.
The judge drew a sharp distinction: if a client writes private notes and later shares them with an attorney, those notes may remain privileged. But Heppner had first shared his thoughts with a third party, Claude, and that changed everything. It is the equivalent of whispering your legal strategy to a stranger in a coffee shop and then repeating it to your lawyer: the coffee shop conversation does not become privileged.
3. The Purpose Was Off — He Was Talking to Claude, Not to His Attorney
The third element asks whether the communication was made for the purpose of obtaining legal advice. Judge Rakoff acknowledged this was the closest call of the three — after all, Heppner’s stated goal was to develop material to share with counsel. But the court focused on intent at the moment of the conversation, not after the fact. Heppner was seeking information or analysis from Claude, not from a lawyer. And since Claude itself expressly disclaims providing legal advice, any expectation to the contrary was simply unreasonable.
The Work Product Doctrine Didn’t Save Him Either
The work product doctrine protects materials prepared by or at the direction of an attorney in anticipation of litigation — it’s designed to shield a lawyer’s mental processes and trial strategy from discovery. It’s a different protection than attorney-client privilege, but it failed Heppner for a similar reason.
The court found that the AI documents were not prepared at counsel’s direction — Heppner generated them entirely on his own initiative. More importantly, during oral argument, his own lawyers confirmed the documents did not reflect defense counsel’s strategy at the time they were created (even if they later influenced it). That distinction — between reflecting strategy and affecting it — turned out to be dispositive.
What This Means for You
While this ruling arose in a criminal case, its logic has legs far beyond that context. Civil litigants, business executives, startup founders, and anyone facing regulatory scrutiny should take note. Here is what we see as the core practical lessons:
- Free AI tools are not confidential: The consumer versions of ChatGPT, Claude, Gemini, and similar platforms all include privacy policies that permit the platform to access, review, and potentially share your inputs. If you use these tools to process sensitive legal, business, or personal information, treat those conversations as potentially discoverable.
- Enterprise AI tools may offer stronger protections: Judge Rakoff explicitly left open the question of whether enterprise-grade AI platforms — which often include contractual confidentiality assurances and no-training commitments — might support a different outcome. This is a meaningful distinction worth exploring with counsel if AI is central to your workflow.
- Attorney direction matters: The court suggested that using an AI tool at the explicit direction of your attorney — as part of the attorney’s process — might look very different under this analysis. If your legal team is incorporating AI into their work on your behalf, that is a conversation worth having with them now.
- Going it alone creates risk: Heppner’s fundamental error was not using AI — it was doing so without counsel’s involvement or oversight. Taking unilateral steps to build your own legal strategy using publicly available AI tools, especially when you are already a litigation target, is a recipe for exactly this kind of outcome.
- This ruling is persuasive, not binding — for now: Other courts are not obligated to follow Judge Rakoff’s reasoning. But he is one of the most respected federal judges in the country, and this opinion will be influential. Expect it to be cited widely.
The Bigger Picture
We are at a genuinely early and uncertain moment in the intersection of AI and law. Courts are working through questions that the drafters of attorney-client privilege doctrine never imagined — and they are reaching conclusions that may surprise even sophisticated users of these tools.
The Heppner case is a reminder that the law protects deliberate, structured relationships, not just good intentions. A conversation with an AI, however thoughtful or legally minded, is still a conversation with a piece of software operating under a corporate privacy policy. Until the legal framework evolves further, that distinction carries real consequences.
If you or your business are navigating litigation, regulatory scrutiny, or any situation where legal strategy is being developed, the most important step is to consult a trusted New York City law firm before turning to AI for guidance. A qualified legal team can provide accurate, case-specific advice, protect your interests, and help you avoid costly mistakes that automated tools simply can’t foresee.
Have Questions About AI and Legal Privilege?
At Pierce & Kwok LLP, we stay ahead of the evolving intersection of technology and law so our clients don’t have to learn the hard way. Whether you’re facing litigation, building a business, or simply want to understand your exposure, we’re here to help.