April 14, 2026

New Sedona Conference Journal Article Argues That Heppner Got AI Privilege Wrong

by Alan Brooks

Vice President, Marketing

Alan is an experienced marketing executive focusing on fast-growth companies. Prior to ILS, he was VP of Marketing at ARCHER Systems.

Executive Summary

A forthcoming Sedona Conference Journal article challenges a recent federal court decision stripping attorney-client privilege and work-product protection from AI-generated materials, arguing that the ruling fundamentally misunderstands how computational tools interact with privilege law.

  • The article analyzes two conflicting federal court decisions on whether AI use waives privilege
  • Authors argue that courts should treat AI platforms as computational tools, not as third-party recipients
  • The article warns that the Heppner decision could undermine privilege protection for any cloud-based technology

What the Sedona Article Examines

In April 2026, The Sedona Conference Journal released a preprint of an article analyzing two conflicting federal court rulings on whether the attorney-client privilege and work-product protection apply to materials created using generative AI platforms (The Sedona Conference, The Machine Isn’t the Interlocutor: Why United States v. Heppner Gets Privilege Wrong). The article, authored by Bridget Mary McCormack, former Chief Justice of the Michigan Supreme Court and current President of the American Arbitration Association, and Shlomo Klapper, founder of Learned Hand and former federal appellate clerk, addresses what the authors call a foundational error in how courts analyze privilege questions involving AI technology.

The stakes are high for the legal profession. Recent surveys indicate that 69 percent of legal professionals now use AI tools for work-related purposes, a dramatic increase from 31 percent a year earlier (8am, 2026 Legal Industry Report). Additionally, 78 percent of legal professionals report adopting AI within the past two years (Litify, The State of AI in Legal 2025). As AI becomes more integrated into litigation workflows, how courts treat privilege in AI-assisted work will affect virtually every practice area.

The Article’s Central Argument: AI Is a Tool, Not an Interlocutor

The Sedona article examines United States v. Heppner, a February 2026 decision from the Southern District of New York that rejected the privilege for a criminal defendant who used Anthropic’s Claude AI platform to assess his legal exposure and craft defense strategies. The court ruled that the defendant’s interactions with Claude constituted communications with a third party, thereby nullifying both the attorney-client privilege and work-product protection.

The article highlights this framing as the court’s core mistake. According to the authors, the court anthropomorphized Claude by treating it as a human interlocutor capable of receiving confidential information. The third-party disclosure rule is designed to address specific risks: a person who can testify, be compelled to produce communications, or choose to reveal information to others. AI platforms lack any of these traits.

“Claude is a large language model that generates text by predicting statistically likely tokens in a sequence,” the article explains. “It does not receive information in any sense recognized by the law. It does not hold confidences. Claude cannot be deposed. Claude cannot decide to contact the government.”

One week before Heppner, the Eastern District of Michigan reached the opposite conclusion in Warner v. Gilbarco, holding that generative AI programs are “tools, not persons” and that using them does not waive work-product protection. The Sedona article supports Warner’s approach as consistent with settled privilege doctrine.

The Article’s Concerns About Heppner’s Reasoning

The Sedona article warns that Heppner’s reasoning extends far beyond AI platforms. The court found that privilege was lost because Anthropic’s terms of service permit data collection, potential use for training, and sharing with third parties, including government agencies. However, these terms are essentially the same as those used by Google Docs, Microsoft OneDrive, Gmail, and all major cloud platforms that attorneys rely on daily.

According to Heppner’s logic, the article contends, any attorney who creates a privileged memorandum in a cloud-based word processor or stores privileged documents in cloud storage has disclosed that information to a third party. Although no court has ruled that technical processing by a service provider’s servers amounts to voluntary third-party disclosure that forfeits privilege, Heppner’s terms-of-service analysis would encompass all these scenarios.

The article also highlights issues with Heppner’s work-product analysis. The court ruled that materials a client creates without an attorney’s guidance are not work product. While this may align with narrow criminal work-product standards, the article points out that it creates an imbalance: prosecutors can use AI for case preparation with full protection under the deliberative process privilege, while defendants lose that protection when using the same technology.

The Functional Framework the Article Proposes

Instead of treating AI as fundamentally different from other technologies, the Sedona article proposes a functional test: courts should determine whether a specific technology interaction creates the particular risks the third-party disclosure rule addresses. Does the interaction place information in the hands of a recipient with independent legal standing who can testify, be compelled to produce documents, or choose to disclose information? If not, there is no third-party disclosure, and privilege remains protected.

This framework is technology-neutral and scalable. As AI capabilities become integrated into word processors, email platforms, and every layer of the legal technology stack, courts will not need to draw arbitrary lines between “AI platforms” and “software.” The functional test works regardless of computational sophistication because it focuses on agency and legal capacity, not technical complexity.

The article bases this approach on existing authority. The American Bar Association’s Formal Opinion 477R holds that attorneys may use cloud computing services without breaching confidentiality obligations, reasoning that a service provider’s technical ability to access data does not constitute third-party disclosure. State bar ethics authorities have consistently reached the same conclusion.

What the Article Means for Plaintiff Attorneys

For plaintiff attorneys who increasingly rely on AI for case preparation, document review, and legal research, the Sedona article offers both warnings and guidance:

If left uncorrected, Heppner’s rule will have broad and concerning consequences. The court’s holdings—that a service provider’s terms of service destroy confidentiality, that using a non-attorney technology tool destroys privilege, and that client-initiated preparation is not work product—would significantly disrupt the attorney-client relationship. The article asks readers to consider where those holdings lead:

Under Heppner’s reasoning, a client who stores a privileged memo in any of these services has disclosed it to a third party. A lawyer who drafts a privileged memorandum in Gmail has disclosed it to Google, or in Outlook has disclosed it to Microsoft, and opposing counsel can subpoena it.

The Machine Isn’t the Interlocutor, pp. 15–16.

Most importantly, the article stresses that established privilege doctrine already provides the tools courts need to analyze these issues properly. Treating AI platforms as computational infrastructure rather than human interlocutors aligns AI with decades of precedent governing cloud technology, preserves the core purpose of privilege protection, and offers courts a durable framework as these tools become widespread in legal practice.