Legal Technology
Executive Summary · 2 min summary

Attorney-Client Privilege and AI Tools: What Heppner Means, and Where Indian Law Stands

Consumer AI tools and legal privilege are fundamentally incompatible under both US and Indian law.

The Heppner Decision (US)

  • In February 2026, a US federal court ruled for the first time that communications with a consumer AI platform (Anthropic's Claude, free tier) are not protected by attorney-client privilege.
  • The defendant used the AI on his own initiative to draft defence-strategy documents. The court found no attorney-client relationship (an AI is not a lawyer), no confidentiality (the platform's terms permit data retention and disclosure), and no legal-advice purpose (counsel did not direct the use).
  • The court left open whether enterprise AI systems with zero-retention and counsel-directed use might be treated differently.

Indian Law Position

  • Indian privilege operates through two distinct mechanisms: an evidentiary bar under Section 132 of the Bharatiya Sakshya Adhiniyam (BSA), under which courts cannot compel the advocate to disclose, and an ethical duty under BCI Rules 7 and 17, under which advocates must not disclose by any means.
  • Using a consumer AI tool with identifiable client material almost certainly breaches BCI Rules 7 and 17. Whether the client's evidentiary bar under Section 132 survives the advocate's unauthorised disclosure is genuinely uncertain — no Indian court has addressed it.
  • In-house counsel communications are not privileged at all. The Supreme Court in In Re: Summoning of Advocates (2025) held they fall outside both Section 132 and Section 134.

The Real Risk: Third-Party Production

  • Sections 132 and 134 protect the advocate and client from being compelled to disclose. They do not protect the AI provider.
  • A court can issue a summons directly to Anthropic, OpenAI, or Google to produce your data. The provider cannot claim privilege — it is neither the advocate nor the client.
  • This makes the doctrinal debate about waiver largely academic. The information is obtainable from the AI provider regardless.

What You Should Do

  • Never input privileged or confidential client material into consumer AI tools (ChatGPT free, Claude free, Gemini). The terms permit training on inputs, human review, and disclosure to authorities.
  • Enterprise AI tools with zero-retention and contractual confidentiality are materially different from consumer tools, but no court has confirmed they preserve privilege. Document your due diligence.
  • If directing others to use AI on a matter, document the direction in writing. Establish firm-wide AI policies specifying approved tools and prohibited inputs.

Bottom line

The professional conduct violation from using consumer AI with client material is near-certain. The practical exposure through third-party production makes the doctrinal debate academic. Until the BCI or Indian courts provide guidance, treat consumer AI tools as third parties that create serious confidentiality and production risks.
