As people increasingly turn to artificial intelligence for advice, some U.S. lawyers tell their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.
A federal judge in New York ruled this year that a defendant could not shield his AI chats from prosecutors pursuing charges against him.
After that ruling, attorneys are warning that prosecutors in criminal cases, and adversaries in civil litigation, could demand conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT.
People's discussions with their lawyers are almost always deemed confidential under U.S. law, but AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.
In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms outlined advice for people and companies on how to decrease the chances of AI chats winding up in court.
Similar warnings also appear in some firms' contracts with their clients. For instance, New York-based Sher Tremonte said in a new contract in March: "Disclosure of privileged communications to a third-party AI platform may constitute a waiver of the attorney-client privilege."
Judicial ruling
The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficient. Federal prosecutors charged Heppner last November with securities and wire fraud; he pleaded not guilty.
Heppner used Anthropic's chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from his lawyers related to his defense.
Prosecutors argued they had a right to demand those materials because Heppner's defense lawyers were not directly involved and attorney-client privilege does not apply to chatbots.
Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents Claude generated related to the case. No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," Rakoff wrote.
Lawyers for Heppner did not immediately respond to requests for comment. A spokesperson for the U.S. attorney's office in Manhattan declined to comment.
The decision was an important early test in the AI chatbot era for bedrock legal protections governing attorney-client communications and materials prepared for litigation.
Used as a tool
Courts are already grappling with the growing use of artificial intelligence by lawyers and by people representing themselves in legal cases, which among other things has led to court filings citing nonexistent cases invented by AI.
The same day as Rakoff's ruling, U.S. Magistrate Judge Anthony Patti in Michigan said a woman representing herself in a lawsuit she brought against her former company did not have to hand over her chats with OpenAI's ChatGPT about the employment claims made in the case.
Patti treated the woman's AI chats as part of her own personal "work-product" for the case, rather than as conversations with a person whom her former employer could seek to use for its defense.
ChatGPT and other generative AI programs "are tools, not persons," Patti wrote in his order.
The privacy and usage terms for OpenAI and Anthropic state that the companies can share user data with third parties. Both also state that they require users to consult a qualified professional before relying on their chatbots for legal advice.
At a February hearing in Heppner's case, Rakoff noted Claude "expressly provided that users have no expectation of privacy in their inputs."
Representatives for OpenAI and Anthropic did not immediately respond to requests for comment.
Race for guardrails
The advice from lawyers ranged from telling clients to select their AI platforms carefully to suggesting specific language to use in chatbot prompts.
Los Angeles-based O'Melveny & Myers and other firms said in client advisories that "closed" AI systems designed for corporate use could provide stronger protections for legal communications, though they said even that approach remains largely untested.
Some firms said AI legal research is more likely to be protected by attorney-client privilege when it is conducted at the direction of a lawyer. If a lawyer does advise the use of AI, a person should say so in the chatbot prompt, New York-headquartered law firm Debevoise & Plimpton said in a notice on its website.
"I am doing this research at the direction of counsel for (X) litigation," the firm suggested people write.
Justin Ellis of New York-headquartered law firm MoloLamken and other lawyers said they expect that more rulings will eventually clarify when AI chats can be used as evidence.

