As lawyers, we are bound by the rules of professional responsibility, which now extend to how we use artificial intelligence. But a question looms: Do we need to disclose our use of AI legal assistants?
It’s tempting to think of AI legal assistants as just another option, like a paralegal or an associate helping summarize medical records or draft contracts. As technology grows more sophisticated, the line between human assistants and machines is blurring, and that shift will change how we comply with our ethical obligations.
AI legal assistants, unlike generic GPTs, are tools designed specifically to augment legal professionals’ work and take on delegated tasks. It’s old news that AI can perform shoddy legal research and summarize medical records with a law student’s level of rigor. What’s worth considering now is that legal-specific technology is evolving into fully fledged legal-assistant territory.
Companies such as Thomson Reuters and Ironclad are developing tools that can suggest fruitful deposition questions, flag conflicts of interest, draft documents, redline contracts, and handle laborious e-discovery tasks. At some point, AI’s role becomes more than ministerial and should be disclosed to clients and the courts.
The Indiana Rules of Professional Conduct require attorneys to provide appropriate instructions and supervision to nonlawyer assistants. Ind. Professional Conduct Rule 5.3. This also applies to ethical considerations, such as ensuring that the conduct of nonlawyer staff is compatible with professional obligations.
AI legal assistants function similarly — they provide support but cannot independently assume responsibility. The lawyer must ensure the AI’s output aligns with professional standards. Legal pundits and dabblers like me have written ad nauseam about the importance of using AI ethically, but little has been written on the duty to disclose such use.
Our ethical rules offer only limited guidance on a lawyer’s duty to inform others about nonlawyer assistants. Prof. Con. R. 9.4. That rule relates explicitly only to telling others that the nonlawyer assistant is unlicensed to practice law. An attorney’s duty of candor toward the tribunal extends only to refraining from false statements of fact or law, disclosing controlling adverse authority, and declining to offer false evidence. Prof. Con. R. 3.3. The Model Rules of Professional Conduct provide little additional guidance and serve mainly to distinguish nonlawyer assistants working within a firm from those outside it.
Some courts are filling this gap. Federal district courts in Illinois, Texas, Pennsylvania, Montana, Ohio, New York, Missouri, New Jersey, and Hawaii have issued orders requiring the disclosure of AI use in submissions to their courts.
The scope of these orders varies. For example, Magistrate Judge Fuentes of the Northern District of Illinois requires litigants to disclose whether AI was used at all in their submissions and, if so, the specific tool and how it was used. Less stringent courts, such as the United States District Court for the District of Hawaii, require only a declaration advising the court whether the party relied on one or more unverified sources and confirming that those sources are not fictitious.
Outside the tribunal, we’re on our own in deciding what to tell our clients. A 2023 Bloomberg Law survey found that only 14% of respondents admitted using AI in their work. I suspect the true figure is much higher, given the perceived shame in admitting such a tool is being used.
Instead of hiding behind that shame, our profession should embrace these tools; transparency with clients is a cornerstone of ethical practice.
If your client assumes that a human is conducting legal research or drafting pleadings, are you obligated to disclose that an AI tool played a role? The answer depends on whether the use of AI affects the quality or cost of the service provided. If the AI’s involvement materially impacts the client’s interests, disclosure may be required under the duty of communication. For instance, if AI dramatically reduces the time spent on a task, clients might expect to see those savings reflected in their bills.
Not every use of AI requires disclosure. Routine tasks, like spell-checking or formatting, likely fall below the threshold. However, functions that touch substantive legal work — drafting arguments, analyzing case law, or summarizing evidence — may require transparency. The key question is whether the AI’s involvement materially affects the service provided or the outcome of the matter.
The simplest solution? When in doubt, disclose. Disclose in your engagement letters, emails, over the phone, or however your circumstances require. Whether to clients or courts, transparency protects your ethical standing and reinforces trust in your work. AI is a powerful tool, but it’s just that — a tool. Its use must align with the ethical framework that defines our profession. By openly acknowledging when and how we use AI, we meet our ethical obligations and demonstrate our commitment to integrity in an evolving legal landscape.
__________
Conner Dickerson is a member of the Business Services & Litigation and Real Estate Services & Litigation practice groups at Cohen & Malad. Opinions expressed are those of the author.