What courts are actually requiring
Several federal judges (and some state judges) have issued standing orders that explicitly address generative AI use in briefing and other submissions.
Common themes across these orders include:
- Disclosure of AI use
  - Parties must state whether a generative AI tool was used in preparing any filing.
  - Some orders distinguish research-only use (e.g., Westlaw, Lexis) from drafting tools like ChatGPT, Claude, or similar.
- Identification of the specific tool
  - Lawyers must name the tool used (for example, ChatGPT‑4, Claude, Spellbook) rather than referencing generic “AI software.”
- Human review and verification
  - Counsel must certify that a human attorney has reviewed the filing and verified all factual assertions and citations, and that AI output has not been accepted uncritically.
- Scope description
  - In some districts (e.g., New Jersey and certain divisions of North Carolina), orders require describing which portions of a filing were AI‑assisted and how that output was integrated into the final work product.
- Treatment of research platforms
  - Many orders exempt traditional, non‑generative tools such as Westlaw and Lexis, while treating open‑ended, large‑language‑model systems as “generative AI” that triggers disclosure.
Illustrative federal examples
Across districts, the pattern is similar even though the details vary.
- Northern District of Texas
  - Judge Brantley Starr requires lawyers to certify whether generative AI was used and, if so, that a human attorney verified all statements and citations.
- Eastern District of Pennsylvania
  - Judge Michael Baylson requires affirmative disclosure of any generative AI use in drafting filings, along with confirmation that legal authorities were personally checked.
- District of New Jersey
  - Judge Evelyn Padin requires disclosing the specific tool, describing how AI was used in connection with submissions, and certifying human review.
- Western District of North Carolina (Charlotte Division)
  - A standing order requires either (1) certifying that no generative AI was used (except traditional research platforms) or (2) certifying that every statement and citation has been independently verified.
- Northern District of California (selected judges)
  - Some judges allow AI use but require that AI‑assisted documents be clearly identified (through a notation in the title, a preliminary table, or a separate notice) and emphasize that attorneys alone bear ethical responsibility for all content.
Law libraries and practice guides are now actively tracking these orders and note that “declaration or disclosure” of gen‑AI usage is becoming a standard expectation in many federal courtrooms.
Consequences for non‑disclosure or misuse
Courts are clear that “I used AI” is not a defense to violations; instead, undisclosed or careless use can aggravate sanctions.
Key consequences include:
- Rule 11 sanctions
  - If AI‑fabricated citations, cases, or facts appear in filings, judges can impose monetary sanctions, order attorney‑fee shifting, or require corrective filings under Rule 11 or comparable rules of professional conduct.
- Striking filings or denying relief
  - Courts may strike non‑compliant submissions, refuse to consider them, or deny motions where AI‑generated content is unreliable or uncertified.
- Adverse credibility inferences
  - Misrepresenting or omitting AI use in the face of a standing order can undermine counsel’s credibility and may influence the court’s assessment of factual disputes.
- Ethical and disciplinary exposure
  - Bar authorities can treat reckless reliance on AI (without verification) or false certifications about AI use as violations of duties of competence, candor to the tribunal, and supervision of technology.
Several federal decisions and guidance documents emphasize that these requirements flow from existing duties of candor, competence, and reasonable supervision of technology, rather than creating entirely new ethical regimes.
Why this intersects with privilege and evidence
In parallel with disclosure rules, recent federal rulings (such as United States v. Heppner in the Southern District of New York) have held that a client’s chats with public AI tools, for example using a consumer model to draft “defense strategy” documents, are generally not protected by attorney‑client privilege or the work‑product doctrine.
Courts have reasoned that:
- An AI system is not an attorney, and no fiduciary or disciplinary relationship exists.
- Public AI platforms often reserve broad rights to store prompts and outputs, use them for training, and disclose them to third parties (including regulators), defeating any reasonable expectation of confidentiality.
- Sending non‑privileged AI materials to counsel later does not retroactively cloak them with privilege.
That line of cases complements the disclosure trend: if AI‑generated content is not privileged and courts are skeptical of its reliability, they are more comfortable demanding explicit disclosure of AI use and treating failures harshly.