U.S. Courts Navigate Internal AI Adoption as Caseloads Surge
Over 60 percent of federal judges have used AI at least once.
Akul Saxena
STANFORD, Calif., April 16, 2026 — Artificial intelligence is arriving in American courtrooms before the rules governing it, as state judges experiment with AI tools to manage overwhelming dockets and lawyers face consequences for submitting fabricated citations.
The pressure on American courts was the backdrop Thursday at Stanford Law School's CodeX conference, where panelists described judges managing more than 100 cases some mornings and three-quarters of litigants appearing without a lawyer.
Judges and lawyers are turning to AI to help manage the load, but the risks are well documented. At least 1,000 incidents of lawyers submitting briefs with fabricated citations are on the record in federal courts, said University of Colorado Law professor Harry Surden. Eight of ten state court hearings are delayed, added Shlomo Klapper, founder of legal-AI platform Learned Hand.

The consequences have reached federal appeals courts. In United States v. Johnson-Ferris, the Sixth Circuit Court of Appeals reprimanded a lawyer whose AI tool cited real cases but invented quotes inside them. The court vacated the result and appointed new counsel for the defendant, said Hon. Yvonne Campos, a San Diego County Superior Court judge.
A California appellate court in March reprimanded a trial judge for signing an order without checking citations the parties had submitted, Campos said. Campos v. Munoz, 118 Cal. App. 5th 1112, put judges on notice that signing off on AI-generated work without checking it first is grounds for discipline, she said.
Campos described state court adoption as deeply uneven. A survey of 135 San Diego County judges drew a 10-to-12 percent response rate, she said. Most were unwilling to discuss AI use, and several who tried the tools gave up after they produced inaccurate information.
Over 60 percent of federal judges have used AI at least once in their work, Surden added.
Courts at capacity
As AI reduces the cost of legal services, litigation volume is expected to increase rather than decrease, Klapper said. He noted that federal court filings have grown 346 percent since 2004 with no corresponding increase in judges.
Courts have issued guidance faster than they have built infrastructure to support it, said Erica Yew, chief executive of the American Leadership Forum Silicon Valley.
California's Judicial Council, the California State Bar, and the American Bar Association have all issued standards covering disclosure, bias prevention, and fee structure, she said.
Deepfakes in the courtroom
Two categories of AI evidence problems are reaching courtrooms, Yew said. The first is AI-generated video submitted as genuine proof. The second is genuine evidence that opposing parties falsely claim is a deepfake.
In an Alameda County case, heard in the county's Superior Court in Oakland, self-represented plaintiffs repeatedly submitted a video of someone reciting an unavailable witness's words as genuine testimony, Yew said. The judge dismissed the case as a sanction, she said.
As deepfake technology improves, human review of disputed video will become unreliable, Yew warned. To address this, the same university partnership model courts use in other high-stakes evidentiary contexts could apply to AI-generated evidence, she said. Existing frameworks in family law and criminal court already rely on expert panels available to judges on request.
AI where it works
Yew noted that AI adoption is accelerating in access-to-justice applications. Florida has deployed an AI chatbot in multiple languages to walk self-represented litigants through court filings, and other states are developing similar programs, she said. Progress is visible at the national level too: the National Center for State Courts has built a practice environment where judges can test AI tools before using them in court.