News and Insights

How the Judiciary Can Minimize AI Risks in Secondary Sources

Publication | October 31, 2024 | Law360

Until now, American courts have focused on how to protect themselves from lawyers who cite non-existent cases that were “hallucinated” by generative artificial intelligence tools. In their timely Law360 article, John S. Siffert and Allison Morse discuss a different risk: the potential for generative AI to corrupt the secondary sources that lawyers and judges rely upon. The authors propose steps the judiciary can take to inoculate itself against hallucinations hibernating in authoritative treatises and articles cited by counsel.

The timeliness of this article is highlighted by the current debate over whether courts should issue standing “AI Orders” requiring lawyers to certify the validity of the cases they cite, with some courts concluding that AI Orders are redundant of Rule 11. Even if Rule 11 and AI Orders are both in place, neither would reach the risk posed by hallucinations embedded in secondary sources: counsel cannot be expected to certify that a treatise's conclusions are accurate, and publishers are not counsel of record in cases before the court. Instead, the article proposes that the judiciary engage publishers of secondary sources to adopt a certification requirement for the treatises they publish. This would give publishers a gatekeeping role and help protect the judiciary from the demons of AI hallucinations that may be hiding in secondary sources.

The article is available here.