News and Insights

AI Evidence Rule Tweaks Encourage Judicial Guardrails

Publication | December 10, 2025 | Law360

AI-generated evidence is coming to the federal courts, and machines cannot be cross-examined.

The Judicial Conference’s Advisory Committee on Evidence Rules has released a proposed Federal Rule of Evidence 707 to regulate the admissibility of machine-generated evidence, including AI outputs, in federal courts. As algorithmic tools become more common in criminal investigations and civil litigation, judges are increasingly asked to evaluate the reliability of machine-produced information.

In their newest Law360 article, John Siffert, Jillian Berman, and Cindy Kuang examine how Proposed Rule 707 would apply a Daubert-style reliability framework to such evidence, and address what this could mean for trial practice.

Unlike expert testimony admitted under Rule 702, machine-generated outputs may reach a jury without any witness capable of explaining the underlying data, methodology, or reasoning. That gap raises the risk that jurors will either overly credit or overly discount AI outputs as evidence. The Evidence Rules Committee aims to mitigate this risk in two key ways:

First, the Draft Committee Note to Proposed Rule 707 emphasizes that courts must rigorously apply the Daubert factors before admitting machine-generated evidence. The Committee Note expressly anticipates that, in many cases, proponents will not be able to satisfy Rule 707’s reliability requirements without offering an accompanying expert witness to explain or defend the machine’s operation.

Second, the Committee expects judges to provide meaningful guidance to jurors. The Draft Committee Note proposes jury instructions cautioning that machine-generated evidence is subject to error and should not be presumed reliable.

With the public comment period open until February 16, 2026, Proposed Rule 707 offers an important opportunity for practitioners, judges, and other stakeholders to help shape the standards that will govern AI-generated evidence in federal trials.