April 6, 2026

Federal Judges Navigate AI Adoption: New Sedona Conference Study Shows Cautious Exploration

by Alan Brooks

Vice President, Marketing

Alan is an experienced marketing executive focusing on fast-growth companies. Prior to ILS, he was VP of Marketing at ARCHER Systems. His expertise in eDiscovery...

Executive Summary:

A groundbreaking survey of 112 federal judges reveals that while a majority have experimented with artificial intelligence tools in their judicial work, adoption remains infrequent and uneven, with judges expressing nearly equal measures of optimism and concern about AI’s role in the judiciary.

  • More than 60% of federal judges have used AI tools, but only 22.4% use them weekly or daily in their work
  • Legal research platforms with integrated AI features see significantly higher adoption than general-purpose AI tools
  • Nearly half of responding judges report that their courts have not provided AI training, despite strong attendance when training is offered

Landmark research provides first comprehensive view of judicial AI use

The Sedona Conference Journal has published the first random-sample survey examining how federal judges use artificial intelligence in their judicial work. Conducted by Northwestern University professors Daniel W. Linna Jr. and V.S. Subrahmanian in collaboration with the New York City Bar Association Presidential Task Force on Artificial Intelligence and Digital Technologies, the research provides insights for litigation professionals navigating an evolving technological landscape in federal courts (Sedona Conference Journal, Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges).

The study surveyed a stratified random sample of 502 federal judges, including bankruptcy, magistrate, district court, and court of appeals judges, drawn from a population of 1,738 current federal judges. With 112 judges responding, the 22.3% response rate offers a meaningful, though not exhaustive, view of how the federal judiciary is approaching AI technology.

AI adoption remains limited despite widespread experimentation

Research indicates that over 60% of responding judges have used at least one AI tool in their judicial work, but the frequency of use gives a clearer picture. Only 22.4% of judges reported using these tools weekly or daily, showing that AI has not yet become a routine part of most judicial workflows.

Perhaps more significantly, 38.4% of judges reported that they have never used any of the AI tools identified in the survey in their work. This large percentage of non-users suggests that AI adoption in federal courts is still far from widespread.

The research also revealed notable differences among judge types. Bankruptcy judges had the highest combined daily or weekly usage at 32.2%, compared to 21.9% for magistrate judges and just 13.9% for district court judges.

Legal research platforms dominate judicial AI use

A key insight for litigation professionals is the strong preference judges have for AI tools integrated into established legal research platforms rather than general-purpose AI applications. The AI tool most frequently used by judges was Westlaw AI-Assisted or Deep Research at 38.4%, while ChatGPT was used by 28.6% of respondents.

The researchers attribute this pattern to vendor familiarity and perceived reliability, which shape which AI tools judges are willing to use in chambers. Judges reported using “AI for Law” tools more frequently across all usage categories than general AI platforms.

The most common use of AI in judicial work is conducting legal research, reported by 30.0% of judges. Document review follows at 15.5%, while all other individual use cases fall below 10%. Notably, only 1.8% of judges said they use AI to make decisions, while 4.5% use it to inform decisions.

Training gaps hinder broader adoption

The research identified a significant gap in AI training availability, which could be limiting broader adoption. When asked if court administration provided training on AI tools, 45.5% of judges answered no, and an additional 15.7% were unsure.

However, when training is available, judges show strong interest. Among judges whose courts offered AI training, 73.8% attended. This high participation rate suggests the limiting factor is access to judiciary-specific AI education, not judicial interest.

Policy landscape remains fragmented

The survey showed significant differences in how judges manage AI use in their chambers. About one in three judges either permits and encourages AI use (7.4%) or just permits it (25.9%). Conversely, 20.4% formally prohibit AI use, and 17.6% discourage it without formally banning it.

Perhaps most revealing, 24.1% of judges reported having no official policy regarding AI use in chambers. The range of chamber policies, from formal bans to clear encouragement, indicates that the judiciary is still in the early stages of AI regulation with no dominant model established.

Judges split on AI potential

The research found that judges are almost evenly divided between optimism and concern about AI’s role in the judiciary. When asked about their overall outlook, 43.6% of judges expressed some optimism, while 41.7% showed some concern.

Judges’ qualitative responses reflected both recognition of AI’s potential to improve efficiency and concerns about hallucinations, “zombie cases,” and skill atrophy. Several judges emphasized that AI should be treated as a research tool whose output requires verification, not as a substitute for reviewing the underlying materials.

Conclusion

This landmark Sedona Conference study captures a pivotal moment in the federal judiciary’s relationship with artificial intelligence. While the research highlights significant AI experimentation among federal judges, it also reveals critical gaps in training, policy consistency, and institutional support that must be addressed before AI can deliver on its potential to improve judicial efficiency without compromising the quality of justice.