How Large Language Models Work—And What That Means for Courts
AI for ALJs
Content Credit: Klapper
1. Why AI Matters in the Courtroom
Lawyers are already using AI, and vendors are actively pitching it to courts.
AI tools have generated fabricated citations that ended up in filed briefs and even court opinions.
Judges and clerks need to understand what AI can and cannot do, how to spot risks, and when to trust AI outputs.
2. Mission-Driven Approach
The central question is not whether AI is powerful, but whether it advances the mission of the court: resolving cases justly, fairly, and efficiently.
Technology should serve the needs of the court, not the other way around.
3. Understanding the Bottleneck
Courts face overwhelming caseloads, limited clerical support, and rising numbers of self-represented litigants.
The real problem is delivering timely, quality justice with limited resources, not simply adopting AI for its own sake.
4. AI as a Potential Solution
AI can help by organizing complex records, drafting routine orders, flagging procedural defects, and freeing up time for more complex judgment calls.
However, AI is not always the right solution—other options like more clerks or procedural reforms may be preferable.
5. How Large Language Models (LLMs) Work
LLMs are trained on vast amounts of text to do one thing: predict the next word (more precisely, the next token) in a sequence.
Their power comes from compressing knowledge into abstractions, which allows them to summarize, translate, and identify analogies, but not to memorize or search like a database.
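To make the prediction mechanic concrete, here is a minimal, purely illustrative Python sketch: a toy "bigram" model that counts which word follows which in a tiny invented corpus, then generates text one word at a time. Real LLMs use neural networks trained on trillions of tokens, not count tables, but the objective shown here is the same. The corpus and names are invented for illustration.

```python
from collections import Counter, defaultdict
import random

# Tiny invented corpus. The training objective for a real LLM is the same:
# given the words so far, predict the next one.
corpus = (
    "the court finds the motion timely . "
    "the court finds the motion deficient . "
    "the court denies the motion ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate one word at a time, each choice conditioned on the previous word.
text = ["the"]
for _ in range(6):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g. "the court finds the motion deficient ."
```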
6. The Hallucination Problem
LLMs can generate plausible but false information (“hallucinations”) because they are optimized for fluency, not accuracy.
This is a fundamental architectural issue, not just a user error, and can undermine justice if not properly managed.
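The toy model above also shows why hallucination is architectural rather than a bug to be patched. Extending the same bigram sketch with two invented case citations (no real cases): the model can splice them into a third citation that exists nowhere, because every individual word transition is statistically plausible and no step in generation checks the output against any reporter.

```python
from collections import Counter, defaultdict
import random

# Two invented citations. Same bigram mechanics as the sketch above.
corpus = (
    "smith v. jones , 500 F.3d 100 ( 2004 ) "
    "doe v. roe , 640 F.3d 250 ( 2011 )"
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int) -> str:
    out = [start]
    for _ in range(length):
        counts = following[out[-1]]
        if not counts:  # no known continuation: stop
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

# Every transition below is statistically plausible, so the model may emit
# "smith v. roe , 500 F.3d 250 ( 2011 )": a fluent citation that appears
# nowhere in the training data. Nothing ever asks "is this real?"
print(generate("smith", 9))
```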
7. Reasoning vs. Mimicry
LLMs sometimes appear to reason but also make mistakes no human would make.
They operate using learned heuristics at scale—something between pattern-matching and true reasoning.
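A deliberately crude caricature can make "heuristics at scale" concrete. The sketch below answers questions by retrieving the most similar memorized example, with word overlap standing in for learned pattern matching. Near its memorized examples it looks like it is applying the timeliness rule; change the deadline and it fails in a way no rule-follower would. The questions and the similarity measure are invented for illustration.

```python
# Answer by analogy to the closest memorized example, not by applying a rule.
memorized = {
    "is a motion filed 29 days after judgment timely under a 30-day rule": "yes",
    "is a motion filed 31 days after judgment timely under a 30-day rule": "no",
}

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score standing in for learned pattern matching."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def answer(question: str) -> str:
    best = max(memorized, key=lambda q: similarity(q, question))
    return memorized[best]

# On a memorized pattern, the heuristic looks like legal reasoning:
print(answer("is a motion filed 29 days after judgment timely under a 30-day rule"))  # yes

# Off-pattern, it fails as no rule-follower would: 31 days is timely under a
# 60-day rule, but the nearest memorized example still drives the answer.
print(answer("is a motion filed 31 days after judgment timely under a 60-day rule"))  # no (wrong)
```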
8. What Makes Judicial AI Different
Judicial AI must be built with constraints: evidence-linking, procedural grounding, auditability, and multi-agent architecture.
These constraints ensure accuracy, transparency, and reviewability, which are essential for serving the mission of the courts.
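As a sketch only, here is what two of these constraints could look like in code, assuming a hypothetical record format and a hypothetical "(R. at 12)" citation convention: every sentence in a draft must carry a pinpoint citation that resolves to an actual page of the record (evidence-linking), and every check is written to an append-only log (auditability). A real system would be far more elaborate.

```python
import datetime
import json
import re

# Hypothetical certified record, keyed by page number.
record = {
    12: "Claimant testified that the injury occurred on June 3.",
    47: "The employer's incident report is dated June 5.",
}

CITE = re.compile(r"\(R\. at (\d+)\)")  # matches citations like "(R. at 12)"

def review_draft(sentences: list[str], audit_path: str = "audit.jsonl") -> list[str]:
    """Pass through only sentences whose citations resolve to the record; log every check."""
    accepted = []
    with open(audit_path, "a") as log:
        for s in sentences:
            cites = [int(p) for p in CITE.findall(s)]
            ok = bool(cites) and all(p in record for p in cites)
            log.write(json.dumps({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "sentence": s,
                "citations": cites,
                "accepted": ok,
            }) + "\n")
            if ok:
                accepted.append(s)
    return accepted

draft = [
    "The injury occurred on June 3 (R. at 12).",  # citation resolves: kept
    "The claim was filed within 30 days.",        # no citation: excluded, logged for human review
]
print(review_draft(draft))
```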
9. Evaluating AI Tools
Before adopting any AI tool, courts should ask:
What problem does this solve?
Does it advance justice, fairness, or efficiency?
What are the risks and alternatives?
Red flags include missing citations, vague answers, resistance to procedural constraints, and opacity about how the tool works.
Green flags include pinpoint citations, transparent reasoning, and a clear account of the tool's limitations and of how it advances the mission.
10. Ethical Considerations
Judges must maintain independence and avoid over-reliance on AI.
When reviewing AI-generated content from lawyers, verify citations and be alert for unedited AI output.
Good uses of AI include summarizing records, organizing facts, and flagging procedural defects; inappropriate uses include issuing final decisions without human review and any task that turns on credibility determinations.
11. Questions for Vendors
Focus on mission alignment, transparency, data handling, deployment, and clear limitations.
Always ask what the tool does not do well and what human review is required.
12. Looking Forward
Judicial AI is rapidly being adopted, but best practices and legal questions are still evolving.
The field is moving quickly, with new guidelines, ethics opinions, and case law emerging.
13. Key Takeaways
Mission comes first: technology must serve justice, fairness, and efficiency.
LLMs are powerful but require constraints for judicial use.
Evaluate tools by their ability to solve real problems, advance the mission, and provide verifiable, auditable outputs.