Executive Summary
Generative AI has moved from experimental curiosity to professional standard in legal practice, and plaintiff attorneys who have not updated their approach to prompting, validation, and AI governance now risk falling behind both their peers and their professional obligations.
- AI adoption among legal professionals has surged, with 79% now incorporating AI tools into their daily work, up from 19% in 2023
- Effective AI use has evolved beyond crafting individual prompts to managing context, agentic workflows, and multi-step task delegation
The Adoption Curve Is Steep
The pace of AI change is exhausting, to say the least. I published an article on GenAI prompting for legal professionals in July 2025, and parts of the article are already obsolete.
The question of whether to incorporate generative AI into legal practice has been settled. According to Clio’s annual legal industry research, 79% of legal professionals now incorporate AI tools into their daily work, a dramatic surge from just 19% in 2023, with large firm adoption reaching as high as 87% (Clio, 2025 Legal Trends Report). Plaintiff attorneys who treat AI as optional are operating at a structural disadvantage relative to well-resourced defense teams already deploying these tools at scale. The question has shifted from whether to adopt AI to how to use it with precision, judgment, and professional accountability.
What Still Works: Core Prompting Principles
The foundational principles that made generative AI useful a year ago remain valid. Natural language communication outperforms keyword-based queries because it allows attorneys to convey intent, context, and nuance in a single prompt. Structured frameworks that define the AI’s role, supply relevant case background, issue clear instructions, set parameters, and establish evaluation criteria continue to produce stronger outputs than vague or open-ended requests.
Specificity and iterative refinement remain essential. An attorney conducting eDiscovery who instructs the AI to identify all witnesses who discussed a manufacturing defect and present the findings in a table sorted by date, with document reference numbers, will consistently outperform one who poses the same question without specifying format or scope. Details improve output, and follow-up prompts deepen analysis.
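The framework described above can be sketched as a structured prompt. The matter, product name, date range, and document counts below are invented purely for illustration:

```text
Role: You are a litigation support analyst assisting plaintiff counsel.
Context: Hypothetical product-liability matter alleging a manufacturing
  defect in the "XJ-200" valve; the discovery set contains roughly
  40,000 emails and attachments.
Instructions: Identify every witness who discussed the alleged defect.
  For each, provide the document reference number, the date, and a
  one-sentence summary of what was said.
Parameters: Present the findings in a table sorted by date. Limit the
  review to documents dated 2021 through 2023.
Evaluation criteria: Flag any document whose characterization is
  uncertain rather than guessing, and note documents that cut against
  the defect theory as well as those that support it.
```

Each element maps to one part of the framework: role, case background, clear instructions, parameters, and evaluation criteria. The same skeleton can be reused across matters by swapping in the relevant facts.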
What Has Changed: From Prompting to Context Engineering
While individual prompt quality still matters, the field has moved to a higher level of abstraction. Context engineering, a discipline formally described by Anthropic in late 2025, reframes the central question from how to phrase a prompt to what information the AI should have access to at any given moment (Anthropic, Effective Context Engineering for AI Agents). For plaintiff attorneys, this means systematically curating documents, case history, jurisdiction-specific standards, and prior outputs so that the AI operates on the right information throughout an entire workflow rather than in a single exchange.
Purpose-built legal AI platforms have emerged to support this kind of structured, context-rich work, and they increasingly reduce the burden of prompt construction by incorporating legal domain knowledge by default. Understanding which platform is appropriate for a given task has itself become a core competency.
Agentic AI Has Entered the Legal Workflow
The most consequential development of the past year is the emergence of agentic AI: systems that do not merely answer a single question but plan, execute, and chain multi-step tasks autonomously.
As agentic AI makes its way into eDiscovery platforms, attorneys may be able to delegate an entire eDiscovery investigation rather than asking the AI what documents reference a particular witness: directing the system to identify the witness, locate all related documents, summarize each one, flag inconsistencies, and produce a timeline, all within a single delegated task.
This shift from prompt-and-respond to outcome-delegation changes what attorneys need to manage. Human oversight remains essential. The more autonomous the AI, the more deliberate the attorney must be in defining the boundaries of the task, reviewing outputs, and ensuring results reflect sound legal judgment rather than unchecked machine inference.
Validation Is a Professional Obligation, Not a Suggestion
Attorneys using generative AI must understand where these systems hallucinate, omit, or mischaracterize source material. Validation is not a single step but an ongoing discipline: AI models change rapidly, and what worked in the last version may not work in the next. Long or complex prompts can also confuse some models, causing instructions to be missed.
Validation means confirming that documents identified through AI-assisted eDiscovery contain what the system claims, that characterizations are accurate, and that outputs reflect the totality of the record rather than a convenient subset. This obligation falls to the attorney at every stage, regardless of the sophistication of the tool being used.
Conclusion
The plaintiff attorneys who will lead in AI-assisted litigation are not those who use AI most frequently, but those who use it with the greatest discipline and judgment. Strong prompting habits, context-aware workflows, human oversight of agentic systems, and rigorous validation together form the foundation of competent AI practice in 2026. That foundation must be built now, as the tools, the courts, and the regulators are all moving quickly.