
AI Prompt: Summarize Appellate Decision into IRAC Memo (Claude vs Gemini Test)

Turn an appellate decision into a structured legal-research memo with holding vs dicta clearly distinguished. Tested on Claude vs Gemini -- an accuracy vs. context tradeoff.

The Prompt

The prompt text (identical for both models):

"Summarize this appellate court decision in a structured format suitable for a legal research memo. Include: (1) Case name and citation, (2) Court and date, (3) Procedural history, (4) Facts (material facts only), (5) Issue(s) presented, (6) Holding, (7) Reasoning (key points of the court's analysis), (8) Dicta (any notable statements not essential to the holding), (9) Practical implications for [construction litigation attorneys]. Clearly distinguish between the court's holding and its dicta."

Claude's output: Claude produced a meticulously structured summary that clearly delineated each section. The distinction between holding and dicta was sharp -- Claude identified two paragraphs in the opinion as dicta and explained why they were not essential to the court's decision. The reasoning section traced the court's logic step by step, noting where the court adopted or departed from prior precedent. The practical implications section was specific and actionable, identifying three ways the decision could affect pending construction litigation cases. The summary read like a well-crafted legal memo -- the kind of work product you would expect from a strong second-year associate.

Gemini's output: Gemini's summary was comprehensive but less precise in distinguishing holding from dicta. It identified the holding correctly but categorized some dicta as part of the reasoning. The factual summary was accurate and concise. Gemini added value in the practical implications section: using Google Search grounding, it referenced two other recent decisions from the same circuit that addressed related issues, providing useful context for how this decision fits into the broader trend. However, one of those external references contained a minor inaccuracy: the case was real, but its holding was slightly misstated.

Verdict -- Test 5: Claude wins on accuracy, Gemini adds context value. Claude's analytical precision in distinguishing holding from dicta is superior and directly affects legal analysis quality. But Gemini's ability to surface related decisions from the same circuit -- when accurate -- provides contextual value that Claude cannot match without external tools. The caveat is critical: Gemini's external references must be independently verified.

Expected Output

A 9-section legal-research memo (caption, court/date, procedural history, facts, issues, holding, reasoning, dicta, practical implications) with holding and dicta sharply distinguished and tailored to the named audience.
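If you plan to file these memos in a research database, it is worth checking programmatically that a response actually contains all nine sections before relying on it. One simple approach (an assumption on our part, not something from the original test) is a heading check:

```python
# The nine section labels the prompt asks for.
REQUIRED_SECTIONS = [
    "Case name and citation",
    "Court and date",
    "Procedural history",
    "Facts",
    "Issue(s) presented",
    "Holding",
    "Reasoning",
    "Dicta",
    "Practical implications",
]


def missing_sections(memo: str) -> list[str]:
    """Return the required headings that do not appear in the memo text."""
    lowered = memo.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]
```

A non-empty return value means the model skipped or renamed a section and the memo needs a re-prompt before it is usable.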

Usage Notes

Tested on Claude and Gemini: Claude was more precise at separating holding from dicta; Gemini's grounding surfaced related circuit decisions but introduced a minor citation inaccuracy. Always verify any external case references before relying on them in client work.

Originally featured in: Claude vs Gemini for Lawyers: Which AI Is Better for Legal Work in 2026?
