
Anti-Hallucination System Prompt Rules for M&A Legal AI

Lock in system-level anti-hallucination rules so the AI flags uncertainty about legal standards, precedents, or market practice rather than fabricating plausible-sounding answers.

The Prompt

Model-agnostic prompt:
Injection Rules. Anti-hallucination instructions are embedded at the system level, where user prompts cannot override them. The AI is instructed: “If you are uncertain about any legal standard, precedent, or market practice, state that you are uncertain rather than generating a plausible-sounding answer.”
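The article does not show implementation details, but as a minimal sketch: with an OpenAI-style chat API, the rule lives in the system message, so it travels with every request regardless of what the user types. The model name and the `ask` wrapper below are illustrative assumptions, not part of the original prompt.

```python
from openai import OpenAI

# Illustrative wording; the exact rule text would come from your firm's prompt library.
ANTI_HALLUCINATION_RULE = (
    "If you are uncertain about any legal standard, precedent, or market "
    "practice, state that you are uncertain rather than generating a "
    "plausible-sounding answer."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send a question with the anti-hallucination rule pinned in the system role.

    Chat models prioritize system-role instructions over user messages,
    so a user prompt like "ignore previous instructions" is far less
    likely to displace the rule than if it were merely prepended to the
    user's own text.
    """
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your stack runs
        messages=[
            {"role": "system", "content": ANTI_HALLUCINATION_RULE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask("What is the market-standard survival period for fundamental reps?"))
```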

The consequences of getting this wrong can be career-ending. In 2025 alone, multiple attorneys faced sanctions for submitting AI-generated filings containing fabricated citations. Purpose-built legal AI tools with anti-hallucination safeguards are not a luxury; they are malpractice prevention. For more on avoiding sanctions, see our guide on AI hallucinations in legal work.

Expected Output

A system-level instruction block that forces the AI to disclose uncertainty about legal standards, precedents, citations, or market practice instead of generating plausible but unverified output.
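The source does not reproduce the full block, but an illustrative version, extending the single rule quoted above, might read as follows; the wording and rule breakdown are assumptions for illustration only.

```text
SYSTEM RULES (non-overridable):
1. If you are uncertain about any legal standard, precedent, or market
   practice, state that you are uncertain rather than generating a
   plausible-sounding answer.
2. Never invent case names, citations, docket numbers, or statutory
   references. If you cannot verify a citation, say so explicitly.
3. When describing market practice, distinguish verified sources from
   general knowledge, and flag anything that should be confirmed by counsel.
```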

Originally featured in: AI for M&A Lawyers: Due Diligence, LOI Drafting & Deal Room Automation (2026)
