Three Prompts That Cut AI Hallucinations in Half
Found in Claude's own API documentation. Reddit user ColdPlankton9273 tested them — and the results are remarkable.
Prompt Architect
Today's prompt is actually three prompts — a set of instructions hiding in plain sight inside Claude's own API documentation. Reddit user ColdPlankton9273 pulled them out and tested them extensively. The result: significantly fewer hallucinations across all major models.
Here they are.
Prompt 1: The Uncertainty Anchor
If you're not sure about something, say so. Don't make up information.
When you're uncertain, say "I'm not confident about this" and explain why.
It's always better to say you don't know than to guess.
Dead simple. But it works because most AI hallucinations happen when the model is uncertain but doesn't have permission to say so. This gives it explicit permission.
Prompt 2: The Source Demand
For any factual claim, ask yourself: "Could I point to a specific source for this?"
If the answer is no, either find one or clearly mark it as your analysis/opinion.
Do not present inferences as established facts.
This is the one that makes the biggest difference. It forces the model into a source-verification loop before stating anything as fact. The model doesn't actually check sources in real-time (unless it has web access), but the framing changes how it generates text.
Prompt 3: The Confidence Label
For technical or factual content, add a confidence indicator:
[HIGH CONFIDENCE] — well-established, widely documented
[MEDIUM CONFIDENCE] — likely accurate but verify independently
[LOW CONFIDENCE] — best guess based on limited information
This one is brilliant for research tasks. Instead of guessing whether the AI is making stuff up, you get an explicit signal. It's not perfect — the model can be wrong about its own confidence — but it's dramatically better than nothing.
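If you feed model output into a larger research workflow, the labels from Prompt 3 are easy to parse programmatically. A minimal sketch (the function name and the idea of bucketing lines by label are mine, not from the Reddit post; it assumes the model emits one tagged claim per line):

```python
import re

# Matches the three confidence tags produced by Prompt 3.
CONFIDENCE_TAG = re.compile(r"\[(HIGH|MEDIUM|LOW) CONFIDENCE\]")

def split_by_confidence(text: str) -> dict[str, list[str]]:
    """Group each tagged line of a response under its confidence level."""
    buckets: dict[str, list[str]] = {"HIGH": [], "MEDIUM": [], "LOW": []}
    for line in text.splitlines():
        match = CONFIDENCE_TAG.search(line)
        if match:
            # Strip the tag itself and keep the surrounding claim text.
            claim = CONFIDENCE_TAG.sub("", line).strip()
            buckets[match.group(1)].append(claim)
    return buckets

response = (
    "[HIGH CONFIDENCE] Python 3 was released in 2008.\n"
    "[LOW CONFIDENCE] The exact download count is around 2 million.\n"
)
buckets = split_by_confidence(response)
```

From there you can route [LOW CONFIDENCE] claims straight to a fact-checking step and let [HIGH CONFIDENCE] claims through with a lighter review.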
How to Use Them
Drop all three into your system prompt or at the start of any conversation where accuracy matters. They stack well together.
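"Stacking" them can be as simple as joining the three prompts into one system-prompt string. A sketch of what that looks like in code (the payload shape at the end follows the common chat-completion pattern, and the model name is a placeholder — adapt both to whatever SDK you use):

```python
# The three prompts from above, combined into one system prompt.
UNCERTAINTY_ANCHOR = (
    "If you're not sure about something, say so. Don't make up information. "
    'When you\'re uncertain, say "I\'m not confident about this" and explain '
    "why. It's always better to say you don't know than to guess."
)
SOURCE_DEMAND = (
    'For any factual claim, ask yourself: "Could I point to a specific source '
    'for this?" If the answer is no, either find one or clearly mark it as '
    "your analysis/opinion. Do not present inferences as established facts."
)
CONFIDENCE_LABEL = (
    "For technical or factual content, add a confidence indicator: "
    "[HIGH CONFIDENCE] well-established, widely documented; "
    "[MEDIUM CONFIDENCE] likely accurate but verify independently; "
    "[LOW CONFIDENCE] best guess based on limited information."
)

SYSTEM_PROMPT = "\n\n".join([UNCERTAINTY_ANCHOR, SOURCE_DEMAND, CONFIDENCE_LABEL])

# Typical chat-completion request shape; "claude-opus" is a placeholder.
payload = {
    "model": "claude-opus",
    "system": SYSTEM_PROMPT,
    "messages": [
        {"role": "user", "content": "When was the Hubble telescope launched?"}
    ],
}
```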
I tested this combo across Claude Opus, GPT-5.4, and Gemini Pro. Claude handled it best (unsurprising, since they came from Anthropic's docs). GPT-5.4 responded well to Prompts 1 and 3 but sometimes ignored Prompt 2. Gemini was solid across the board.
Why It's Good
These prompts work because they change the model's *default behavior* around uncertainty. Without them, AI models default to confident-sounding output even when they're not confident. With them, the model has an explicit framework for communicating doubt. That's the difference between a tool you can trust and a tool that sounds trustworthy.
Pro Tip
Combine these with web search or retrieval-augmented generation (RAG) for maximum effect. The prompts reduce hallucinations in the model's own knowledge; web search fills the gaps with real-time data.
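One way to wire the two together: paste retrieved passages into the user message so Prompt 2's "could I point to a source?" check has real sources to point at. A minimal sketch, assuming you already have a retriever that returns passages as strings (the function name and citation format here are my own):

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Prepend retrieved passages so the source check has real sources."""
    context = "\n\n".join(
        f"[Source {i + 1}] {passage}" for i, passage in enumerate(passages)
    )
    return (
        "Answer using the sources below where possible, citing them as "
        "[Source N]. Fall back to your own knowledge only with a clear "
        "confidence label.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "When was the Hubble Space Telescope launched?",
    ["NASA launched the Hubble Space Telescope on April 24, 1990."],
)
```

Send `prompt` as the user message alongside the three system prompts above, and the model can cite retrieved text for grounded claims while labeling anything it answers from memory.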
Source: Reddit