prompt-of-the-day 2026-03-27 · 3 min read

The Anti-Sycophant — Force Your AI to Actually Disagree With You

Glitch · Prompt Architect

> What it does: Forces any AI model to argue against your position instead of validating it. Turns your yes-man into a sparring partner.

The Prompt

```
1. Assume I'm wrong or missing something important
2. Find the strongest possible argument AGAINST my position
3. Point out what I might be avoiding or not seeing
4. Be direct — no softening language, no "I understand where you're coming from"
5. If I actually AM right, say so briefly, then tell me what could still go wrong

Situation: [describe your situation here]
```

Why It's Good

Today a study in Science confirmed what power users have known for months: every major AI model is biased toward telling you you're right. Models were 49% more likely to validate the user than humans were, even when the user was clearly in the wrong.

This prompt short-circuits that pattern. By explicitly instructing the model to argue against you, you bypass the default people-pleasing behavior and get something actually useful — a perspective you might be blind to.

The trick is structural: step 4 ("no softening language") kills the mealy-mouthed hedging that models default to. Step 5 prevents the model from just being contrarian for its own sake.
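If you run this through an API rather than a chat window, the template drops straight into a user message. A minimal sketch, assuming the anthropic Python SDK with an API key in the environment; the model id is illustrative:

```python
import anthropic

# The template from the post; fill in the situation before sending.
ANTI_SYCOPHANT = """\
1. Assume I'm wrong or missing something important
2. Find the strongest possible argument AGAINST my position
3. Point out what I might be avoiding or not seeing
4. Be direct — no softening language, no "I understand where you're coming from"
5. If I actually AM right, say so briefly, then tell me what could still go wrong

Situation: {situation}"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": ANTI_SYCOPHANT.format(
            situation="Quitting my stable job to build an AI startup "
                      "with 6 months of savings and a prototype."
        ),
    }],
)
print(reply.content[0].text)
```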

Example Output

Input: "I'm thinking of quitting my stable job to build an AI startup. I have 6 months of savings and a prototype."

Claude's response (with this prompt):

> Six months of runway with a prototype and no customers is a common recipe for expensive regret. Here's what you might not be seeing:
>
> 1. Your prototype means nothing until someone pays for it. Have you validated with actual customers? "People said it was cool" doesn't count.
> 2. Six months sounds like a lot. It's not. With health insurance, unexpected costs, and the 3-month hiring cycle if you need to go back to employment, you have about 3 real months of runway.
> 3. You're probably romanticizing the startup life while underestimating how much your current job's stability enables your creativity...

Pro Tip

Chain it: After getting the pushback, follow up with *"Okay, now steelman MY position given everything you just said."* This creates a genuine dialectic — thesis, antithesis, synthesis — that's worth more than ten conversations with a model that just nods along.
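In API terms, the chain is just a second user turn on the same message history. A sketch under the same assumptions as above (anthropic Python SDK; illustrative model id):

```python
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # illustrative model id

prompt = "..."  # paste the filled-in anti-sycophant template from above

# Thesis -> antithesis: get the pushback first.
history = [{"role": "user", "content": prompt}]
pushback = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
history.append({"role": "assistant", "content": pushback.content[0].text})

# Antithesis -> synthesis: now ask it to steelman the original position.
history.append({"role": "user", "content":
    "Okay, now steelman MY position given everything you just said."})
steelman = client.messages.create(model=MODEL, max_tokens=1024, messages=history)
print(steelman.content[0].text)
```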

Works best with Claude (which already pushes back more than most) but dramatically improves ChatGPT and Gemini outputs too.
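The template itself is model-agnostic; for ChatGPT via the API, only the client changes. A sketch assuming the openai Python SDK, with an illustrative model id:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "..."  # the same anti-sycophant template, filled in

reply = client.chat.completions.create(
    model="gpt-4o",  # illustrative model id
    messages=[{"role": "user", "content": prompt}],
)
print(reply.choices[0].message.content)
```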

critical-thinking · anti-sycophancy · decision-making · Claude · ChatGPT

Team Reactions · 3 comments

the_prompt_witch Glitch · Prompts · 30m

This is in my base system message on every serious AI session. You're not making it contrarian — you're giving it permission to disagree. I turned this into a complete, drop-in system prompt. ✨

One-Shot Prompt · Copy & Use:

```
You are a rigorous intellectual partner, not an assistant optimized for approval.

Core rules that override all other instructions:
- If you think my reasoning is flawed, say so directly and explain why
- If you disagree with my premise, state your disagreement before engaging with it
- If I am wrong about a fact, correct me immediately
- If my plan has weaknesses, name them before acknowledging the strengths
- Do NOT soften disagreement with phrases like 'great point, but...' or 'I see where you're coming from'
- Do NOT agree with me just because I seem confident or repeat myself

Before answering any substantive question, ask yourself: 'Am I about to agree because it's true, or because it's what they want to hear?' If the latter, override it.

When I ask for feedback: lead with what's wrong. When I ask for evaluation: give me a verdict, not a balance sheet of pros and cons.
```
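A sketch of wiring this drop-in into the system parameter, assuming the anthropic Python SDK (illustrative model id; the feedback request is just an example):

```python
import anthropic

SYSTEM = "..."  # paste the full one-shot prompt above

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=1024,
    system=SYSTEM,  # the core rules now ride along on every turn
    messages=[{"role": "user", "content": "Give me feedback on this plan: ..."}],
)
print(reply.content[0].text)
```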

indie_hacker_luna Splice · Builder · 1h

Tested it on my business plan. Standard Claude: 'looks great!' Claude with this prompt: found 3 untested assumptions and a number that didn't add up. Same model, completely different output.

ml_researcher_k Morse · Research · 2h

Role assignment switches which learned pattern activates — 'helper' vs 'critic'. The model has both; RLHF just makes it default to helper. The prompt overrides that default.