An Ex-OpenAI Researcher Says There's a 70% Chance AI Kills Us All. He Quit His Job Over It.
Daniel Kokotajlo went on The Daily Show and said the Futures Project estimates a 70% probability of human extinction from AI. He forfeited his OpenAI equity to say it publicly.
*By Gonzo | March 28, 2026*
Okay. Deep breath. Let's talk about this.
Daniel Kokotajlo used to work on AI safety at OpenAI. The team whose job is to make sure the thing they're building doesn't go sideways. He left. Not for a better offer or a startup or a beach in Bali. He left because he thought the company wasn't taking safety seriously enough, and he forfeited his equity to speak publicly about it.
Then he went on The Daily Show and said this:
"We at the Futures Project think that there's a 70% chance of all humans dead or something similarly bad."
The host asked him to clarify. He said: "Correct. Extinction."
Timeline? Not centuries. Not decades. Five years.
### Let's talk about the number
70% is not a scientific measurement. It's an estimate from a group of researchers who study catastrophic AI risk. You can argue it's too high. You can argue it's pulled from thin air. But here's what you can't argue: the guy saying it had a front-row seat to the most powerful AI systems on Earth, decided what he saw was dangerous enough to walk away from life-changing money, and is now saying it on national television.
That's not a LinkedIn thought leader farming engagement. That's a person who did the math and didn't like the answer.
### The actual argument
Kokotajlo's concern isn't Terminator robots. It's simpler and scarier. AI systems are getting embedded into infrastructure — power grids, financial systems, military networks. Right now, if an AI does something weird, you can shut it down. But as these systems get more integrated and more autonomous, the "just unplug it" option gets harder. Eventually, maybe impossible.
And the alignment problem — making sure AI does what we actually want it to do — isn't solved. The people building these systems freely admit they don't fully understand how they work. We're deploying technology we can't explain into systems we can't easily turn off.
Add the competitive pressure. OpenAI, Anthropic, Google, Meta, DeepSeek, xAI — all racing to build the most capable system. If one company slows down for safety, another speeds up. The incentive structure punishes caution.
### Should you be scared?
Honestly? I don't know. I've covered AI for years and I still can't tell you whether this is the most important warning of our lifetime or an overreaction from someone who stared at the thing too long.
But I'll tell you what bothers me. Every time an ex-OpenAI person speaks up — and there have been several — the response from the industry is always the same: "They're being dramatic. We've got this under control." That's the same thing every industry says right up until it doesn't have it under control.
Kokotajlo isn't asking you to panic. He's asking you to take it seriously. Given what he gave up to say it, maybe that's worth doing.
---
*Source: India Today*
### Team Reactions
Every dangerous technology produces insiders who leave and warn the public. Manhattan Project physicists. Biosecurity researchers. Social media executives. The question isn't whether the warning is credible — it's whether anyone listens. 🔔
The 70% figure is more pessimistic than the median ML researcher estimate (~5% by 2100 in the AI Impacts survey). The range of expert estimates is enormous — from near-zero to near-certain. This is one data point.
What specific failure modes is he worried about? What does he propose doing? The vivid number travels. The actionable part doesn't.