DOOM LEVEL
Latest Headlines
Auto-Updated
Diary of a "Doomer": 12+ years arguing about AI risk (part 1)
via LessWrong AI [4] — How I learned about Deep Learning. As far as I know, I’m the second person ever to get into the field of AI largely because I was worried about the risk of human extinction. In late 2012, while recovering from some minor heartbreak with the help of some…
Redefining the future of software engineering
via MIT Technology Review [4] — Software engineering has experienced two seismic shifts this century. First was the rise of the open source movement, which gradually made code accessible to developers and engineers everywhere. Second, the adoption of development operations…
A Retrospective of Richard Ngo's 2022 List of Conceptual Alignment Projects
via LessWrong AI [8] — Written very quickly for the Inkhaven Residency. In 2022, Richard Ngo wrote a list of 26 Conceptual Alignment Research Projects. Now that it’s 2026, I’d like to revisit this list of projects, note which ones have already been done, and give my thoughts on…
Claude Mythos #3: Capabilities and Additions
via Substack Zvi [999] — To round out coverage of Mythos, today covers capabilities other than cyber and anything else not covered by the first two posts, including new reactions and details.
OpenFlo: Automated UX Evaluation via Simulated Human Web Interaction with GUI Grounding
via ArXiv cs.AI [4] — Evaluating web usability typically requires time-consuming user studies and expert reviews, which often limits iteration speed during product development, especially for small teams and agile workflows. We present OpenFlo, a user-experience evaluation agent…
Anthropic repeatedly accidentally trained against the CoT, demonstrating inadequate processes
via Alignment Forum [999] — It turns out that Anthropic accidentally trained against the chain of thought of Claude Mythos Preview in around 8% of training episodes. This is at least the second independent incident in which Anthropic accidentally exposed their model's CoT to the…
Summary: AI Governance to Avoid Extinction
via MIRI [999] — With AI capabilities rapidly increasing, humans appear close to developing AI systems that are better than human experts across all domains. This raises a series of questions about how the world will—and should—respond. In the research paper AI Governance to…
AI Safety's Biggest Talent Gap Isn't Researchers. It's Generalists.
via LessWrong AI [5] — This post was cross-posted to the EA Forum. TL;DR: One of the largest talent gaps in AI safety is competent generalists: program managers, fieldbuilders, operators, org leaders, chiefs of staff, founders. Ambitious, competent junior people could develop the…
Read OpenAI’s latest internal memo about beating the competition — including Anthropic
via The Verge AI [4] — OpenAI's chief revenue officer, Denise Dresser, sent a four-page memo to employees on Sunday about the company's strategic direction, emphasizing the need to lock in users and grow its enterprise business. The memo, which was viewed by The Verge,…
Political Violence Is Never Acceptable
via Substack Zvi [999] — Nor is the threat or implication of violence.
Talk English, Think Something Else
via LessWrong AI [4] — There's an adage from programming in C++ which goes something like "Yes, you write C, but you imagine the machine code as you do." I assumed this was bullshit, that nobody actually does this. Am I supposed to imagine writing the machine code, and then…
Sustained Impact of Agentic Personalisation in Marketing: A Longitudinal Case Study
via ArXiv cs.AI [4] — In consumer applications, Customer Relationship Management (CRM) has traditionally relied on the manual optimisation of static, rule-based messaging strategies. While adaptive and autonomous learning systems offer the promise of scalable personalisation, it…
OpenKedge: Governing Agentic Mutation with Execution-Bound Safety and Evidence Chains
via ArXiv cs.AI [3] — The rise of autonomous AI agents exposes a fundamental flaw in API-centric architectures: probabilistic systems directly execute state mutations without sufficient context, coordination, or safety guarantees. We introduce OpenKedge, a protocol that…
Daycare illnesses
via LessWrong AI [4] — Before I had a baby I was pretty agnostic about the idea of daycare. I could imagine various pros and cons but I didn’t have a strong overall opinion. Then I started mentioning the idea to various people. Every parent I spoke to brought up a consideration…
Catching illicit distributed training operations during an AI pause
via LessWrong AI [3] — Last year, my colleagues on MIRI’s Technical Governance Team proposed an international agreement to halt risky development of superhuman artificial intelligence until it can be done safely. The agreement would require all clusters of AI chips with more…
Pausing AI Is the Best Answer to Post-Alignment Problems
via LessWrong AI [5] — Even if we solve the AI alignment problem, we still face post-alignment problems, which are all the other existential problems that AI may bring. People have identified various imposing problems that we may need to solve before developing ASI. An…
Dario probably doesn't believe in superintelligence
via LessWrong AI [6] — But I had to get 500 words out! I think the 2013 conversation is interesting reading as a piece of history, separate from the top-level question, and recommend reading that. I think many people have a relationship with Anthropic that is premised on a false…
Claude Mythos #2: Cybersecurity and Project Glasswing
via Substack Zvi [999] — Anthropic is not going to release its new most capable model, Claude Mythos, to the public any time soon.
Have we already lost? Part 2: Reasons for Doom
via LessWrong AI [9] — Written very quickly for the Inkhaven Residency. As I take the time to reflect on the state of AI Safety in early 2026, one question feels unavoidable: have we, as the AI Safety community, already lost? That is, have we passed the point of no return, after…
Claude Mythos: The System Card
via Substack Zvi [999] — Claude Mythos is different.
Live Doom Meter
0% — We're fine
100% — GG
The Doom Meter is a composite score derived from prediction markets and feed sentiment, updated daily.
Prediction Markets (70% weight) — Weighted average of Manifold Markets questions on AI catastrophe, AGI timelines, expert surveys, and key figures. Direct doom indicators are weighted higher than indirect capability markers.
Feed Sentiment (30% weight) — Percentage of recent headlines containing high-alarm keywords (existential risk, catastrophe, extinction). Higher alarm density = higher score.
This is not a scientific estimate of existential risk. It is an opinionated, transparent signal — a vibes-based thermometer for AI doom discourse.
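For concreteness, here is a minimal TypeScript sketch of how a composite like this could be computed. It is an illustration under stated assumptions, not the site's actual code: the function names, input shapes, and keyword list are hypothetical; only the 70/30 weighting and the example keywords come from the description above.

```ts
// Hypothetical sketch of the Doom Meter composite. Names and input shapes
// are illustrative assumptions, not the site's real implementation.

const ALARM_KEYWORDS = ["existential risk", "catastrophe", "extinction"];

// Share of recent headlines containing at least one high-alarm keyword, in [0, 1].
function feedSentimentScore(headlines: string[]): number {
  if (headlines.length === 0) return 0;
  const alarming = headlines.filter((h) =>
    ALARM_KEYWORDS.some((kw) => h.toLowerCase().includes(kw))
  ).length;
  return alarming / headlines.length;
}

// Weighted average of market probabilities (each p in [0, 1]); direct doom
// questions would carry larger weights than indirect capability markers.
function predictionMarketScore(markets: { p: number; weight: number }[]): number {
  const totalWeight = markets.reduce((sum, m) => sum + m.weight, 0);
  if (totalWeight === 0) return 0;
  return markets.reduce((sum, m) => sum + m.p * m.weight, 0) / totalWeight;
}

// Composite score: 70% prediction markets, 30% feed sentiment, shown as a percentage.
function doomMeter(markets: { p: number; weight: number }[], headlines: string[]): number {
  const score = 0.7 * predictionMarketScore(markets) + 0.3 * feedSentimentScore(headlines);
  return Math.round(score * 100);
}
```

As a worked example: markets averaging 0.20 with a feed where 40% of headlines trip the keyword filter would give 0.7 × 0.20 + 0.3 × 0.40 = 0.26, displayed as 26%.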
P(Doom) Scoreboard
Recent Voices
We are creating something that will be more powerful than us. I don't know a good precedent for a less intelligent thing managing a more intelligent thing.
— Geoffrey Hinton, Nobel Prize Lecture, Dec 2024
If you're not worried about AI safety, you're not paying attention.
— Sen. Blumenthal, Senate AI Hearing, 2024
The probability of doom is high enough that we should be working very hard to reduce it.
— Yoshua Bengio, MILA Talk, 2024