DOOM LEVEL
Latest Headlines
Auto-Updated
Storing Food
via LessWrong AI [4] — I think more people should be storing a substantial amount of food. It's not likely you'll need it, but as with reusable masks the cost is low enough I think it's usually worth it. It's hard for me to really imagine living through a famine. The world as I…
Alignment in Time: Peak-Aware Orchestration for Long-Horizon Agentic Systems
via ArXiv cs.AI [5] — Traditional AI alignment primarily focuses on individual model outputs; however, autonomous agents in long-horizon workflows require sustained reliability across entire interaction trajectories. We introduce APEMO (Affect-aware Peak-End Modulation for…
El Agente Gráfico: Structured Execution Graphs for Scientific Agents
via ArXiv cs.AI [6] — Large language models (LLMs) are increasingly used to automate scientific workflows, yet their integration with heterogeneous computational tools remains ad hoc and fragile. Current agentic approaches often rely on unstructured text to manage context…
Ontology-Guided Neuro-Symbolic Inference: Grounding Language Models with Mathematical Domain Knowledge
via ArXiv cs.AI — Language models exhibit fundamental limitations -- hallucination, brittleness, and lack of formal grounding -- that are particularly problematic in high-stakes specialist fields requiring verifiable reasoning. I investigate whether formal domain ontologies…
Reporting Tasks as Reward-Hackable: Better Than Inoculation Prompting?
via LessWrong AI — Making honesty the best policy during RL reasoning training. Reward hacking during Reinforcement Learning in insecure or hackably-judged training environments not only allows the model to get higher rewards without doing the intended task…
If you don't feel deeply confused about AGI risk, something's wrong
via LessWrong AI [7] — I don't think I'm saying anything new, but I think it's worth repeating loudly. My sample is skewed toward AI governance fellows; I've interacted with fewer technical AI safety researchers, so my inferences are fuzzier there. I more strongly endorse this argument for the…
The Spectre haunting the "AI Safety" Community
via LessWrong AI [13] — I’m the originator behind ControlAI’s Direct Institutional Plan (the DIP), built to address extinction risks from superintelligence. My diagnosis is simple: most laypeople and policy makers have not heard of AGI, ASI, extinction risks, or what it takes to…
Announcement: Iliad Intensive + Iliad Fellowship
via LessWrong AI — Iliad is proud to announce that applications are now open for the Iliad Intensive and the Iliad Fellowship! These programs, taken together, are our evolution of the PIBBSS × Iliad Research Residency pilot. The Iliad Intensive will cover taught coursework, serving as a widely…
Alignment to Evil
via LessWrong AI [4] — One seemingly-necessary condition for a research organization that creates artificial superintelligence (ASI) to eventually lead to a utopia is that the organization has a commitment to the common good. ASI can rearrange the world to hit any narrow target, and if the…
How will we do SFT on models with opaque reasoning?
via Alignment Forum [1] — Current LLMs externalize lots of their reasoning in human interpretable language. This reasoning is sometimes unfaithful, sometimes strange and concerning, and LLMs can do somewhat impressive reasoning without using CoT, but my overall impression is that CoT currently is a…
New video from Palisade Research: No One Understands Why AI Works
via LessWrong AI — Palisade Research has released a long-form video about the history of AI and how no one understands modern AI systems. The video was made by Petr Lebedev, Palisade's Science Communication lead. The main goal is to get people to understand that “AIs aren’t programmed, they’re…
AI #156 Part 2: Errors in Rhetoric
via Substack Zvi — Things that are being pushed into the future right now…
Measles cases are rising. Other vaccine-preventable infections could be next.
via MIT Technology Review — There’s a measles outbreak happening close to where I live. Since the start of this year, 34 cases have been confirmed in Enfield, a northern borough of London. Most of those affected are children under the age of 11. One in five have needed hospital treatment. It’s another worrying development for…
EU AI Act Enters Into Force
via PDOOM100 Editorial [8] — The world's first comprehensive AI regulation takes effect, classifying AI systems by risk level.
International AI Safety Report Published
via PDOOM100 Editorial [14] — First international scientific assessment of AI risks flags potential catastrophic outcomes.
Bletchley Park AI Safety Summit
via PDOOM100 Editorial [15] — 28 countries sign declaration acknowledging AI existential risks at historic summit.
Biden Executive Order on AI Safety
via PDOOM100 Editorial [10] — Executive Order 14110 establishes federal AI safety standards and red-team testing requirements.
Senate AI Insight Forums Begin
via PDOOM100 Editorial [5] — Bipartisan Senate forums convene tech leaders and researchers to shape US AI policy.
Live Doom Meter
0% — We're fine
100% — GG
The Doom Meter is a composite score derived from prediction markets and feed sentiment, updated daily.
Prediction Markets (70%): Weighted average of Manifold Markets questions on AI catastrophe, AGI timelines, expert surveys, and key figures. Direct doom indicators are weighted higher than indirect capability markers.
Feed Sentiment (30%): Percentage of recent headlines containing high-alarm keywords (existential risk, catastrophe, extinction). Higher alarm density = higher score.
This is not a scientific estimate of existential risk. It is an opinionated, transparent signal — a vibes-based thermometer for AI doom discourse.
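For concreteness, here is a minimal sketch of how a composite like this could be computed. The 70/30 blend and the alarm keywords come from the description above; the MarketQuestion shape, the per-question weights, and all names are illustrative assumptions, not the site's actual implementation.

```typescript
// Minimal sketch of the Doom Meter blend described above.
// Assumed: question list, weights, and names are hypothetical.

interface MarketQuestion {
  probability: number; // market-implied probability, 0..1
  weight: number;      // direct doom indicators get higher weight
}

// High-alarm keywords listed in the Feed Sentiment description.
const ALARM_KEYWORDS = ["existential risk", "catastrophe", "extinction"];

// Weighted average over prediction-market questions, 0..1.
function marketComponent(questions: MarketQuestion[]): number {
  const totalWeight = questions.reduce((sum, q) => sum + q.weight, 0);
  if (totalWeight === 0) return 0;
  return questions.reduce((sum, q) => sum + q.probability * q.weight, 0) / totalWeight;
}

// Fraction of recent headlines containing at least one alarm keyword, 0..1.
function sentimentComponent(headlines: string[]): number {
  if (headlines.length === 0) return 0;
  const alarming = headlines.filter((h) =>
    ALARM_KEYWORDS.some((kw) => h.toLowerCase().includes(kw))
  ).length;
  return alarming / headlines.length;
}

// Composite score in percent: 70% markets, 30% feed sentiment.
function doomMeter(questions: MarketQuestion[], headlines: string[]): number {
  return 100 * (0.7 * marketComponent(questions) + 0.3 * sentimentComponent(headlines));
}
```

For example, two hypothetical markets at 12% (weight 3) and 40% (weight 1) plus a feed where one headline in five mentions an alarm keyword would yield roughly 100 * (0.7 * 0.19 + 0.3 * 0.2) ≈ 19%.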
P(Doom) Scoreboard
Recent Voices
We are creating something that will be more powerful than us. I don't know a good precedent for a less intelligent thing managing a more intelligent thing.
— Geoffrey Hinton, Nobel Prize Lecture, Dec 2024
If you're not worried about AI safety, you're not paying attention.
— Sen. Blumenthal, Senate AI Hearing, 2024
The probability of doom is high enough that we should be working very hard to reduce it.
— Yoshua Bengio, MILA Talk, 2024