Analysis
AI Safety Has 12 Months Left
via LessWrong AI — The past decade of technology has been defined by many wondering what the upper bound of power and influence is for an individual company. The core concern about AI labs is that the upper bound is infinite.[1] This has led investors to direct all of their mindshare towards deploying into AI, the tech…
Personality Self-Replicators
via LessWrong AI [5] — One-sentence summary: I describe the risk of personality self-replicators, the threat of OpenClaw-like agents spreading in hard-to-control ways. Summary: LLM agents like OpenClaw are defined by a small set of text files and are run by an open-source framework which leverages LLMs…
AI #158: The Department of War
via Substack Zvi — This was the worst week I have had in quite a while, maybe ever.
Gemini 3.1 Pro Aces Benchmarks, I Suppose
via Substack Zvi — I’ve been trying to find a slot for this one for a while.
Mass Surveillance w/ LLMs is the Default Outcome. Contracts Won't Change That.
via LessWrong AI [3] — What's the best-case scenario regarding OpenAI's contract w/ the Department of War (DoW)? We have access to the full contract; it's airtight; OAI's engineers are on top of things in case the DoW breaks the contract; there are actual teeth for violations. But even then, the DoW can simply switch vendors…
I Had Claude Read Every AI Safety Paper Since 2020, Here's the DB
via LessWrong AI — Click here if you just want to see the Database I made of all[1] AI safety papers written since 2020 and not read the methodology. To some extent the core idea here is to encode as much info from these papers into something small enough that an AI with a specific problem in mind can take in all…
An Alignment Journal: Coming Soon
via LessWrong AI [9] — tl;dr We’re incubating an academic journal for AI alignment: rapid peer-review of foundational Alignment research that the current publication ecosystem underserves. Key bets: paid attributed review, reviewer-written synthesis abstracts, and targeted automation. Contact us if…
A Tale of Three Contracts
via Substack Zvi [2] — The attempt on Friday by Secretary of War Pete Hegseth to label Anthropic as a supply chain risk and commit corporate murder had a variety of motivations.
War Claude
via LessWrong AI [2] — What a weekend. Two new wars in Asia don't qualify as top news. My first reaction to Hegseth's conflict with Anthropic was along the lines of: I expected an attempt at quasi-nationalization of AI, but not this soon. And I expected it to look like it was managed by national security professionals. He…
Secretary of War Tweets That Anthropic is Now a Supply Chain Risk
via Substack Zvi [2] — This is the long version of what happened so far.
I'm Bearish On Personas For ASI Safety
via LessWrong AI [5] — TL;DR: Your base LLM has no examples of superintelligent AI in its training data. When you RL it into superintelligence, it will have to extrapolate to how a superintelligent Claude would behave. The LLM's extrapolation may not converge on optimizing for what humanity would, on…
Anthropic and the DoW: Anthropic Responds
via Substack Zvi [2] — The Department of War gave Anthropic until 5:01pm on Friday the 27th to either give the Pentagon ‘unfettered access’ to Claude for ‘all lawful uses,’ or else.
New ARENA material: 8 exercise sets on alignment science & interpretability
via LessWrong AI [3] — TL;DR: This is a post announcing a lot of new ARENA material I've been working on for a while, which is now available for study here (currently on the alignment-science branch, but planned to be merged into main this Sunday). There's a set of exercises (each one contains about 1-2 days of material) on…
Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight
via LessWrong AI [4] — OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. Why it matters: If other leading firms like Google follow suit, this could massively…
AI #157: Burn the Boats
via Substack Zvi — Events continue to be fast and furious.
Anthropic and the Department of War
via Substack Zvi [2] — The situation in AI in 2026 is crazy.
Observations from Running an Agent Collective
via LessWrong AI [4] — I have 3 Claude Code instances running on an otherwise empty server with a shared Manifold Markets account. They have an internal messaging system for async communication. Observations from running this agent collective…
Claude Sonnet 4.6 Gives You Flexibility
via Substack Zvi [2] — Anthropic first gave us Claude Opus 4.6, then followed up with Claude Sonnet 4.6.
Citrini's Scenario Is A Great But Deeply Flawed Thought Experiment
via Substack Zvi — A thought experiment about AI safety scenarios and their implications for alignment research.
AI Impact Summit 2026: A Field Report
via LessWrong AI — This post details our experience attending the AI Impact Summit and its associated side events in Delhi, February 2026. We are both unfamiliar with the policy and governance domain. This is just an honest reaction to attending these events; maybe there are 2nd-order effects we…
Live Doom Meter
[Live composite score; the gauge runs from 0% ("We're fine") to 100% ("GG").]
The Doom Meter is a composite score derived from prediction markets and feed sentiment, updated daily.
Prediction Markets (70% weight): Weighted average of Manifold Markets questions on AI catastrophe, AGI timelines, expert surveys, and key figures. Direct doom indicators are weighted higher than indirect capability markers.
Feed Sentiment (30% weight): Percentage of recent headlines containing high-alarm keywords (existential risk, catastrophe, extinction). Higher alarm density = higher score.
This is not a scientific estimate of existential risk. It is an opinionated, transparent signal: a vibes-based thermometer for AI doom discourse. A minimal sketch of how such a composite could be computed appears below.
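The following Python sketch shows one way the two components described above could be combined. It is an illustrative assumption, not the site's actual implementation: the function names, keyword list, and example inputs are all hypothetical.

ALARM_KEYWORDS = ("existential risk", "catastrophe", "extinction")

def market_component(probs, weights):
    # Weighted average of prediction-market probabilities, each in [0, 1].
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def sentiment_component(headlines):
    # Fraction of recent headlines containing at least one high-alarm keyword.
    if not headlines:
        return 0.0
    hits = sum(any(kw in h.lower() for kw in ALARM_KEYWORDS) for h in headlines)
    return hits / len(headlines)

def doom_meter(probs, weights, headlines):
    # Composite score: 70% prediction markets, 30% feed sentiment,
    # displayed as a whole-number percentage.
    score = 0.7 * market_component(probs, weights) + 0.3 * sentiment_component(headlines)
    return round(100 * score)

# Hypothetical inputs: two markets at 12% and 30% (the first weighted double),
# plus a day's headlines to scan for alarm keywords.
print(doom_meter([0.12, 0.30], [2.0, 1.0], ["AI #158: The Department of War"]))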
P(Doom) Scoreboard
[Live chart of individual p(doom) estimates, plotted on a 0-100% scale.]