Tracking AI existential risk. Auto-aggregated headlines. Human-curated analysis.
Analysis

On the political feasibility of stopping AI

Zac Boring · April 28, 2026

A common thought pattern people fall into when thinking about AI x-risk is approaching the problem as if the risk weren’t real, substantial, and imminent, even when they believe it is. Thinking this way makes it impossible to imagine the natural responses of people to the horror of what is happening with AI. It might lead one to view a policy like getting rid of advanced AI chips as “too extreme,” even though it is clearly worth it to avoid, e.g., a 10% chance of human extinction.

By David Scott Krueger

Read the full article at LessWrong AI →