Tracking AI existential risk. Auto-aggregated headlines. Human-curated analysis.
AGGREGATING 47 SOURCES · UPDATED LIVE
Research

You can only build safe ASI if ASI is globally banned

Zac Boring · April 16, 2026 · 1 min read

Sometimes people suggest that we should simply build "safe" artificial superintelligence (ASI), rather than the presumably "unsafe" kind.[1] There are various flavors of "safe" people suggest. Sometimes they suggest building "aligned" ASI: a fully agentic, autonomous, god-like ASI running around, but one that really, really loves you and will definitely do the right thing. Sometimes they suggest we should simply build "tool AI" or "non-agentic" AI. Sometimes they have even more exoti…

By Connor Leahy

Read the full article at Alignment Forum →