You can only build safe ASI if ASI is globally banned
People sometimes suggest that we should simply build "safe" artificial superintelligence (ASI), rather than the presumably "unsafe" kind.[1] There are various flavors of "safe" that people propose.

Sometimes they suggest building "aligned" ASI: you have a fully agentic, autonomous, god-like ASI running around, but it really, really loves you and will definitely do the right thing.

Sometimes they suggest we should simply build "tool AI" or "non-agentic" AI.

Sometimes they have even more exotic
By Connor Leahy