Product Alignment is not Superintelligence Alignment (and we need the latter to survive)
By plex

tl;dr: progress on making Claude friendly[1] is not the same as progress on making it safe to build godlike superintelligence. solving the former does not imply we get a good future.[2] please track the difference.

The term 'Alignment' was coined[3] to point to the technical problem of understanding how to build minds such that, if they were to become strongly and generally superhuman, things would go well. It has been increasingly adopted by frontier AI labs and much of the rest of the AI safety community.