The case for AI safety capacity-building work
TL;DR: I think many of the marginal hires at larger organizations doing AI safety technical or policy work right now (including e.g. Apollo, Redwood, METR, RAND TASP, GovAI, Epoch, UKAISI, and Anthropic’s safety teams) would be capable of founding (or being early employees of) organizations focused on building capacity in AI safety, and would have more impact by doing so. I think the impact case for this kind of work is supported by first-principles arguments (the multiplier effect), larger-scale
By abergal