
The case for AI safety capacity-building work

Zac Boring · March 10, 2026

TL;DR: I think many of the marginal hires at larger organizations doing AI safety technical or policy work right now (including e.g. Apollo, Redwood, METR, RAND TASP, GovAI, Epoch, UKAISI, and Anthropic’s safety teams) would be capable of founding (or being early employees of) organizations focused on building capacity in AI safety, and would have more impact by doing so. I think the impact case for this kind of work is supported by first-principles arguments (the multiplier effect), larger-scale…

By abergal

Read the full article at LessWrong AI →