
Why Did My Model Do That? Model Incrimination for Diagnosing LLM Misbehavior

Zac Boring · February 27, 2026 · 1 min read

Authors: Aditya Singh*, Gerson Kroiz*, Senthooran Rajamanoharan, Neel Nanda

*Aditya and Gerson are co-first authors. This work was conducted during MATS 9.0 and was advised by Senthooran Rajamanoharan and Neel Nanda.

Motivation

Imagine that a frontier lab’s coding agent has been caught putting a bug in the key code for monitoring what that agent does. Naively, this seems like a clear smoking gun that the agent is scheming. But LLMs often do weird things; they could easily just be confused, or have m…

Read the full article at Alignment Forum →