A coalition of leading AI researchers from OpenAI, Google DeepMind, Anthropic, and other major organizations has released a position paper urging deeper research into Chain-of-Thought (CoT) monitoring — a technique that could become critical for understanding and controlling next-generation AI reasoning models.
The paper highlights CoT as a core component of frontier models like OpenAI’s o3 and DeepSeek’s R1, which externalize reasoning processes in a way similar to human problem-solving. As these models power increasingly autonomous AI agents, researchers argue that CoT monitoring offers a rare window into their internal decision-making.
Signatories include industry figures such as Mark Chen (OpenAI), Ilya Sutskever (Safe Superintelligence), Geoffrey Hinton, Shane Legg (DeepMind), and John Schulman (Thinking Machines), with contributions from institutions including the UK AI Safety Institute, Apollo Research, and METR.
The authors warn that CoT visibility may be fragile and must be preserved through collaborative research before advances in model architecture render it obsolete. As AI safety becomes a global concern, this paper represents a rare unified call from competing labs to prioritize transparency in how AI models reason, make decisions, and align with human values.




