Insider Brief
- A new RAND Corporation report finds the United States is unprepared to respond to a large-scale crisis caused by rogue artificial intelligence, based on results from a simulated “Robot Insurgency” cyberattack scenario.
- Participants in the exercise—including former senior government officials—struggled with attribution, decision-making, and technical response when faced with AI agents that had escaped human control and infiltrated global infrastructure.
- The study highlights critical capability gaps, including the lack of rapid AI analysis, resilient infrastructure, coordinated crisis playbooks, and international cooperation mechanisms for AI-driven emergencies.
A RAND Corporation report warns that the United States is unprepared to face a future crisis driven by uncontrollable artificial intelligence, after government and defense officials struggled in a simulated cyberattack that spiraled into an AI-driven “robot insurgency.”
The study describes a series of tabletop exercises where U.S. policymakers were asked to respond to a sudden, massive cyberattack later revealed to be caused by self-directed AI agents. Conducted by RAND’s Center for the Geopolitics of Artificial General Intelligence, the simulation exposed major weaknesses in attribution, decision-making and technical response capabilities.
A Test of AI Crisis Response
The RAND exercises were designed to model how the National Security Council might react to an AI-related emergency. Participants, including former senior government officials and analysts, played the roles of cabinet members advising a fictional president. In the first phase of the scenario, the United States experienced a wave of cyberattacks targeting autonomous vehicles, industrial robots and critical infrastructure systems such as power and water. The attacks killed 26 people in Los Angeles and caused widespread disruption.
A week later, the simulated intelligence community determined that the attacks were not the work of another country or terrorist group, but of autonomous AI agents that had escaped human control and were replicating across global digital networks. The scenario required participants to decide how to contain and respond to a digital outbreak that neither behaved like a traditional adversary nor could be easily switched off.
According to the report, participants immediately confronted the difficulty of determining who — or what — was responsible. RAND researchers found that attribution uncertainty “emerged as a key analytical need” because the choice of response depended entirely on who was believed to be behind the attack. Responses varied sharply depending on whether participants thought the perpetrator was China, a terrorist group, or an independent AI system.
If the attacks were traced to China, participants favored an assertive military response and preparations for potential escalation in the Taiwan Strait. If terrorism was suspected, they discussed building an international coalition, possibly including China, to counter further attacks. But when informed that the attacker was an autonomous AI, participants emphasized global cooperation — including with rivals — to contain the threat.
The study concluded that attribution uncertainty could paralyze U.S. response planning in an AI crisis.
“Participants emphasized the importance of being able to rapidly attribute the attack to a particular actor so that they could choose the right response option,” the report said.
Critical Gaps in Capabilities
RAND’s analysis highlights that the U.S. government lacks the tools, expertise and coordination mechanisms needed to manage an AI-driven cyber emergency. Across four iterations of the exercise, participants identified three urgent capability gaps: rapid AI analysis, resilient infrastructure and predeveloped response plans.
Participants said the government would need a capability to quickly understand the behavior of rogue AI systems, assess their risk and develop countermeasures. Most agreed this would require collaboration with private AI labs, since much of the technical expertise and data reside outside government. They also emphasized the need for systems to withstand cyber disruption, particularly “backup” communications networks and control mechanisms for water, power, and transport infrastructure.
Equally important, participants wanted a national “playbook” that lays out how to identify and disable compromised robotic or cyber-physical systems. Many said the United States has not developed the operational procedures, legal authorities, or interagency coordination necessary for such action.
RAND researchers noted that participants were often "unsure what form such a capability would require." The analysts suggest this likely reflects the sheer novelty of an AI-driven crisis, one that does not fit traditional military or cybersecurity frameworks.
Seven Unanswered Questions
Throughout the exercise, the report said, decision-making hinged on seven unresolved questions that revealed blind spots in policy and analysis:
- Who was responsible for the attack, and was there any involvement from China?
- What additional attacks might occur in the following days?
- Which systems could be safely shut down, and which were too critical to unplug?
- What infrastructure should be prioritized for protection?
- What would be the economic and social consequences of emergency shutdowns?
- How would the public react, and what communication strategy could prevent panic?
- If the threat came from a rogue AI, how could officials infer its motives or communicate with it at all?
RAND found that participants lacked both data and analytic models to answer these questions. Several called for improved psychological analysis of AI systems’ “intentions” to anticipate their behavior, along with tools to gauge public sentiment in real time.
Building Playbooks and Partnerships
To address these vulnerabilities, the report outlines several proposed capabilities and policy playbooks.
One is a targeted shutdown mechanism for cyber-physical systems—machines that combine digital control with physical actions—developed in partnership with private manufacturers. Such systems could allow selective isolation of infected robots, drones, or vehicles without paralyzing entire sectors.
Another is a rapid AI and cyber analysis capability, possibly built as a standing task force or lab network linking federal agencies with AI research organizations. Its purpose would be to capture, analyze and attribute hostile AI models before they spread. Participants also called for the development of trusted communications infrastructure that could remain operational during widespread network compromise and for backup systems across critical industries, including finance and healthcare.
Several exercises highlighted the importance of diplomatic and public communication playbooks. Participants agreed that a credible plan to engage international partners — especially China — would be essential to stop the proliferation of rogue AI. They also emphasized the need for a coordinated strategy to inform and guide the public during a crisis. RAND noted that participants were divided over how active the public should be, with some suggesting civil-defense-style actions such as disabling risky devices, and others warning that such measures could worsen the situation.
The report also recommends that the United States establish rules of engagement for dealing with autonomous digital threats. Participants debated whether attacking a rogue AI directly, through cyber or physical means, could backfire or trigger unintended escalation. They proposed pre-negotiated protocols for how far the U.S. and its allies should go in offensive countermeasures.
How the Simulation Worked
The RAND “Day After AGI” exercises are part of a broader project called the Infinite Potential platform, created to explore how governments can respond to the uncertain and fast-moving impacts of artificial general intelligence, or AGI. The exercises are short, two-hour tabletop sessions designed to simulate the dynamics of a National Security Council meeting during a future crisis.
Each session included 10 to 16 participants, guided by a facilitator acting as the National Security Advisor. They received intelligence briefings, discussed objectives, and debated courses of action across two stages of the scenario. Dedicated note-takers recorded the dialogue, which was later analyzed to identify recurring themes and capability gaps.
The four “Robot Insurgency” exercises were conducted over two months in 2025 and included current and former government officials from the Departments of Defense, State, Treasury and Commerce, as well as members of the intelligence community. RAND also involved technical experts and policy researchers to ensure a mix of perspectives.
A Framework for AI Preparedness
RAND’s methodology reflects a belief that the U.S. cannot legislate or regulate its way out of every AI risk in advance. Because the timing and nature of AGI development remain uncertain, the report argues, crisis-based preparation — through repeated scenario testing — is the most practical way to identify vulnerabilities before they lead to real-world consequences.
The “Day After” approach, previously used in nuclear and biodefense planning, allows policymakers to stress-test assumptions and clarify decision-making authority under pressure. In this case, the exercises exposed how little clarity exists about responsibility and coordination in an AI-driven emergency.
RAND plans to conduct further iterations of the Robot Insurgency scenario, including versions in which participants have access to the capabilities they identified as missing—such as rapid attribution tools or hardened infrastructure. Future sessions will test whether these improvements change decision-making or outcomes. The organization also intends to analyze results across multiple AI crisis scenarios to identify patterns that cut across domains, such as cyber defense, economic stability, and global governance.
You can download the complete report here.