The development of advanced AI is not occurring in a vacuum. Multiple actors — major technology companies, national AI programs, and well-funded startups — are racing to build increasingly capable systems. This competitive dynamic fundamentally shapes the risk landscape, transforming what might be a manageable technical challenge into a dangerous coordination problem.
Armstrong, Bostrom, and Shulman (2016) formalized this as a 'race to the precipice' model. Each actor faces a choice: invest in safety (reducing individual risk but slowing development) or cut corners (increasing risk but accelerating progress). In competitive equilibrium, actors systematically under-invest in safety because they bear only a fraction of the total risk while capturing all of the benefit from being first.
This simulator models the dynamics using a multi-actor risk model. Each actor i has an individual catastrophic risk r_i = base_risk * (1 - safety_i) * speed_i * (1 - coordination). Assuming actors' failures are independent, the total catastrophe probability is P(catastrophe) = 1 - Product(1 - r_i). This compounding is the key insight: every additional actor pushes system-wide risk monotonically upward, toward certainty as actors multiply, even though each increment is smaller than the newcomer's individual risk alone would suggest.
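The risk model above can be sketched in a few lines of Python. This is a minimal illustration, not the simulator's actual code; the function names and the parameter values are ours:

```python
def individual_risk(base_risk, safety, speed, coordination):
    """r_i = base_risk * (1 - safety_i) * speed_i * (1 - coordination)."""
    return base_risk * (1 - safety) * speed * (1 - coordination)

def total_risk(risks):
    """P(catastrophe) = 1 - Product(1 - r_i), assuming independent failures."""
    p_all_safe = 1.0
    for r in risks:
        p_all_safe *= 1 - r
    return 1 - p_all_safe

# Three actors under the same race conditions but different safety investments.
risks = [individual_risk(0.1, s, speed=1.2, coordination=0.3)
         for s in (0.8, 0.5, 0.2)]
print(f"{total_risk(risks):.1%}")
```

Note that `total_risk` multiplies survival probabilities rather than summing risks, which is exactly why the totals in the text come out below the naive sum.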
Coordination enters the model as a multiplicative risk reducer. Unlike safety investment (which each actor pays for individually), coordination reduces risk for all actors simultaneously. This makes it by far the highest-leverage intervention: a modest increase in coordination level produces larger risk reductions than equivalent increases in individual safety investment.
The race track visualization makes the dynamics intuitive. Actors progress toward a finish line at speeds proportional to race pressure. Their colors shift from green (safe) to red (dangerous) as individual risk increases. Coordination is shown as lines connecting actors — more coordination means more lines and more shared information. The thermometer on the right integrates all individual risks into a single system-wide probability.
The model reveals several non-obvious results. First, adding actors always increases total risk, even if the new actors are cautious. Second, reducing race pressure is often more effective than increasing safety investment, because pressure raises every actor's speed simultaneously, whereas a safety investment lowers only the investing actor's risk. Third, there is a critical coordination threshold below which individual safety investment is essentially futile — the system is in a 'race to the bottom' regime where collective dynamics overwhelm individual prudence.
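The first result can be checked directly against the compounding formula. The numbers below are illustrative, not the simulator's defaults:

```python
def total_risk(risks):
    """P(catastrophe) = 1 - Product(1 - r_i) for independent actors."""
    p_all_safe = 1.0
    for r in risks:
        p_all_safe *= 1 - r
    return 1 - p_all_safe

# Three actors at 5% individual risk; then a fourth, very cautious
# actor at 1% joins the race.
before = total_risk([0.05, 0.05, 0.05])
after = total_risk([0.05, 0.05, 0.05, 0.01])
print(before < after)  # True: even a cautious entrant raises total risk
```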
These findings have direct policy implications. Effective AI governance must operate at the coordination level (international standards, safety treaties, information sharing) rather than relying solely on individual actors' voluntary safety investments. The gap between the Nash equilibrium (what actors choose individually) and the social optimum (what minimizes collective risk) is the fundamental challenge of AI governance.
FAQ
What is the AI race problem?
The AI race problem is a collective action failure where multiple actors (companies, nations) compete to develop advanced AI first. This competition creates pressure to cut safety corners for speed, increasing the probability of catastrophic outcomes. It is structurally similar to an arms race: each actor's rational decision to accelerate creates a collectively irrational outcome.
How does the number of actors affect AI risk?
Total catastrophe probability follows the formula P = 1 - Product(1 - risk_i). With independent actors, this means risk compounds: 5 actors each with 5% individual risk produce a combined risk of 1 - (0.95)^5 = 22.6%, not 25%. More actors always increase total risk, even if each individual actor is relatively safe.
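The arithmetic in this answer can be reproduced in one line:

```python
# 5 independent actors, each with 5% individual catastrophic risk.
p = 1 - (1 - 0.05) ** 5
print(f"{p:.1%}")  # 22.6%
```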
What role does international coordination play in AI safety?
Coordination reduces risk multiplicatively across all actors — it effectively reduces each actor's base risk by the coordination factor. This makes coordination the highest-leverage intervention: a 10% increase in coordination affects all actors simultaneously, while a 10% increase in one actor's safety investment only affects that actor. Real-world coordination mechanisms include safety standards (ISO), treaty frameworks, information-sharing agreements, and joint auditing protocols.
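The leverage gap described here can be illustrated with a small comparison under the model's risk formula. The parameter values are ours, chosen for illustration:

```python
def risk(base, safety, speed, coord):
    """r_i = base * (1 - safety) * speed * (1 - coord)."""
    return base * (1 - safety) * speed * (1 - coord)

def total(risks):
    """P(catastrophe) = 1 - Product(1 - r_i)."""
    out = 1.0
    for r in risks:
        out *= 1 - r
    return 1 - out

# Baseline: five identical actors.
base = [risk(0.1, 0.5, 1.0, 0.2) for _ in range(5)]
# Option A: raise ONE actor's safety investment by 0.1.
opt_a = [risk(0.1, 0.6, 1.0, 0.2)] + [risk(0.1, 0.5, 1.0, 0.2)] * 4
# Option B: raise coordination by 0.1, which applies to ALL actors.
opt_b = [risk(0.1, 0.5, 1.0, 0.3) for _ in range(5)]

print(f"baseline {total(base):.1%}, safety {total(opt_a):.1%}, "
      f"coordination {total(opt_b):.1%}")
```

With these numbers, the same 0.1 increment cuts total risk noticeably more when spent on coordination than when spent on a single actor's safety, because the coordination factor appears in every actor's risk term.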
What is the optimal level of safety investment in an AI race?
The optimal safety investment depends on race pressure, coordination level, and number of actors. In general, the social optimum requires higher safety investment than any individual actor would choose voluntarily, because each actor bears only a fraction of the total risk. Regulatory frameworks can close this gap by mandating minimum safety standards.