Intelligence Explosion Simulator: Modeling Recursive AI Self-Improvement


Formula

I(t+1) = I(t) + \eta \cdot I(t)^{\alpha}
t_s \approx \frac{I_0^{1-\alpha}}{\eta \cdot (\alpha - 1)} \quad (\alpha > 1)
\text{For } \alpha = 1: I(t) = I_0 \cdot e^{\eta t}
\text{For } \alpha < 1: I(t) \sim (\eta(1-\alpha)t + I_0^{1-\alpha})^{\frac{1}{1-\alpha}}
The intelligence explosion hypothesis, first articulated by I.J. Good in 1965, remains one of the most consequential predictions in AI safety research. Good observed that a sufficiently intelligent machine could redesign itself to be even more intelligent, creating a feedback loop whose dynamics depend critically on a single parameter: the returns to cognitive investment. This simulator models recursive self-improvement using the difference equation I(t+1) = I(t) + η·I(t)^α, where I(t) is intelligence at time t, η is the improvement rate, and α is the returns exponent.

The exponent α encodes the fundamental question of AI takeoff speed. When α < 1, each unit of intelligence produces less than one unit of further improvement (diminishing returns). Growth is sublinear, roughly following a power law. This corresponds to the 'slow takeoff' scenario, in which society has decades to adapt.

When α = 1, returns are constant and growth is exponential, doubling at a fixed rate. This resembles Moore's Law extrapolation and is the implicit assumption in many economic growth models.

When α > 1, returns are increasing: smarter systems improve themselves faster than less-smart systems did. This produces hyperbolic growth that reaches mathematical infinity in finite time, the singularity at t_s ≈ I₀^(1-α)/[η·(α-1)]. In practice, physical constraints prevent actual infinity, but the growth rate can be fast enough to be effectively instantaneous by human standards. Yudkowsky's 'FOOM' describes this regime.

The critical insight is that the qualitative behavior changes discontinuously at α = 1. There is no smooth transition between 'manageable' and 'unmanageable': the boundary is a phase transition. This is why the debate between slow and fast takeoff camps is so difficult to resolve empirically: small uncertainties in α map to qualitatively different futures.
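The three regimes can be seen directly by iterating the difference equation. A minimal Python sketch; the parameter values (η = 0.01, I₀ = 1, the step budget, and the blow-up cap) are illustrative assumptions, not values taken from the simulator:

```python
def simulate(alpha, eta=0.01, I0=1.0, steps=1000, cap=1e12):
    """Iterate I(t+1) = I(t) + eta * I(t)**alpha.

    Stops early if intelligence exceeds `cap`, a stand-in for the
    finite-time blow-up of the alpha > 1 regime.
    """
    I = I0
    trajectory = [I]
    for _ in range(steps):
        I = I + eta * I ** alpha
        trajectory.append(I)
        if I > cap:
            break
    return trajectory

# Sublinear (alpha < 1), exponential (alpha = 1), hyperbolic (alpha > 1)
for alpha in (0.5, 1.0, 1.5):
    traj = simulate(alpha)
    print(f"alpha={alpha}: steps run={len(traj) - 1}, final I={traj[-1]:.3g}")
```

With these assumed parameters, only the α = 1.5 run hits the cap before exhausting its step budget, while the α = 0.5 run creeps along sublinearly, illustrating the phase change at α = 1.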

FAQ

What is an intelligence explosion?

An intelligence explosion is a hypothetical scenario where an AI system improves its own intelligence, creating a positive feedback loop. Each improvement makes the system better at making further improvements, potentially leading to superintelligence in a very short time. The concept was first proposed by I.J. Good in 1965.

What determines whether AI takeoff is slow or fast?

The key parameter is the returns exponent α in the recursive improvement equation I(t+1) = I(t) + η·I(t)^α. When α < 1, returns diminish and growth is sublinear (slow takeoff). When α = 1, growth is exponential. When α > 1, returns increase and growth is hyperbolic, reaching infinity in finite time — the 'FOOM' scenario described by Yudkowsky.

Is an intelligence explosion physically possible?

This is debated. Proponents (Bostrom, Yudkowsky) argue that software-based intelligence can recursively self-improve without hard physical limits. Skeptics (Pinker, Marcus) argue that diminishing returns, hardware bottlenecks, and the complexity of intelligence itself impose natural limits that keep α below 1.

What is the singularity time formula?

For the recursive improvement model I(t+1) = I(t) + η·I(t)^α with α > 1, the finite-time singularity occurs at approximately t_s = I₀^(1-α) / [η·(α-1)], where I₀ is the initial intelligence and η is the improvement rate.
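As a sanity check, the closed-form estimate can be compared against the step at which the discrete recursion actually blows past a large cap. A hedged sketch, where η, I₀, and the cap are illustrative assumptions:

```python
def singularity_time(alpha, eta, I0):
    """Continuous-time estimate t_s = I0^(1-alpha) / (eta * (alpha - 1)), valid for alpha > 1."""
    return I0 ** (1 - alpha) / (eta * (alpha - 1))

def steps_to_blowup(alpha, eta, I0, cap=1e12):
    """Iterate I(t+1) = I(t) + eta * I(t)**alpha until I exceeds `cap`."""
    I, t = I0, 0
    while I < cap:
        I += eta * I ** alpha
        t += 1
    return t

alpha, eta, I0 = 1.5, 0.01, 1.0
print("analytic t_s:", singularity_time(alpha, eta, I0))  # 200.0 for these values
print("discrete blow-up step:", steps_to_blowup(alpha, eta, I0))
```

The discrete recursion lags the continuous-time estimate slightly (a forward-Euler step undershoots a convex growth law), so the blow-up step lands somewhat after t_s rather than exactly on it.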

Sources

View source on GitHub