Intelligence Explosion Simulator: Modeling Recursive AI Self-Improvement
Formula
Recursive improvement: I(t+1) = I(t) + η·I(t)^α
Singularity time (α > 1): t_s ≈ I₀^(1-α) / [η·(α-1)]
Exponential growth (α = 1): I(t) = I₀·e^(ηt)
Sublinear growth (α < 1): I(t) ≈ (η·(1-α)·t + I₀^(1-α))^(1/(1-α))
FAQ
What is an intelligence explosion?
An intelligence explosion is a hypothetical scenario where an AI system improves its own intelligence, creating a positive feedback loop. Each improvement makes the system better at making further improvements, potentially leading to superintelligence in a very short time. The concept was first proposed by I.J. Good in 1965.
What determines whether AI takeoff is slow or fast?
The key parameter is the returns exponent α in the recursive improvement equation I(t+1) = I(t) + η·I(t)^α. When α < 1, returns diminish and growth is sublinear (slow takeoff). When α = 1, growth is exponential. When α > 1, returns increase and growth is hyperbolic, reaching infinity in finite time — the 'FOOM' scenario described by Yudkowsky.
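The three regimes can be seen by iterating the map directly. A minimal sketch (the values of η, I₀, the step budget, and the blow-up cap below are illustrative assumptions, not values from the simulator):

```python
# Iterate I(t+1) = I(t) + eta * I(t)**alpha and report how each regime behaves.
def simulate(alpha, eta=0.01, I0=1.0, steps=1000, cap=1e12):
    """Return the trajectory, stopping early if I exceeds `cap` (a blow-up proxy)."""
    I, trajectory = I0, [I0]
    for _ in range(steps):
        I += eta * I**alpha
        trajectory.append(I)
        if I > cap:
            break
    return trajectory

for alpha in (0.5, 1.0, 1.5):
    traj = simulate(alpha)
    print(f"alpha={alpha}: ran {len(traj) - 1} steps, final I = {traj[-1]:.3g}")
```

With these parameters the α = 0.5 run grows polynomially, the α = 1.0 run grows exponentially, and the α = 1.5 run blows past the cap long before the step budget is exhausted, mirroring the hyperbolic 'FOOM' regime.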
Is an intelligence explosion physically possible?
This is debated. Proponents (Bostrom, Yudkowsky) argue that software-based intelligence can recursively self-improve without hard physical limits. Skeptics (Pinker, Marcus) argue that diminishing returns, hardware bottlenecks, and the complexity of intelligence itself impose natural limits that keep α below 1.
What is the singularity time formula?
For the recursive improvement model I(t+1) = I(t) + η·I(t)^α with α > 1, the finite-time singularity occurs at approximately t_s = I₀^(1-α) / [η·(α-1)], where I₀ is the initial intelligence and η is the improvement rate.
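This estimate can be checked numerically. A sketch, assuming illustrative values for η and I₀ and using a large cap as a stand-in for divergence (the formula comes from the continuous-time limit dI/dt = η·I^α, so the discrete map agrees only approximately):

```python
# Compare the predicted singularity time t_s = I0**(1-alpha) / (eta*(alpha-1))
# with the step at which the discrete map I(t+1) = I(t) + eta*I(t)**alpha
# exceeds a large cap.
def blowup_step(alpha, eta, I0, cap=1e15, max_steps=10**6):
    I = I0
    for t in range(1, max_steps + 1):
        I += eta * I**alpha
        if I > cap:
            return t
    return None  # no blow-up within the step budget

alpha, eta, I0 = 1.5, 0.001, 1.0
t_s = I0**(1 - alpha) / (eta * (alpha - 1))
step = blowup_step(alpha, eta, I0)
print(f"predicted t_s = {t_s:.0f}, map exceeds cap at step {step}")
```

With η this small the discrete map tracks the continuous solution closely, so the observed blow-up step lands near the predicted t_s = 2000, slightly after it because the forward-stepped map lags the true solution.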