Nick Bostrom, the Oxford philosopher who wrote Superintelligence in 2014, has reframed the argument he has been making for more than a decade. In a recent long-form interview and an accompanying 2026 paper, 'Optimal Timing for Superintelligence: Mundane Considerations for Existing People,' Bostrom shifts the central question of AI safety from whether humanity should build superintelligence to when it is optimal to do so.
The argument runs in two parts. First, advances in AI will produce systems that exceed human cognitive performance across virtually every domain — Bostrom's long-standing definition of superintelligence — and the only way humans retain meaningful agency is by understanding the decision-making procedures of the systems they live alongside. Interpretability, in this frame, isn't a research nicety; it's the prerequisite for not being managed by a mind that can outthink you.
Second, and this is the new move, Bostrom argues that the optimal arrival time for superintelligence is not 'as late as possible' or 'as early as possible' but somewhere in between, calibrated against the lifespans of currently living humans, the rate of progress on alignment, and the geopolitical risk window. Push superintelligence too late and the people alive today never see the upside. Push it too early and it arrives before the alignment toolkit does.
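To make the shape of that tradeoff concrete, here is a minimal toy model of the timing problem. The symbols and functional forms (an arrival time T, a survival curve S(T) for people alive today, an alignment-failure probability p(T), a benefit B and a downside D) are illustrative assumptions for this piece, not the formalism in Bostrom's paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Toy expected-value model of the arrival time T of superintelligence
% (an illustrative sketch, not Bostrom's own formalism).
%   S(T): fraction of people alive today still alive at T (falls as T grows)
%   p(T): probability of alignment failure if arrival happens at T
%         (falls as T grows, because alignment research accumulates)
%   B:    value to existing people of an aligned superintelligence
%   D:    loss if alignment fails
\[
  V(T) \;=\; S(T)\,\bigl(1 - p(T)\bigr)\,B \;-\; p(T)\,D
\]
% An interior optimum T* satisfies V'(T*) = 0: delaying buys a lower
% failure probability p(T) but forfeits benefit through mortality S(T).
% "Too late" and "too early" are the two ways V(T) falls off.
\end{document}
```

In this stripped-down version, the 'somewhere in between' is simply the interior maximum that balances the mortality clock against the alignment clock.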
The political reading is striking. Bostrom is widely associated with cautionary, slow-down arguments, and the new paper is explicitly more hurry-up than his earlier work — closer to the 'don't waste the lives of people alive now' framing that has become central to the effective accelerationist camp. That a foundational figure in the AI-safety canon is publishing optimal-timing math rather than precautionary-principle arguments is itself a signal of how the intellectual frontier of the field has moved in 2026.