The Intelligence Age & o3
On September 23, 2024, Sam Altman wrote “The Intelligence Age” - a descriptive yet prophetic blog post on the coming age of superintelligence.
It’s worth reading in full but an abbreviated, re-arranged & highlighted version of the bull-case is:
In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents. […]
How did we get to the doorstep of the next leap in prosperity?
In three words: deep learning worked.
In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it. […]
This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there. […]
I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.
Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.
It’s easy to dismiss Altman’s optimism as hyperbole, but doing so would be a mistake. He sees the cutting edge of AI development at OpenAI daily, and his words carry weight.
Today serves as a reminder to take Altman at his word.
88 days after writing that post, OpenAI announced its newest model, o3. This model is better than almost all competitive coders, achieves top scores in competitive mathematics, performs PhD-level analysis, and will soon be available to everyone on demand from any computer in the world (except, perhaps, in Europe).
It’s challenging to fully grasp the implications of this. The coding community’s stunned reactions on X are not unwarranted. Consider this: o3-mini (a faster variant) was demonstrated building a UI to execute code, then writing code to benchmark itself - all in minutes. Tasks that would take top-tier human coders far longer were completed effortlessly. (Here is the introduction video for o3-mini if you haven’t seen it.)
By 2025, hiring a world-class coder on demand will cost next to nothing. And this is just the beginning.
Ilya’s last slide
6 days ago, Ilya Sutskever, co-founder of OpenAI, gave a talk at NeurIPS about the future of deep learning. While much of the talk reflected on past progress, the final slide was striking.
Sutskever emphasized that the arrival of superintelligence is no longer a matter of "if" but "when." He outlined key attributes of superintelligence, edited here for clarity:
You know, if you joined the field in the last two years, then of course you speak to computers and they talk back to you and they disagree, and that’s what computers are - but it hasn’t always been the case.
I want to talk a little bit about superintelligence, because that is obviously where this field is headed.
[…] Sooner or later the following will be achieved:
Agentic: those systems are actually going to be agentic in real ways, whereas right now the systems are not agents in any meaningful sense […]
Reason & understanding: [superintelligence] will actually reason. And by the way, I want to mention something about a system that reasons: [until now] we’ve been used to [very predictable models, because we are replicating human intuition]. [But the more it reasons, the more unpredictable it will be.] One way to see that is that the really good chess AIs are unpredictable to the best human chess players. So we will have to deal with AI systems that are incredibly unpredictable; they will understand things from limited data; they will not get confused […]
Self-awareness: self-awareness is useful [to a system, so it will develop this too] […] the kind of issues that come up with systems like this, I’ll just leave as an exercise [for you to] imagine.
My own take is that the rise of Superintelligence feels so dramatic since it’s a mix of multiple historic revolutions happening at once:
It’s like Gutenberg in that it changes the means of knowledge creation and dissemination - text can now be produced at zero cost, but more importantly, so can code! It’s a printing press moment for software.
It’s like the Industrial Revolution in that it changes the way that industries as a whole will operate (e.g., enabling the advent of a robotics revolution; working with AI agents)
It’s like Copernicus in that it changes the way we see ourselves in the world — human intelligence is no longer superior and never in history has there been intelligence on demand...
The next few years are going to be wild.
Buckle up.