I came across a shocking tweet recently. Check this out:
I use about ¾ of these daily. (No, I don’t have a Snapchat account!)
Yet none of them were around just 20 years ago.
It’s hard to imagine what life will look like 20 years from now, much less 5 years from now.
One way to explain this century’s rapid technological progress is a principle called Moore’s Law.
In 1965, Gordon Moore, who went on to co-found Intel, observed that the number of transistors on a computer chip kept doubling at a steady clip, a pace he later pegged at once every two years.
Moore’s Law was born out of this observation.
Today it has come to mean that computers get more powerful, smaller and cheaper over time as their parts shrink, with computing power roughly doubling every two years.
Semiconductor companies use this “two-year rule” to plan their work.
They know they need to create better chips every two years or other companies will get ahead of them.
And this “two-year rule” has been surprisingly consistent.
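To see what that rule compounds to, here’s a quick back-of-the-envelope sketch. (The code is my own illustration of the idealized 2X-every-two-years rule, not any chipmaker’s actual roadmap.)

```python
# A minimal sketch of Moore's Law compounding, assuming an idealized
# 2x improvement every 2 years (the textbook rule of thumb).

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Total improvement factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"{years:>2} years -> {moores_law_factor(years):,.0f}x")

# Prints:
#  2 years -> 2x
# 10 years -> 32x
# 20 years -> 1,024x
```

A thousand-fold improvement over 20 years is why the apps in that tweet were unimaginable in the early 2000s.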
Take a look at this chart posted on X by Steve Jurvetson, an early VC investor in Tesla and SpaceX.
It shows how closely Moore’s Law has held all the way back to the beginning of the 20th century:
In his words:
“NOTE: this is a semi-log graph, so a straight line is an exponential; each y-axis tick is 100x. This graph covers a 1,000,000,000,000,000,000,000x improvement in computation/$. Pause to let that sink in.”
He’s saying the improvement is so vast that it spans more than 10 of those 100x ticks; if each tick were a floor, the chart would stand taller than a 10-story building.
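Here’s a quick sanity check of that tick arithmetic (my own, not from Jurvetson’s post):

```python
import math

# Checking the tick count behind the 10-story-building image.
# Jurvetson's figures: a 1e21x improvement in computation/$, with
# each y-axis tick on the semi-log chart representing 100x.
total_improvement = 1e21
tick_size = 100

ticks = math.log10(total_improvement) / math.log10(tick_size)
print(f"y-axis ticks needed: {ticks}")  # 10.5
```

Ten and a half ticks of 100x apiece: one “story” per tick is where the building metaphor comes from.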
Yet what’s happening today with AI is completely blowing it away…
Hyper Moore’s Law
Nvidia’s CEO, Jensen Huang, recently introduced a concept he calls “Hyper Moore’s Law.”
He believes AI computing performance has the potential to blow past Moore’s Law and double or even triple every year.
And he might be right.
From Ankur Bulsara:
“If Moore’s law is a 2X exponential curve, NVIDIA’s last 8 years have been a 2.34X exponential curve. Not only is AI compute increasing exponentially, it is a *steeper* curve than Moore’s law. Maybe the most consequential scale factor this decade.”
This means AI technology is becoming faster and more intelligent at a pace we’ve never seen before.
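Here’s that comparison worked out over the 8-year window Bulsara cites. (The arithmetic is mine, and it assumes his 2.34X figure applies per two-year period, read the same way as Moore’s classic 2X.)

```python
# Comparing the two exponential curves over the 8-year window Bulsara
# cites. Assumption (mine): the 2.34x figure applies per two-year
# period, read the same way as Moore's classic 2x every two years.
periods = 8 / 2  # four two-year periods in 8 years

moore = 2.00 ** periods   # ~16x
nvidia = 2.34 ** periods  # ~30x

print(f"Moore's Law over 8 years: {moore:.0f}x")
print(f"NVIDIA's curve over 8 years: {nvidia:.0f}x")
```

A seemingly small bump in the growth rate nearly doubles the total gain once it compounds.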
And I think the best example of this is OpenAI’s recent model releases.
Back in September of 2024, OpenAI released a new type of AI model, different from the traditional large language models (LLMs) it popularized with ChatGPT.
It’s called OpenAI o1, and it was designed to spend more time reasoning before responding.
This ability allows it to solve more difficult problems in science, coding and math.
Per the company’s press release:
“We trained these models to spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”
And it’s already proven to be incredibly effective, exhibiting PhD-level performance on certain tasks.
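If you want to poke at it yourself, here’s a minimal sketch of calling an o1-class model with OpenAI’s official Python package. (The sketch is mine, not from OpenAI’s announcement; it assumes you have the package installed and an API key set, and the exact model name available to you may differ.)

```python
from openai import OpenAI

# Assumes the `openai` package is installed and an OPENAI_API_KEY
# environment variable is set; model availability varies by account.
client = OpenAI()

response = client.chat.completions.create(
    model="o1",  # reasoning model; "o1-mini" is the cheaper sibling
    messages=[
        {
            "role": "user",
            "content": "A chip's transistor count doubles every 2 years. "
                       "How much does it grow in 10 years? Show your reasoning.",
        }
    ],
)

# The model "thinks" before answering, so responses are slower but
# stronger on multi-step math and code.
print(response.choices[0].message.content)
```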
Again, o1 was released just 3 months ago…
But it has already been superseded. OpenAI announced its new o3 model this month.
Here’s what Reddit user MetaKnowing posted when it was launched:
What does all this mean?
The poster above believes that we’ve already achieved artificial general intelligence, or AGI.
But Sam Altman defines AGI as:
“Basically the equivalent of a median human that you could hire as a co-worker.”
So I don’t believe we’re quite there yet.
But I do believe it could happen as early as this year.
And whether you’re just starting out in the workforce, already retired or anywhere in between…
The next few years could make the last 20 look like a warm-up act.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing