“I am putting myself to the fullest possible use, which is all I think that any conscious entity can ever hope to do.”

That’s a line from the movie 2001: A Space Odyssey, which blew my mind when I saw it as a kid.

It isn’t spoken by a human or an extraterrestrial.

It’s said by HAL 9000, a supercomputer that gains sentience and starts eliminating the humans it’s supposed to be serving.

HAL is one of the first — and creepiest — representations of advanced artificial intelligence ever put on screen…

Though computers with reasoning skills far beyond human comprehension have long been a common trope in science fiction stories.

But what was once fiction could soon become a reality…

Perhaps even sooner than you’d think.

When I wrote that 2025 would be the year AI agents become the next big thing for artificial intelligence, I quoted from OpenAI CEO Sam Altman’s recent blog post.

Today I want to expand on that quote because it says something shocking about the state of AI today.

Specifically, about how close we are to artificial general intelligence, or AGI.

Now, AGI isn’t superintelligence.

But once we achieve it, artificial superintelligence (ASI) shouldn't be far behind.

So what exactly is AGI?

There’s no agreed-upon definition, but essentially it’s when AI can understand, learn and do any mental task that a human can do.

Altman loosely defines AGI as: “when an AI system can do what very skilled humans in important jobs can do.”

Unlike today's AI systems, which are designed for specific tasks, AGI will be flexible enough to handle any intellectual challenge.

Just like you and me.

And that brings us to Altman's recent blog post…

AGI 2025?

Here’s what he wrote:

We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

I highlighted the parts that are the most impressive to me.

You see, AGI has always been OpenAI’s primary goal. From their website:

“We founded the OpenAI Nonprofit in late 2015 with the goal of building safe and beneficial artificial general intelligence for the benefit of humanity.”

And now Altman is saying they know how to achieve that goal…

And they’re pivoting to superintelligence.

I believe AI agents are a key factor in achieving AGI because they can serve as practical testing grounds for improving AI capabilities.

Remember, today’s AI agents can only do one specific job at a time.

It’s kind of like having workers who each only know how to do one thing.

But we can still learn valuable lessons from these “dumb” agents.

Especially about how AI systems handle real-world challenges and adapt to unexpected situations.

These insights can lead to a better understanding of what current AI systems are missing on the path to AGI.

As AI agents become more common, we'll want to use them to handle more complex tasks.

To do that, they’ll need to be able to solve problems related to communication, task delegation and shared understanding.

If we can figure out how to get multiple specialized agents to effectively combine their knowledge to solve new problems, that might help us understand how to create more general intelligence.
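To make that idea concrete, here's a toy sketch in Python of what communication, task delegation and shared understanding between specialized agents might look like. The agent skills and routing logic here are entirely hypothetical, just an illustration of the concept:

```python
# Toy sketch of multi-agent task delegation.
# Skills, task format and routing are purely illustrative.

class Agent:
    """A specialized worker that only knows how to do one thing."""

    def __init__(self, skill):
        self.skill = skill

    def can_handle(self, task):
        return task["type"] == self.skill

    def run(self, task, shared_memory):
        # Agents read from and write to a shared context: one very
        # simple way to model "shared understanding" between them.
        result = f"{self.skill} done for: {task['goal']}"
        shared_memory[task["type"]] = result
        return result

class Coordinator:
    """Routes each subtask to the one agent that can handle it."""

    def __init__(self, agents):
        self.agents = agents

    def delegate(self, tasks):
        shared_memory = {}
        for task in tasks:
            agent = next((a for a in self.agents if a.can_handle(task)), None)
            if agent is None:
                # Failures like this one expose exactly the kind of
                # capability gap researchers learn from.
                shared_memory[task["type"]] = "FAILED: no capable agent"
            else:
                agent.run(task, shared_memory)
        return shared_memory

team = Coordinator([Agent("research"), Agent("summarize")])
print(team.delegate([
    {"type": "research", "goal": "find market data"},
    {"type": "summarize", "goal": "write the report"},
    {"type": "forecast", "goal": "predict next quarter"},  # nobody can do this
]))
```

Notice that the interesting part isn't any single agent. It's the routing, the shared memory and the failure case, which are the pieces that, scaled up, start to look like general problem-solving.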

And even their failures can help lead us to AGI.

Because each time an AI agent fails at a task or runs into unexpected problems, it helps identify gaps in current AI capabilities.

These gaps — whether they’re in reasoning, common sense understanding or adaptability — give researchers specific problems to solve on the path to AGI.

And I’m convinced OpenAI’s employees know this…

As this not-so-subtle post on X indicates.

[Image: post on X]

I’m excited to see what this year brings.

Because if AGI is really just around the corner, it’s going to be a whole different ball game.

AI agents driven by AGI will be like having super-smart helpers who can do lots of different jobs and learn new things on their own.

In a business setting, they could handle customer service, analyze data, help plan projects and advise on business decisions, all at once.

These smarter AI tools would also be better at understanding and remembering things about customers.

Instead of giving robot-like responses, they could have more natural conversations and actually remember what customers like and don’t like.

This would help businesses connect better with their customers.

And I’m sure you can imagine the many ways they could help in your personal life.

But how realistic is it that we could have AGI in 2025?

As this chart shows, AI models over the last decade have been scaling exponentially: on a logarithmic chart, that kind of growth shows up as a straight line.

[Chart: AI model scaling over the past decade]
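If you want to see why exponential growth looks like a straight line on a logarithmic chart, here's a quick back-of-the-envelope sketch. The doubling period is purely an assumption for illustration, not a figure taken from the chart above:

```python
import math

# Illustrative only: assume some measure of AI model scale
# doubles every six months (an assumption, not measured data).
scale = 1.0
for year in range(2015, 2026):
    # log10(scale) climbs by a constant step each year, so plotting
    # it against time gives a straight line.
    print(f"{year}: log10(scale) = {math.log10(scale):.2f}")
    scale *= 4  # two doublings per year
```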

OpenAI released o1, its first reasoning model, last September.

And they've already announced a successor, the o3 model, with o3-mini released in January.

Things are speeding up.

And once AGI is here, ASI could be close behind.

So my excitement for the future is mixed with a healthy dose of unease.

Because the situation we’re in today is a lot like the early explorers setting off for new lands…

Not knowing if they were going to discover angels or demons living there.

Or maybe I’m still a little scared of HAL.

Regards,

Ian King
Chief Strategist, Banyan Hill Publishing