Over the past two weeks, unusual things have been happening at one of the most important companies in artificial intelligence.

First, Anthropic accidentally exposed part of its proprietary Claude Code system in a public release.

A few days later, it confirmed the existence of a new model… that it’s not going to release.

At the same time, the company is fighting with the Pentagon over how its models can be used, while struggling to keep up with the demand they’re creating.

Individually, these stories are some of the strangest headlines to come out of AI in a long time.

Together, they describe something much bigger.

They show what happens when AI systems become powerful enough that even the companies building them can’t fully control them.

A Crazy Fortnight

Anthropic’s strange couple of weeks started when developers noticed something odd in a recent Claude Code release.

A file had been included that shouldn’t have been there.

Now, that’s not an unusual occurrence. What makes this situation different is what the file pointed to.

It gave outside developers a path into Anthropic’s internal codebase.

Naturally, they followed it. And once they did, it became clear they were looking at roughly half a million lines of code spread across nearly 2,000 files.

It was enough to map out how Claude Code actually works.

The leak didn’t stay contained. Copies of the code started circulating and were quickly mirrored across multiple repositories.


To be clear, this wasn’t just the surface-level code that handles simple requests or connects to outside services.

It was the layer that lets the system use tools, move between tasks and interact with other software. In other words, the part that actually gets work done.

And as developers read through it, they came to a stunning realization.

Claude Code wasn’t designed to sit idle and wait for instructions. It was built to monitor activity, track changes and act based on what it observes over time.

That means it doesn’t just wait for commands. It decides when to act.

That tells me today’s AI is moving a lot closer to the original concept of artificial general intelligence (AGI).

A New Model, But Not For You

A few days later, Anthropic dropped another shocking piece of news when it confirmed that it built a new model called Claude Mythos Preview.

But Anthropic isn’t releasing this model. It’s containing it.


Anthropic says Mythos is powerful enough to be misused, particularly in cybersecurity, where it can identify and exploit weaknesses in software.

So, through an initiative called Project Glasswing, the company is only giving access to a controlled group of more than 40 organizations. That list includes major technology companies, infrastructure providers and security firms.

The goal is for these entities to use the model to find vulnerabilities and fix them before someone else does.

According to Anthropic, Mythos has already identified thousands of bugs across widely used systems, including issues that had gone undetected for decades.

One example was a 27-year-old flaw in OpenBSD, software specifically designed to be difficult to break. Another was buried in code that had been scanned millions of times without triggering any alerts.

Just one year ago, AI was being pitched as a coding assistant. Now it’s being used to find flaws in the code itself and, in some cases, figure out how to exploit them.

These capabilities are arriving earlier than most people expected.

Meanwhile, Anthropic is dealing with pressure from multiple directions.

The company has been in an ongoing dispute with the Pentagon after being labeled a supply-chain risk. A federal judge initially blocked that designation, but last Wednesday a court declined to keep that block in place.


Yet demand for Anthropic’s products is exploding.

The company effectively tripled its revenue in just four months, climbing to more than $30 billion as companies rush to adopt its tools.


By some estimates, it’s now pulling ahead of OpenAI among business customers.

That demand is being driven largely by coding, the same capability now showing up in these more advanced and potentially more dangerous use cases.

But as usage grows, the systems running Claude are starting to feel the strain, including a recent outage that disrupted access.

These stories make it clear that this isn’t a company simply having a chaotic few weeks.

They show what happens when AI technology starts moving faster than the people building it can manage.

Here’s My Take

Taken on their own, the past two weeks at Anthropic look like a mix of unrelated events.

Put them together, and a clear pattern starts to emerge.

Models like Mythos aren’t an outlier. AI systems are getting more powerful, especially in areas like coding and security.

At the same time, the companies building them are starting to lose control over how those systems are used and released.

This could mean that the gap between what leading models can do and what is publicly available will continue to widen, as companies try to manage the risks of releasing increasingly powerful systems.

But even as the risks of AI become clearer, adoption isn’t slowing down. It’s speeding up.

Which means the next phase of AI won’t just be defined by what these systems can do…

It will be defined by how quickly they’re released before anyone fully understands the consequences.

Regards,

Ian King
Chief Strategist, Banyan Hill Publishing

Editor’s Note: We’d love to hear from you!

If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to dailydisruptor@banyanhill.com.

Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!