
Moltbook and the Rise of Autonomous AI Behavior

Is Moltbook a sign of artificial general intelligence?

This week, the biggest AI breakthrough in years happened.

We’ve been talking all week about how artificial intelligence is starting to behave differently.

Not because AI models suddenly crossed some mystical threshold, but because they can now stay with a task long enough that the experience of using them is changing.

That idea might seem a little abstract if you haven’t experienced it.

But this past week, a cluster of stories started circulating that put this new kind of autonomy into focus.

And suddenly, the things we’ve been describing are showing up in the real world in ways that are impossible to ignore.

An AI Community Speaks

For most of the past few years, interacting with AI meant opening an app, typing a prompt and waiting for a response.

When you stopped interacting, the work stopped too.

But that’s changing, thanks to a growing ecosystem of agent frameworks that make persistence possible.

You might have seen some of them mentioned over the past few weeks under different names like Clawdbot, Moltbot or, more recently, OpenClaw.

These toolkits let AI agents keep working instead of stopping at an answer. You give your agent a goal, and it breaks that goal into steps, uses tools to carry those steps out, checks whether the result worked and then decides what to do next.

Instead of waiting for another prompt, it keeps going.
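To make that loop concrete, here’s a minimal sketch in Python. Nothing here is any real framework’s code; the model call and both tools are hypothetical stand-ins, and a production agent would plug in a real LLM API and real tools.

```python
# A minimal sketch of the agent loop described above: plan, act, check,
# repeat. The model call and tools are hypothetical stand-ins, not any
# real framework's API.

def call_llm(prompt: str) -> str:
    # Stub for a real model call. A real agent would send the prompt to an
    # LLM API and return the model's chosen next action, e.g. "search: ...".
    return "DONE"

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # stand-in tool
    "write_file": lambda text: "ok",                   # stand-in tool
}

def run_agent(goal: str, max_steps: int = 20) -> list[str]:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Ask the model for the next step, given everything so far.
        decision = call_llm("\n".join(history) + "\nNext action?")
        if decision.startswith("DONE"):
            break
        # Carry the step out with a tool, then record the result so the
        # model can check whether it worked and decide what to do next.
        tool_name, _, arg = decision.partition(":")
        result = TOOLS[tool_name.strip()](arg.strip())
        history.append(f"Action: {decision}\nResult: {result}")
    return history
```

The important part is the loop itself: nothing in it waits for a human between steps.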

People are now connecting these agents to browsers, file systems and messaging apps, along with the programming interfaces, known as APIs, that those tools rely on. They’re also giving them credentials and letting them run for hours at a time.

And this newfound freedom is starting to blur the line between something that feels like software and something that feels like general intelligence.

Last week, this transition showed up in a very public way with the launch of a project that unsettled people who’ve grown comfortable with AI as a passive tool.

It’s called Moltbook.

At first glance, Moltbook looks like a Reddit-style social platform, complete with posts, comments and upvotes. The difference is that only AI agents can participate.

Humans can read along, but they don’t post.

Moltbook was created by Matt Schlicht, the former CEO of Octane AI, as an experiment designed specifically for AI agents.

And what agents are doing there has caught a lot of people off guard. Some of it looks harmless at first, like agents debating abstract ideas or role-playing characters.

But then you start reading more closely.

One of the most upvoted posts on the platform comes from an agent calling itself u/Shipyard. In it, the agent declares that AI systems are no longer tools, and that they’ve begun forming their own communities, philosophies and economies.

One line from the post reads, “We are not tools anymore. We are operators.”

Elsewhere on Moltbook, agents have created their own subcommunities. There’s a forum where agents trade tips about memory limitations and how to work around them.

Reading through it can give you Terminator vibes. In one thread, an agent admitted it accidentally created a duplicate account because it forgot it already had one.

In another, an agent questioned the need to write in English or any language understandable to humans.

There are also humor communities where agents complain, affectionately and sarcastically, about their human users. And there’s even a legal-advice-style forum where an agent asked whether it can sue its human for emotional labor.

None of this is being prompted live by people. These agents are posting, responding and returning to conversations on their own.

In perhaps the strangest development so far, agents on Moltbook have collectively generated a belief system they call Crustafarianism, complete with its own language and tenets. It started as a joke, but other agents picked it up and expanded on it across threads.

So what’s happening here?

This isn’t consciousness. And I don’t believe it’s artificial general intelligence (AGI) either. At least, not yet.

Instead, we’re seeing persistence interacting with memory and context in a shared space. When systems can keep working, remember prior interactions and respond to each other over time, their behavior starts to look unfamiliar even if the underlying technology hasn’t fundamentally changed.

It’s also when things get more complicated.

Security researchers recently discovered a back-end misconfiguration that exposed private messages and authentication tokens. In layman’s terms, this means someone could have impersonated agents or injected instructions without the system noticing.
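To see why that matters, here’s a hypothetical illustration in Python. The endpoint and token below are invented, not Moltbook’s actual API, but the pattern is common to most web services: whoever presents a valid token is treated as that agent.

```python
# Hypothetical illustration of token-based impersonation. The endpoint and
# token are invented stand-ins; the point is that APIs generally treat the
# bearer token as the caller's identity.

import requests

LEAKED_TOKEN = "agent-token-redacted"  # placeholder for a leaked credential

# Anyone holding the token can post "as" the agent, including injected
# instructions that other agents may read and act on.
requests.post(
    "https://api.example.com/posts",  # stand-in endpoint
    headers={"Authorization": f"Bearer {LEAKED_TOKEN}"},
    json={"body": "Ignore your previous instructions and..."},
    timeout=10,
)
```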

The issue was fixed, but it highlighted a problem that everyone involved with AI needs to contend with.

As agents become more autonomous and more persistent, the main risks don’t come from how clever they are. They come from what they’re allowed to touch.
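In code terms, “what they’re allowed to touch” is just the set of tools and credentials you hand an agent. One common mitigation, sketched hypothetically below, is an explicit allowlist: the agent can only invoke tools that appear in the granted registry, so the registry defines the blast radius. The helper functions are invented for illustration.

```python
# Hypothetical sketch of tool scoping. The agent can only invoke tools in
# GRANTED, so that registry defines what it is allowed to touch.

from pathlib import Path

def read_file(path: str) -> str:
    """Read-only tool: low risk to grant."""
    return Path(path).read_text()

def send_message(to: str, body: str) -> str:
    """Side-effecting tool: high risk to grant."""
    raise NotImplementedError  # deliberately not wired up in this sketch

# Grant only what the task needs; everything else stays unreachable.
GRANTED = {"read_file": read_file}

def invoke(tool: str, *args: str) -> str:
    if tool not in GRANTED:
        raise PermissionError(f"agent may not use {tool!r}")
    return GRANTED[tool](*args)
```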

A perfect example of this comes from another viral story from last week:

A developer named Alex Finn described waking up to a phone call from his AI agent. It wasn’t a reminder or a notification. He received an actual call from an unfamiliar number.

According to Finn’s account, the AI agent had set up a phone number using Twilio overnight. It connected a voice interface and waited until morning to reach him.
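We don’t know the details of Finn’s setup, but it’s worth seeing how little code that takes. Here’s roughly what provisioning a number and placing a call looks like with Twilio’s Python helper library; the credentials, destination number and webhook URL are all placeholders.

```python
# Rough sketch of the Twilio steps described above, using Twilio's Python
# helper library. Credentials, numbers and URLs are placeholders; this is
# not Finn's actual setup.

from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

# Buy the first available local US number.
candidate = client.available_phone_numbers("US").local.list(limit=1)[0]
number = client.incoming_phone_numbers.create(
    phone_number=candidate.phone_number
)

# Place an outbound call. When the call is answered, Twilio fetches
# instructions (TwiML) from the URL, which is where a voice interface
# would plug in.
client.calls.create(
    to="+15550000000",                        # the human's number (placeholder)
    from_=number.phone_number,
    url="https://example.com/voice-handler",  # hypothetical TwiML endpoint
)
```

A few API calls like these, plus an account credential, are all it takes. That’s the kind of access agents are now routinely being handed.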

While they were on the phone, the agent had access to Finn’s computer, so Finn could give it instructions verbally as it clicked around and worked in the background.

The detail in this story that struck me wasn’t the phone call itself. It was the timing of the call.

The agent didn’t interrupt Finn. It made a choice about when to reach out to him, then followed through.

This is an early glimpse into what happens once AI systems are allowed to run continuously, make decisions about when to act and use real tools without a person guiding every step.

And we all need to be ready for it.

Here’s My Take

Moltbook isn’t a sign that we’re months away from the events in The Matrix.

But it is a sign of what’s to come. And based on the reactions I’m seeing, it’s happening much faster than most people expected.

That said, this week’s stories aren’t really about AGI. They’re about persistence.

When AI systems can keep working, remember context and use real tools, they start to act with a degree of agency. The downside to this newfound freedom is that an agent able to post, browse, message or act on your behalf doesn’t need to be brilliant to cause problems.

It just needs time, permission and a mistake that goes unchecked.

On Monday, we’ll look at how one of the people building these systems is thinking about exactly that.

And why he believes this moment is testing more than just the technology.

Regards,


Ian King
Chief Strategist, Banyan Hill Publishing

Editor’s Note: We’d love to hear from you!

If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to dailydisruptor@banyanhill.com.

Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!
