Yesterday another hacker tried to Trojan horse my Gmail account.
You’re familiar with the story of the Trojan horse from Greek mythology?
The hero Odysseus and the rest of the Greek army had laid siege to the city of Troy for a decade, but they still couldn’t breach its walls.
So Odysseus came up with a plan.
He had the Greeks construct a huge wooden horse. Then he and a select force of his best men hid inside it while the rest of the Greeks pretended to sail away.
The relieved Trojans pulled the giant wooden horse into their city as a victory trophy…
And that night Odysseus and his men snuck out and put a quick end to the war.
That’s why we call malware disguising itself as legitimate software a “Trojan horse.”
And it goes to show you how the push-and-pull between defense and deceit has endured throughout history.
Some folks build massive walls to protect themselves, while others try to breach those walls by any means necessary.
The struggle continues today in digital form.
Hackers steal money, disrupt commerce and destabilize governments by hunting for vulnerabilities in the walls that security software puts up.
Fortunately for me, the hacking attempt I experienced was easy to see through.
But in the future, it might get a lot more complicated to tell fact from fiction.
Here’s why…
What’s Real Anymore?
Imagine if we could create digital “people” that think and respond almost exactly like real humans.
According to a recent paper titled “Generative Agent Simulations of 1,000 People,” researchers at Stanford University have done exactly that. From the paper:
“In this work, we aimed to build generative agents that accurately predict individuals’ attitudes and behaviors by using detailed information from participants’ interviews to seed the agents’ memories, effectively tasking generative agents to role-play as the individuals that they represent.”
They accomplished this by using voice-enabled GPT-4o to conduct two-hour interviews with 1,052 people.
Then GPT-4o agents were given the transcripts of these interviews and prompted to simulate the interviewees.
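To make that concrete, here’s a rough idea of what “seeding an agent’s memory” with an interview transcript might look like in code. This isn’t the researchers’ actual pipeline; it’s a minimal sketch using OpenAI’s Python client, and the transcript file name and survey question are made-up examples.

    # Minimal sketch of a transcript-seeded "generative agent."
    # Illustrative only; not the Stanford team's actual code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The full two-hour interview transcript becomes the agent's "memory."
    # (interview_transcript.txt is a hypothetical file for this example.)
    with open("interview_transcript.txt") as f:
        transcript = f.read()

    system_prompt = (
        "You are role-playing as the person interviewed below. Answer every "
        "question the way they would, staying consistent with their stated "
        "attitudes, beliefs and life history.\n\n"
        "--- INTERVIEW TRANSCRIPT ---\n" + transcript
    )

    # Ask the simulated person a survey question (a made-up example).
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Does new technology generally make "
                                        "life better or worse? Why?"},
        ],
    )

    print(response.choices[0].message.content)

Repeat that pattern across more than 1,000 transcripts, and you have a population of simulated respondents you can survey at will.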
And they were eerily accurate in mimicking actual humans.
When the scientists gave these AI agents the same surveys and tasks the original participants had completed, the agents reproduced the participants’ answers with 85% accuracy, measured against how consistently the participants matched their own answers two weeks later.
The end result was like having over 1,000 super-advanced video game characters.
But instead of being programmed with simple scripts, these digital beings could react to complex situations just like a real person might.
In other words, AI was able to replicate not just data points but entire human personalities complete with nuanced attitudes, beliefs and behaviors.
Naturally, some wonderful upsides could stem from the use of this technology.
Researchers could test how different groups might react to new health policies without actually risking real people’s lives.
A company could simulate how customers might respond to a new product without spending millions on market research.
And educators might design learning experiences that adapt perfectly to individual student needs.
But the really exciting part is how precise these simulations can be.
Instead of making broad guesses about “people like you,” these AI agents can capture individual quirks and nuances…
Zooming in to understand the tiny, complex details that make us who we are.
Of course, there’s an obvious downside to this new technology too…
The Global Trust Deficit
AI technology like deepfakes and voice cloning is becoming increasingly realistic…
And it’s also increasingly being used to scam even the most tech-savvy people.
In one case, scammers used AI to stage a fake video meeting in which deepfakes of a company’s CEO and CFO persuaded an employee to wire $20 million to the fraudsters.
But that’s chump change.
Over the past 12 months, scammers worldwide have bilked victims out of more than $1.03 trillion.
And as synthetic media and AI-powered cyberattacks become more sophisticated, we can expect that number to skyrocket.
Naturally, the rise of AI scams is leading to a global erosion of online trust.
And the Stanford paper shows how this lack of trust could get much worse, much faster than previously expected.
After all, it demonstrates that human beliefs and behaviors can be convincingly replicated by AI.
If You Can’t Beat ‘Em…
And that brings us back to Odysseus and his Trojan horse.
Artificial intelligence and machine learning are changing everything…
So cybersecurity can no longer be about building impenetrable fortresses.
It needs to be about creating intelligent, adaptive systems capable of responding to increasingly sophisticated threats.
In this new environment, we need technologies that can effectively distinguish between human and machine interactions.
We also need new standards of digital verification to help rebuild trust in online environments.
Businesses that can restore digital authenticity and provide verifiable digital interactions will become increasingly valuable.
But the bigger play here for investors is with the AI agents themselves.
The AI agents market is expected to grow from $5.1 billion in 2024 to a whopping $47.1 billion by 2030.
That’s a compound annual growth rate (CAGR) of 44.8% over those six years.
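If you want to check that figure yourself, the standard CAGR formula applied over the six years from 2024 to 2030 works out like this:

    CAGR = (47.1 / 5.1)^(1/6) - 1 ≈ 0.448, or about 44.8% per year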
And that’s something you can believe in.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing