One of the newest stocks in our Extreme Fortunes model portfolio is seeing a strong surge.
On Tuesday, the company announced a major contract — worth over $100 million — from a key player in the international defense sector.
And as of this morning’s opening bell, its stock is up 56% since we added it to our portfolio.
In a roundabout way, this company’s success ties to a recent message we received from a Daily Disruptor reader.
Let me explain.
After the DeepSeek-related stock meltdown last week, Donald H. wrote in with an interesting thought:
I’m recalling the demise of the Betamax VCR in the face of the less expensive though lower quality VHS system that was more generous in its licensing of the technology. Will we see that sort of a ‘Good System vs. a Good Enough System that costs less’ slow fade of super-GPU development?
It’s a great question, but it isn’t an apples-to-apples comparison.
For one, it seems that the initial claims about Chinese startup DeepSeek’s cost-efficient AI were… exaggerated.
DeepSeek claimed its R1 model cost only $6 million to train, using just 2,048 GPUs.
But industry analysts found that the company invested $1.6 billion in hardware, including 50,000 Nvidia Hopper GPUs.
These are top-of-the-line GPUs used to build high-performance AI systems.
What’s more, the analysts estimated that DeepSeek spent closer to $944 million on operating costs.
That final training run might have only cost $6 million, but the entire enterprise was much more expensive.
And when you dig deeper into the release of DeepSeek-R1, you’ll find that there are more factors at play here…
The DeepSeek Difference
As I mentioned last week, DeepSeek was able to “hack” the normal way of scaling AI models by having a better, existing model generate its training data.
OpenAI’s o1 reasoning model is able to think through the steps of a problem, and it uses that chain of thought to come up with answers.
It turns out that by essentially distilling OpenAI o1, DeepSeek was able to train its AI model much faster and more efficiently.
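If you want to see that idea in miniature, here’s a rough Python sketch of distillation: a small “student” network is trained to mimic a larger “teacher’s” outputs. This is a toy illustration of the general technique, not DeepSeek’s actual pipeline — the model sizes, data and settings here are made up.

```python
# Toy illustration of knowledge distillation: a small "student" network
# learns to mimic the outputs of a larger "teacher."
# NOTE: This is NOT DeepSeek's actual pipeline -- just the core idea that
# training on a stronger model's outputs is cheaper than training from scratch.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A larger "teacher" model (stands in for a strong reasoning model like o1).
# In real distillation the teacher is already trained; here it's random
# weights just to keep the example self-contained.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
# A much smaller "student" model (stands in for a cheaper model like R1).
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's outputs so the student gets more signal

for step in range(1000):
    # In real distillation these would be prompts / training examples;
    # random inputs are used here purely for illustration.
    x = torch.randn(64, 32)

    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher "generates the data"

    student_logits = student(x)

    # Match the student's output distribution to the teacher's (KL divergence).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```

The point of the sketch is simple: the student never has to rediscover anything from scratch. It just learns to reproduce what the stronger model already knows, which is far cheaper than full-scale training.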
And it does seem that the R1 model is more efficient to run than existing AI models in the U.S., including o1.
But the biggest factor that makes the release of DeepSeek-R1 such a game-changer is that its model is open-source.
OpenAI and Anthropic keep the algorithms and training data behind their ChatGPT and Claude AI models secret.
Google and Meta call their AI models open-source, but their training data sets haven’t been made public, and their licenses restrict how the models can be used.
But DeepSeek made its R1 model available for anyone to download, copy and build on.
As I mentioned in last Friday’s Daily Disruptor, the Jevons paradox tells us that with cheaper and more efficient AI becoming available, we should see an increase in its use.
This will almost certainly help accelerate innovation in the AI space.
With less need to build specialized models from scratch, developers can focus on creating specialized applications.
This should bring more people to AI and help us start solving real-world problems with it.
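To put rough numbers on that Jevons logic — and these figures are purely hypothetical — suppose the cost per AI query falls 10-fold while usage grows 50-fold. Total spending on compute still climbs:

```python
# Back-of-the-envelope Jevons paradox arithmetic -- all numbers are hypothetical.
cost_per_query_before = 0.010   # dollars per query
cost_per_query_after = 0.001    # a 10x efficiency gain
queries_before = 1_000_000_000  # 1 billion queries
queries_after = 50_000_000_000  # cheaper AI pulls in far more usage

spend_before = cost_per_query_before * queries_before
spend_after = cost_per_query_after * queries_after

print(f"Total spend before: ${spend_before:,.0f}")  # $10,000,000
print(f"Total spend after:  ${spend_after:,.0f}")   # $50,000,000
```

Efficiency went up 10x, yet total demand for compute went up 5x. That’s the paradox in a nutshell.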
But what does all this reveal about Donald’s potential “slow fade of super-GPU development”?
Here’s My Take
As you know, my thesis is that the Trump administration will spearhead a “Manhattan Project” to win the race to artificial superintelligence (ASI).
I believe DeepSeek-R1 has changed the trajectory of this project…
But not in the way you might think.
Marc Andreessen called DeepSeek’s R1 release “AI’s Sputnik moment.”
And I agree with him.
But remember how that moment played out.
Sure, the Soviet Union launched the first satellite into orbit…
But that event acted as a catalyst for the U.S. to land on the moon first and ultimately win the space race.
I believe the same thing is about to happen with AI and Trump’s “Manhattan Project.”
China might have launched an amazing model, but America will win the race to ASI.
Here’s what reader Glenn R. wrote in to say about the state of AI today:
I’m a retired electrical engineer and an early adopter of ChatGPT (free version). This era is reminiscent of the early days of personal computers.
These [early computers] relied on crude (read: inexpensive) magnetic tape (Philips cassette) and floppy disk storage (an IBM development), and some versions of Tiny BASIC and MS-DOS.
The driver was the widespread need for personalized productivity. That need still exists.
He’s saying that we’re still in AI’s infancy. And he’s right.
The technology is just progressing at a rate that we’ve never seen before.
U.S. companies can learn from what DeepSeek did to create cheaper and more efficient AI models.
Instead of a Betamax vs. VHS situation, companies could simply offer different tiers of AI depending on customer needs.
But we still need advanced hardware to win the race to ASI.
Think about it this way. Today’s PCs are more powerful than the ones we built 10 years ago, but that doesn’t mean we’ve stopped improving the CPUs that run them.
If anything, AI and “super-GPU” technology development should go hand in hand.
We also need to massively grow our AI infrastructure if we want to achieve ASI first.
Again, the Jevons paradox tells us we can expect more people to use AI even as it becomes cheaper and more efficient.
And this will continue to drive up energy and data storage needs.
Look, for all the hand-wringing about DeepSeek reducing the need for hyperscaling our AI infrastructure…
Google seems unfazed. The company just committed $75 billion to developing AI this year.
To me, that’s the biggest news of the week.
It confirms that hyperscalers aren’t cutting back.
And I believe it’s a positive sign for Trump’s “Manhattan Project” moving forward.
As this situation develops, companies like the one in our Extreme Fortunes model portfolio that’s already up 56% could increasingly benefit from bigger government contracts…
Giving investors the chance to make a potential fortune.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing