
A Few More Hiccups in the AI Story
Dear subscriber,
This week, Nvidia's (NVDA) quarterly earnings announcement was the most-watched event in the market.
That's not necessarily surprising.
For one, it reports later than most other companies, so it doesn't have to compete for the spotlight. Second, its results offer insight into the sustainability of the AI boom – one of the biggest forces driving the market today... and something nearly every investor is paying attention to.
We'll admit that we also like to check in on Nvidia and other AI-related companies each quarter. It helps us figure out if the spending spree on data centers and other infrastructure will continue.
In truth, Nvidia has become a bellwether for the broader economy. The company's growth has largely been fueled by four major data-center builders – Microsoft (MSFT), Meta Platforms (META), Amazon (AMZN), and Alphabet (GOOGL).
And when these Big Tech companies eventually slow their spending, it could signal a major top in the market.
For now, Nvidia's customers – and investors – are still banking on an AI future.
Even though the path for this emerging tech is a bit murky, Nvidia shares are powering higher. The stock is already up nearly 50% since its April low.
The chip giant posted stellar results last quarter. Total revenue rose 69% year over year. That was largely driven by revenue from its data-center division, which jumped 73% from the year prior. On top of that, it clocked an amazing $26 billion in free cash flow.
Nvidia did say that it would lose $8 billion in sales due to new restrictions on exporting chips to China, but the market seemed to have already priced that in.
Here's the one thing that deserves some deeper attention...
Nvidia specifically called out a surge in "inference."
Let me explain...
AI chips are used for two things. The first is training new models – the massive computing projects that create them in the first place.
The second is running those models once they've been developed. Using a trained model is called inference.
In Nvidia's earnings call, Chief Financial Officer Colette Kress said that the company is witnessing a sharp jump in inference demand. She noted that Microsoft's inference use alone jumped fivefold year over year.
CEO Jensen Huang attributes the rise in demand to newer reasoning models, which "think" for much longer than earlier models. One of the newest coding models reportedly can think about a problem for nearly seven hours straight.
This is an important shift to watch.
For starters, it means AI adoption is picking up. However, it may also mean that model development (i.e., the rigorous training that models go through) is becoming less important.
That brings us to another piece of AI news...
Anthropic recently released two new AI models: Claude Sonnet 4 and Claude Opus 4.
These models are state of the art. By the numbers, they sit at or near the top of the leaderboard on almost every benchmark.
But judging models is tough. After all, there are many different ways to measure "intelligence." And the standard AI benchmarks measure performance on specific challenges. Meanwhile, the tech companies sometimes tune their models to do well on those exact benchmarks... rendering those benchmarks less useful.
Because of that, much of the judgment of new models comes down to vibes and anecdotes.
Whenever a new model is released, the Internet comes alive with countless tests and opinions about its capabilities, trying to see what it does well... and where it lags.
In the case of the Claude 4 family of models, some of the progress has been disappointing.
The models are great at coding, sure. But for more general uses, the progress is less apparent.
Both Claude Sonnet 4 and Claude Opus 4 get low scores when it comes to making predictions on basic physics scenarios... doing worse than the Claude 3.7 models.
Another example... Claude 4 still thinks that 8.8 minus 8.11 is negative 0.31. (The correct answer is positive 0.69.) Claude 4 explains (erroneously), "When subtracting 8.11 from 8.8, you get a negative result because 8.11 is larger than 8.8."
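The model's mistake is one of place value – 8.8 is really 8.80, which is larger than 8.11. A quick check in Python (using the standard decimal module to sidestep binary floating-point rounding) confirms the difference is positive:

```python
from decimal import Decimal

# 8.8 is 8.80 -- larger than 8.11 -- so the difference is positive.
result = Decimal("8.8") - Decimal("8.11")
print(result)  # 0.69
```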
Now, the specific performance of these models probably doesn't matter that much. But it gets at a bigger question – one that's central to the future of AI and investing in it...
How good will these AI models get, and how fast will they get there?
If you listen to some AI researchers, they'll tell you that we'll have "AI superintelligence" (the point at which AI surpasses humans in knowledge and capability) by 2026.
There's no doubt they know more about AI than I do. But I've also been around long enough to see experts get sucked into the hype of their own industry and get a little too excited about the future...
There's a big gulf between AI models that will end up as helpful productivity tools... and ones that exhibit all-powerful superintelligence.
Understanding that is important for investors.
I may sound like an AI skeptic – but I'm just trying to navigate those differences.
The development of AI is in a transition. The first explosion in progress came from scaling. Models chewed up more and more data, and that made them better.
But they've already eaten up the whole Internet... So now, models are improved via a process called "reinforcement learning."
This means models are asked to complete a task, they get graded on whether they did it right, and then they learn to do the task better.
This is a big shift. And it's why models have gotten so much better at coding.
Coding, after all, has a clear right and wrong: the code either does what it's supposed to do, or it doesn't. The AI companies can throw tons of coding challenges at their models and keep making them better.
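Here's a minimal sketch of why coding is so easy to grade – the "reward" can simply be whether a candidate solution passes a set of tests. (The grader and the toy task below are hypothetical, for illustration only.)

```python
# Hypothetical sketch: grading candidate coding solutions automatically.
# Each task comes with test cases; the reward is a simple pass/fail signal.

def grade(candidate_fn, test_cases):
    """Return 1 if the candidate passes every test, else 0."""
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0
        except Exception:
            return 0
    return 1

# Two hypothetical model outputs for the task "add two numbers":
good_solution = lambda a, b: a + b
bad_solution = lambda a, b: a - b

tests = [((2, 3), 5), ((-1, 1), 0)]
print(grade(good_solution, tests))  # 1
print(grade(bad_solution, tests))   # 0
```

Grading an essay or a recipe has no equivalent of this clean pass/fail signal – which is exactly the gap the next paragraph describes.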
But other tasks aren't so easy to grade... such as writing a philosophy essay or creating a tasty recipe.
That's why we've seen some weird things happen. Many users say Google's Gemini 2.5 Pro got better at coding than previous versions... but worse at everything else.
Plus, reinforcement learning involves lots of judgment and fine-tuning. That's why OpenAI's GPT-4o turned into a butt-kissing sycophant.
That brings us back to Nvidia and Huang. On the company's earnings call, he explained...
The best way to think through it is that AI is several things. Of course, we know that AI is this incredible technology that's going to transform every industry, from, of course, the way we do software to health care and financial services, to retail, to... transportation, manufacturing, and we're at the beginning of that...
We're really at the very beginning of it, because the adoption of this technology is really kind of in its early, early stages.
Of course, being in the "early, early stages" could also be interpreted to mean that AI doesn't really do much right now.
And while technologists dream of automated employees and "agentic AI" operating out in the real world, it's not happening yet.
It may be possible... and it'll just take a little more time.
Or it's possible that these big dreams will follow the path of self-driving cars.
Self-driving cars saw rapid progress from machine learning in the 2010s. But solving the last few problems to actually get them on the road has taken years... and even now, there are still issues.
AI could be a cool tool... or the entire future of humanity. The truth about AI's future will, like most things, lie in the middle. We'll need to keep watching to find out exactly what that looks like.
What Our Experts Are Reading and Sharing...
Outside Nvidia's earnings, the biggest news this week was that a U.S. trade court ruled that President Donald Trump's tariffs are illegal. It's also a bit of non-news... in that Trump's camp will fight back to find ways to keep the tariffs in place. Trump already plans to ask the Supreme Court to pause the trade court's ruling, but that's a tough ask.
Much of Wall Street says that "private is the new public." Investors have clamored to get into private equity (and credit) on the premise that it provides higher, but safer, returns. (That's up for debate.) Through the first three months of the year, though, fundraising in the private-equity space has dropped 35% as investors fret that there's no way to exit these illiquid investments.
Last week, I talked about how Apple (AAPL) faced competition from other AI companies if its own tools couldn't deliver. Around the same time, OpenAI announced that it will spend nearly $6.5 billion to buy Jony Ive's hardware startup. Ive is a legend in the tech industry... and was the designer of the iPod, iPhone, Apple Watch, and more.
New Research in The Stansberry Investor Suite...
Last week, we profiled a promising beverage company that, according to our Stansberry Score, didn't quite make it as a top-tier stock.
In short, the company was riding high on fads... not fundamentals.
This week, we've dug deeper to find something better – a beverage company whose stock has soared nearly 8,000% over the past 20 years.
You may think that means the ride is over... that there couldn't possibly be any more upside from here.
But you'd be wrong.
What's so special about this stock isn't that it sells a lot of its product. (Don't worry, it does that, too.)
Rather, it runs its business in such a way that allows it to generate huge cash flows. That allows it to keep its debt levels extremely reasonable. And, in turn, it gives massive amounts of cash back to shareholders.
As we hinted at last week, this stock could be the next Coca-Cola (KO)... in the sense that it could be one of the greatest wealth-building beverage stocks of all time.
Stansberry Investor Suite subscribers can read the entire report here.
If you don't already subscribe to The Stansberry Investor Suite – and want to learn more about our special package of research – click here.
Until next week,
Matt Weinschenk
Director of Research
What do you think about This Week on Wall Street? Send any and all feedback to thisweek@stansberryresearch.com. We read every e-mail you send in.