
Here's How the AI Crash Happens
Dear subscriber,
After a flubbed product release, questions about AI have been ripping through Silicon Valley and Wall Street.
But AI stocks continue to power ahead.
Some skeptics have always dismissed new models and the advances they make. But the chorus is especially loud right now. And some of their questions deserve serious thought.
That's because at some point AI stocks will have a significant pullback... I promise it's coming.
Yet many investors are acting like nothing can stop or even slow the AI takeover of the economy.
No market trend rips higher without bumps in the road. It simply doesn't happen.
And if you study AI technology, you'll see the stocks have the potential for a monster crash.
You need to know exactly how it can happen. That way, you will be ready.
The warning of an AI crash runs contrary to what many will tell you. Lots of "experts" will tell you that the advance of AI is inevitable.
Nothing is inevitable.
Not in markets. And not in technology.
With decades of experience, I've seen this story play out before.
Generative AI is new. But the cycle of tech promises, hype, and speculation is a tale as old as money itself.
I won't claim to know when AI stocks will crash. I can tell you how it will happen.
And most important, I can tell you what you need to watch to stay safe...
The Future Is Not as Close as It Seems
First, watch out for a slowdown in AI progress.
Last week, OpenAI released GPT-5. The reception was tepid.
OpenAI has typically unveiled huge improvements in intelligence with each new version. And it has been hyping GPT-5 for years. But GPT-5 doesn't appear to make a big leap.
It still makes silly errors, like counting three letter "b"s in the word "blueberry." (The word has only two.)
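As a quick sanity check (a trivial Python one-liner, not anything from OpenAI's own testing), a computer counting characters directly gets the right answer instantly. LLMs stumble here because they process text as tokens (chunks of words), not as individual letters:

```python
# "blueberry" contains two "b"s -- not the three an LLM may report.
# Ordinary string counting is exact; LLMs see tokens, not letters,
# which is why such trivial tasks can trip them up.
word = "blueberry"
print(word.count("b"))  # prints 2
```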
To be fair, with millions of users, you can always find some weird results. There's always something to nitpick. But we need to see more progress on accuracy.
OpenAI CEO Sam Altman admitted that the release seemed "dumber"... but blamed it on a technical glitch.
The consensus seems to be that GPT-5 is more a new product than a new level of intelligence. It combines several models into a single interface and should cut computing costs.
But as for being smarter? It's a little better. But users were expecting much more.
"The much-hyped release makes several enhancements to the ChatGPT user experience," the MIT Technology Review says. "But it's still far short of [artificial general intelligence]."
Just to be clear, "artificial intelligence" simply refers to computers that can do something similar to thinking and learning. Large language models ("LLMs") are the most recent breakthrough that became public with ChatGPT. Interacting with an LLM is the closest thing to intelligence we've ever seen from a computer.
Artificial general intelligence ("AGI") is the next stage, where computers approach or even exceed human levels of cognition.
We're not there yet. And that leads to a fundamental question that plagues AI progress today:
Just how smart can LLMs actually get? Will the kind of "thinking" that LLMs perform ever become AGI or the promised "superintelligence"?
It's possible that we're about to hit a hard wall.
After all, LLM progress has come in two stages: making models bigger, then fine-tuning them.
The early leaps came from ingesting more data and running more training time.
But this progress had a limit. The LLMs have already ingested all the available data and eaten the entire Internet.
So the model builders turned to fine-tuning them with what's called "reinforcement learning."
Reinforcement learning is more expensive and focused on more specific tasks than growing models. You ask an LLM to perform a task, like solve a math problem or write an essay. Then you grade the result. And the LLM learns by getting that feedback. You hope that getting better at one particular task helps the LLM get better at other similar tasks.
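The grade-and-adjust loop described above can be sketched in a few lines. This is a toy illustration, with the numeric "task," the grader, and the update rule all invented for the example. Real reinforcement learning on an LLM is vastly more complex, but the feedback principle is the same: attempt, grade, keep what scores better.

```python
import random

random.seed(0)

def grade(answer, target):
    """Grader: the closer the answer is to the target, the higher the score."""
    return -abs(answer - target)

target = 42.0       # stand-in for "the correct behavior"
model_param = 0.0   # stand-in for the model's billions of weights

for step in range(300):
    # The "model" attempts the task with some exploration noise...
    attempt = model_param + random.gauss(0, 1.0)
    # ...and keeps whatever behavior the grader scores higher.
    if grade(attempt, target) > grade(model_param, target):
        model_param = attempt

print(model_param)  # ends up close to 42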
We're still in this stage. And it's not clear that extending the early success from scaling is a fair projection of the future.
AI developers are also facing another upcoming challenge called "model collapse." That's when AI models get trained on AI output, rather than human work. When that happens, the models begin to degrade with further training, rather than improve.
LLMs are trained on the Internet. And if you've been on the Internet lately, you may notice that a lot of it looks like it's written by AI.
Between X, Reddit, product reviews, and other short-form content... you can tell a lot of it is AI slop. When models like ChatGPT start consuming large volumes of low-quality, inaccurate material, they get worse, not smarter.
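Model collapse can be demonstrated with a toy statistical analogue (a deliberate simplification, not how LLM training actually works): fit a distribution to data, then train each new "generation" only on samples drawn from the previous generation's fit. Over many rounds, the diversity of the distribution steadily collapses:

```python
import random
import statistics

random.seed(1)

# Generation 0: the "real" data distribution (mean 0, spread 1).
mu, sigma = 0.0, 1.0

for generation in range(1000):
    # Each generation is trained only on the previous generation's output.
    samples = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)

# The learned spread has collapsed far below the original 1.0.
print(f"spread after 1000 generations: {sigma:.6f}")
```

Each fit loses a little information about the original data, and those losses compound: the models drift toward a narrow, degraded version of what they started with.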
Finally, LLMs continue to simply make up facts, known as "hallucinations."
It was funny when early AI models invented facts out of whole cloth. Now every model carries a disclaimer telling you to double-check the output.
They are getting better... But if we're to trust these AI models as "agents" that go and perform real-world tasks for us, even error rates of 1% or 2% could make that impossible. The current AI tools have not proven that they can ever get good enough.
OpenAI reported that GPT-5 reduces hallucinations by roughly 25% to 45% on internal tests.
However, when you run some standard error-rate tests on GPT-5, it shows no actual improvement. (Performance measurement is always complicated with LLMs.)
The same dynamic has played out with self-driving cars.
In the early 2010s, researchers made huge progress in autonomous driving.
It wasn't hard to get a car to pilot itself around simple roads in controlled conditions. It was a breakthrough that seemed impossible just 10 years before.
Everyone got excited and predicted that fully self-driving vehicles were just a few years away.
However, that last 5% or 10% improvement has been incredibly difficult to achieve. We are only now starting to see Waymo and others offer fully autonomous vehicles.
The promise of AI evangelists is that the technology will automate away entire white-collar jobs... it will book your flights for you... and it will run entire businesses on its own.
But it still has problems with basic facts and interacting with the real world.
Making those last few improvements may end up taking a lot longer than people thought just a few months ago.
I'm not saying that AI isn't real or that it doesn't provide useful tools. But I'm simply warning that the expected curve of rapid progress may be bending down a bit.
And some of the AI leaders may be walking back the hype.
In February, Sam Altman published a blog post claiming that OpenAI's goal was to create AGI and that "systems that start to point to AGI are coming into view."
But last week, Altman told CNBC that AGI is "not a super useful term." And separately, when talking about GPT-5, he said, "we're still missing something quite important" when it comes to AGI.
AI Has Broken the Financials of Technology
Investors have a lot to consider. Not only is AI facing some developmental challenges, but the financial math of technology has fundamentally changed.
The magic of big consumer technology products like Google and Facebook is that an additional user costs almost nothing in computer power. Once these platforms were built, more users meant more revenue at a nearly 100% profit margin.
But AI is expensive. Running a query costs real money in computing. For a particularly demanding question, a single query can cost $1,000 or more.
That makes it tough to be profitable.
The only ones making money from AI right now are the ones charging AI companies for services. That means the big cloud providers – Microsoft (MSFT), Alphabet (GOOGL), Amazon (AMZN), and others.
The other winners right now are the picks-and-shovels players who sell the basic tools those cloud providers need.
Nvidia (NVDA) makes high-end computer chips. Cisco Systems (CSCO) provides networking hardware. Comfort Systems USA (FIX) builds cooling systems. Constellation Energy (CEG) supplies power. These firms are enjoying banner years.
But the AI companies themselves don't generate any profits.
In 2024, OpenAI lost $5 billion on less than $4 billion in revenue. It will likely lose even more money this year.
Anthropic – the maker of Claude – will lose billions as well.
You can keep going down the chain...
Companies with AI tools like Perplexity's notebook and Cursor's coding agent keep raising money because they keep burning through cash.
With AI... more users means more compute costs and bigger losses.
That could make user growth a threat, rather than an opportunity.
So here's how the crash happens...
How to Invest Safely in AI
There's no guarantee that AI progress will continue at an exponential rate or even just fast enough for financial markets.
If progress slows, the benefit of using AI won't justify the costs. AI is fun and useful now, but to justify the hundreds of billions in data-center spending, it needs to be a real game changer.
If it's not, the revenue from actual customers won't show up.
The AI companies won't be able to cover their costs, and they'll start falling apart under the weight of their losses.
That would lead the big cloud providers to slash infrastructure spending. All the AI names would see growth expectations change dramatically – obliterating their share prices.
Some industry watchers say this can't happen.
They point to the spending on AI as proof that it will work.
After all, the smartest people in Silicon Valley wouldn't spend all this money if they weren't certain AI would work, right?
But the smartest minds have gotten it wrong many times before. When you pair technological advances with investment, this sort of crash has happened again and again.
You can see it in the big crashes.
We dramatically overbuilt railroads in the 1800s. Silicon Valley went way overboard building Internet capacity in the late 1990s.
And you can see it in smaller ones. Facebook burned nearly $50 billion pursuing the "metaverse," even changing its name to Meta Platforms (META). Its efforts have amounted to nearly nothing so far.
Blockchain was going to revolutionize financial markets overnight. While cryptos are still around and doing well, the use of blockchain as a technology is much smaller than promised.
So... what do you need to do?
You just need to be careful.
Skilled investors know that success hinges less on what you invest in... and more on the size of your investments.
You should invest in AI. It's big. It's hot. We might hit a wall in two years... or maybe not at all.
But you also need to understand the risk here.
I'm seeing too many investors get too excited.
Back during the dot-com boom, millions of investors held nothing but Internet stocks. And they were right. The Internet was huge. But along the way, Internet stocks collapsed 90%.
The S&P 500 Index is heavily weighted toward technology right now. If you own exchange-traded or mutual funds, you already hold a lot of tech. The AI boom is driving all of those tech stocks.
It's a cliché... but stay diversified.
Investors hear that all the time. But times like these are when it really matters.
It's times like these that investors feel the temptation of big gains and throw prudence by the wayside.
Nothing is a sure thing. And as prices rise, the risk of a painful collapse goes up.
AI can absolutely stumble as a technology... and crash as a business.
So monitor your AI exposure carefully. Own lots of different stocks and sectors. And know that markets can make you feel wrong in the short term, even if AI is our long-term future.
What Our Experts Are Reading and Sharing...
- The Economic Innovation Group put together a great study showing that AI is not yet causing job losses. The public policy organization ranks jobs by how likely they are to get replaced by AI. (Budget analysts are at high risk, dancers at low.) And the researchers find the occupations most at risk of AI are doing just fine. They have low unemployment rates, and few at-risk workers are switching occupations. It's early, of course. But no "job apocalypse" yet.
- Inflation held steady in July, despite expectations it would rise a bit. Looking at tariff-sensitive sectors, footwear was up a bit, but appliances fell. But the producer price index – which measures the costs manufacturers and other businesses pay for inputs – showed a surprising jump of 0.9%. Often, the producer price index leads, with the consumer price index following.
- In usual times, we skip the Federal Reserve appointment drama. These are not usual times. President Donald Trump has cut his candidate list to "three or four" to replace current Fed Chair Jerome Powell, whom Trump calls a "numbskull" and a "moron"... despite appointing him to the position in 2018.
New Research in The Stansberry Investor Suite...
What if you saw a business worth about $20 billion in revenue? But you weren't allowed to compete with it. You just had to sit and watch as it raked in cash.
Then, all of a sudden, the rules changed, and anyone could vie for a piece of that money for themselves... and yet... almost no one did anything?
That's the story of a specific sector of the generic pharmaceutical market.
Generic drugs are copycat versions of branded drugs whose patents have expired. Once a patent expires, anyone can join the competition. That drives down drug prices.
But in the last couple decades, drugmakers have developed a new kind of treatment called a "biologic." While most drugs are simple chemical compounds (called "small molecules"), biologics are more complicated and harder to produce. They have to be carefully cultured in living cells.
So when biologics have gone off patent... they've often stayed expensive and profitable.
But this month, in Stansberry Innovations Report, John Engel and the team have found a company aggressively attacking this rich market.
It's led by a man who already created a $3 billion global pharma company from essentially nothing.
And now he's doing it again with a new business.
The company is still small, but the potential market for biologics is huge... on the order of $232 billion.
And this company just turned profitable (on an earnings before interest, taxes, depreciation, and amortization, or EBITDA, basis), which should help it earn attention from the market and fund more growth as it invests in new drugs.
Stansberry Investor Suite subscribers can read the entire report here.
If you don't already subscribe to The Stansberry Investor Suite – and want to learn more about our special package of research – click here.
Until next week,
Matt Weinschenk
Director of Research
What do you think about This Week on Wall Street? Send any and all feedback to thisweek@stansberryresearch.com. We read every e-mail you send in.