How Cash Flows Through the AI 'Value Chain'
Dear subscriber,
It's hard to imagine the schedule of a Silicon Valley CEO today...
A couple weeks ago, OpenAI made a $300 billion deal to buy processing power from Oracle (ORCL).
Shortly after that, Nvidia (NVDA) pledged to invest $100 billion into OpenAI as part of a strategic partnership to build 10 gigawatts of AI data centers.
Then, CoreWeave (CRWV) announced that it had added an additional $6.5 billion to its standing deal with OpenAI. A week later, it signed another $14.2 billion deal with Meta Platforms (META) for AI cloud infrastructure.
Under normal circumstances, deals with this many zeros would take weeks or months to hammer out.
But the pace of activity in Silicon Valley is blistering nowadays... with deals worth tens of billions of dollars happening seemingly overnight. You have to wonder if these tech CEOs even sleep anymore.
Many of these deals come as a big surprise, upending strategies pursued for the past three years.
Meta, for instance, developed its own large language model ("LLM") – known as Llama – and has spent billions of dollars in salaries for its own AI research team, plus tens of billions on infrastructure.
But now there's news that Mark Zuckerberg is in talks with Alphabet (GOOGL) to use Google's model to improve Meta's ad business.
Then there's Microsoft (MSFT), which has been a major investor and supplier for OpenAI over the years... but just cut a deal with Anthropic to use its Claude technology in Office 365 apps.
In the AI industry, money and computing power are flying around every which way.
Everything I've mentioned so far happened in September alone. It might well take a "superintelligence" to track it all.
Importantly, it's not always clear what every AI or AI-related company does these days... and where the money comes from. And that makes it hard to be a smart AI investor.
So today, we're going to break down the AI "value chain." We'll look at what I call the four layers of the AI industry... show you which ones are the most profitable... and cover the circular nature of AI spending and the risks it poses to the market.
The Different Layers of the AI Industry
The term "artificial intelligence" doesn't have a simple definition. Neither does the concept of an "AI company."
The stocks ripping higher on AI excitement do all sorts of things. Some are extraordinarily profitable... and others (so far) have only burned through cash.
I break down the AI industry into four layers:
- Applications,
- Foundational models,
- Hyperscalers, and
- Hardware and infrastructure.
AI end users – those that actually use AI for everyday tasks – work with the applications. So let's start there...
A lot of people use AI-powered apps like the search engine Perplexity or the code generator Cursor.
These apps are built on top of foundational models made by the big AI companies. And the AI app companies pay to use those models with the money they collect from subscription revenue.
These app companies aren't even close to profitable yet. Because AI compute is so expensive, each user action actually loses them money.
This extends to popular chatbots like OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude.
(As we go along, you'll see that many of these AI companies straddle several layers.)
The chatbots really sit at the application layer. They are ways for consumers and businesses to access AI.
In the case of ChatGPT, it just so happens that the AI bot is built by the same company that built the foundational models... OpenAI.
If we could separate ChatGPT from OpenAI, it's likely that ChatGPT would pay OpenAI more than it makes in revenue. The same goes for Claude and Gemini. These chatbots just aren't profitable yet.
That said, building the foundational models doesn't make you any money, either...
I consider the big five foundational models to be GPT, Gemini, Claude, Llama, and xAI's Grok.
The companies behind these models have massive compute costs. They not only pay to have their models give answers to users (known as "inference")... but they also spend hundreds of millions of dollars on training to improve the models.
These companies don't disclose specific financials, but building models is far from profitable.
Reuters reports that OpenAI brought in $4.3 billion in revenue in the first half of 2025... but posted a loss of about $2.5 billion.
Elon Musk's xAI is burning about $1 billion per month. And Alphabet and Meta are probably spending about the same on their AI businesses.
The foundational models simply send too much money to the next layer: the hyperscalers.
The hyperscalers are the ones building and running massive data centers, which they rent out to other companies looking to train and run their models.
Prior to the AI boom, the three biggest hyperscalers were Amazon (AMZN), Microsoft, and Alphabet, in that order.
As time has gone on, the market has grown. There are now specialty cloud-computing companies like CoreWeave that cater specifically to AI companies. There's also Oracle, which, as we mentioned earlier, is now expanding its compute capacity to provide processing power to OpenAI. (The deal sent Oracle's shares up 36% the day of the announcement.)
Of course, the model builders are also becoming their own hyperscalers. Meta, OpenAI, and xAI are all building their own data centers.
This is where all the money is being spent. The hyperscalers have earned outstanding margins by renting out traditional compute.
They have a business with profitable "unit economics," meaning they can sell compute for more than it costs them to supply. But they're also investing that cash right back into the business for the future.
You can see this when you look at the financials of a pure play like CoreWeave. The company has positive cash flow from operations, but negative free cash flow after you consider its capital spending. And that spending is going to the next layer: hardware and infrastructure.
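The relationship described above can be summed up in one line of arithmetic: free cash flow is operating cash flow minus capital spending. Here's a minimal sketch in Python. The dollar figures are hypothetical, chosen only to illustrate the pattern of a hyperscaler in build-out mode; they are not CoreWeave's actual financials.

```python
def free_cash_flow(operating_cash_flow: float, capital_spending: float) -> float:
    """Free cash flow = cash generated by operations minus capital expenditures."""
    return operating_cash_flow - capital_spending

# Hypothetical figures, in billions of dollars:
ocf = 2.0    # cash generated by renting out compute
capex = 5.0  # spending on GPUs, data centers, and networking

fcf = free_cash_flow(ocf, capex)
print(f"Operating cash flow: ${ocf:.1f}B")
print(f"Free cash flow: ${fcf:.1f}B")  # negative while the build-out continues
```

The point of the sketch: a company can have profitable unit economics (positive operating cash flow) and still burn cash overall, as long as it keeps plowing more into capital spending than its operations bring in.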
This includes all kinds of building and infrastructure companies – like Nvidia, which makes graphics processing units ("GPUs"), and Cisco Systems (CSCO), which is a networking giant. Of course, to build those data centers, you need wiring, cooling, and electrical power – and a lot of AI spending is going toward companies that provide those things.
The companies at this layer are in full-on boom mode. They boast high revenues and earn big profits. And their stocks show it.
There's just one problem...
This Is Not Self-Sustaining (Yet)
It's clear that the revenue from AI end users is nowhere near the amount being spent on data centers.
The issue is that money isn't flowing through the industry's different layers from the top down.
We still need funding coming into the AI ecosystem at various stages. And each layer is getting this funding from different sources.
At the top, the losses for AI applications are being funded by venture capital.
Move down to the foundational models, and it's a split. For private firms like OpenAI and xAI, the money comes from venture capital. For public giants like Alphabet and Meta, their massive free cash flows from their advertising businesses fund their AI projects.
Hyperscaler funding is also mixed. Behemoth tech companies can fuel much of their own growth with their cash flows. But they're also borrowing.
Despite having $47 billion in cash on hand, Meta sought to raise $26 billion in private debt this summer. Meanwhile, CoreWeave has $14 billion in debt (set against its roughly $2.2 billion in revenue).
The burgeoning private-credit industry is pouring money into AI. UBS Global Research reports that private lending to the tech sector reached $450 billion earlier this year, up $100 billion from the year prior.
However, the AI ecosystem has recently gotten more complicated... and circular.
AI companies with capital are pouring it right back into other layers of the AI stack.
As noted earlier, Nvidia is investing $100 billion into OpenAI. OpenAI will use that investment to... buy chips from Nvidia.
Nvidia has done the same with other companies. It owns nearly 7% of CoreWeave, which uses Nvidia's chips to build its data centers.
These are roundabout deals in which suppliers invest back into their customers, who use the cash to buy more from the supplier.
And this circular nature of AI spending creates risks... It suggests that any crash will be swift and severe.
For industries with a more traditional value chain, changes in the business translate roughly linearly into asset values. If the business prospects for, say, Walmart (WMT) improve a little bit... the stock goes up a little bit.
But this feedback loop makes the system nonlinear... chaotic, even.
For now, it's driving AI higher. And thanks to the almost religious belief in AI from Silicon Valley's tech CEOs, the spending can run for a long, long time.
But tread carefully... because if faith in the AI future does falter, the whole thing can fall apart.
To hear more on the AI value chain – and look at some specific AI investment opportunities – check out today's This Week on Wall Street video, where I sit down with Stansberry Research's Josh Baylin.
You can watch the entire episode on our YouTube page by clicking the image below. Be sure to like and subscribe to get more of our videos.
What Our Experts Are Reading and Sharing...
- Last week, I pointed out that the "bond vigilantes" were punishing world governments for their excessive debts and driving up interest rates. This week, JPMorgan Chase analysts are worried about the "(lack of) structural demand for Treasuries."
- Would you allow someone to record all your phone calls for $40? Lots of people did... then millions of phone calls were exposed on the Internet – every word of every conversation... and the exact phone numbers, too. Here's Stansberry's Josh Baylin on the terrifying data breach (and what it means for AI data-gathering).
- We often talk about what country you should invest in... but what about which state? A fun analysis from Morningstar shows the majority of U.S. outperformance comes from companies on America's Pacific Coast.
New Research in The Stansberry Investor Suite...
AI will be disruptive... but just how disruptive?
Everyone is talking about the next industry that will be wiped out by advancements in AI – manufacturing, transportation, data analysis... the list goes on.
No matter how overdrawn those fears may be, investors are still sending down the shares of companies they think will be caught in the crosshairs.
And this month, Whitney Tilson and the Stansberry's Investment Advisory team have found an opportunity in the madness.
They recommend a company that has returned around 30,000% since going public in 2006. It's a serial acquirer – having acquired more than 1,000 businesses since its founding in 1995. And it's run by a software executive with a ZZ Top-style beard who took seven years to graduate from college. (Shareholders call him "Software Santa.")
If you're a connoisseur of incredible businesses that use investor capital to generate phenomenal returns for shareholders, you may already know which company I'm talking about.
It's famous among a certain group of investors who love studying capital allocation.
But the point is, some investors consider AI a threat to this business. And shares have taken a dip as a result... now trading for an incredible steal.
What they're missing is that AI isn't competition for this business... but rather, a tool that opens up new opportunities.
Stansberry Investor Suite subscribers can read the entire report here.
If you don't already subscribe to The Stansberry Investor Suite – and want to learn more about our special package of research – click here.
Until next week,
Matt Weinschenk
Publisher and Director of Research
What do you think about This Week on Wall Street? Send any and all feedback to thisweek@stansberryresearch.com. We read every e-mail you send in.