Four AI Infrastructure Stocks Poised to Surge as Big Tech Invests in Data Highways

Second-quarter earnings are now behind us. That includes quarterly reports from some of the world's largest tech firms – Alphabet (GOOG), Meta Platforms (META), and Microsoft (MSFT).
And one thing is clear: the artificial intelligence ("AI") boom is entering a new phase.
Last year, the Big Tech firms were focused on acquiring specialized semiconductors – graphics processing units, tensor processing units, and custom accelerators – to power their AI models. But the conversation is shifting.
Today, the big constraint isn't acquiring enough compute to power the AI models. The real constraint is moving all the data those models produce.
Think of a car. The vehicle is useful, but only if there's a vast road system to support it. If AI is the vehicle, then the large tech firms need to scale up the "highways."
We're talking about the infrastructure that moves data: networking equipment, fiber optics, and switches.
Every token an AI model generates doesn't just require compute from specialized semiconductors. It also requires movement. Across racks, rows, buildings, continents, and oceans, AI models shuttle trillions of bits of data every second.
This quarter made it undeniable: AI scale now depends as much on the bandwidth between chips as on the chips themselves.
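To get a feel for that scale, here's a rough back-of-envelope sketch in Python. The cluster size, port count, and per-link speed are purely illustrative assumptions – not figures disclosed by any hyperscaler.

```python
# Back-of-envelope: aggregate internal network bandwidth of a hypothetical AI cluster.
# All inputs are illustrative assumptions, not disclosed figures.

num_gpus = 10_000        # assumed number of GPUs in the cluster
links_per_gpu = 8        # assumed network ports per GPU
link_speed_gbps = 400    # assumed per-link speed, in gigabits per second

aggregate_gbps = num_gpus * links_per_gpu * link_speed_gbps
aggregate_tbps = aggregate_gbps / 1_000

print(f"Aggregate network bandwidth: {aggregate_tbps:,.0f} terabits per second")
# -> Aggregate network bandwidth: 32,000 terabits per second
```

The point isn't the exact number – it's that the internal "highways" of a single AI cluster have to carry a staggering amount of traffic.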
In this essay, we will explore how AI's explosive growth is shifting the bottleneck from compute to connectivity – and why the next big winners might not be chipmakers, but the companies laying the digital pipes beneath them.
Big Tech Continues to Commit Billions to the AI Buildout
Alphabet was the first of the major hyperscalers to report earnings recently, so we will start there.
In the second quarter, the Internet giant spent $22.4 billion in capital expenditure (capex). Management detailed that about two-thirds of the spending went to servers, and one-third to data centers and networking equipment, the technology needed for our "AI highways."
To put this in context, $22.4 billion represents a 30% increase over the previous quarter, and a 70% increase over the second quarter of 2024.
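For readers who like to check the math, here's a quick sketch that backs out the implied prior-period spending and the rough server/network split. It uses only the figures already quoted above.

```python
# Implied prior-period capex from Alphabet's quoted $22.4 billion and growth rates.
q2_2025_capex = 22.4                      # billions of dollars, as reported above

implied_q1_2025 = q2_2025_capex / 1.30    # a 30% quarter-over-quarter increase
implied_q2_2024 = q2_2025_capex / 1.70    # a 70% year-over-year increase

servers_share = q2_2025_capex * 2 / 3     # roughly two-thirds to servers
network_share = q2_2025_capex * 1 / 3     # roughly one-third to data centers and networking

print(f"Implied Q1 2025 capex: ~${implied_q1_2025:.1f}B")   # ~$17.2B
print(f"Implied Q2 2024 capex: ~${implied_q2_2024:.1f}B")   # ~$13.2B
print(f"Servers: ~${servers_share:.1f}B, data centers/networking: ~${network_share:.1f}B")
```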
Meta Platforms is also going bigger – raising its 2025 capex guidance to a range of $66 billion to $72 billion. Hitting the midpoint would mean a 27% year-over-year increase. The goal: build out AI-specific data centers, servers, and networks.
Shortly after Meta's release, Microsoft posted its largest capex ever for a single quarter, mostly to feed Azure AI.
Not to be outdone, Amazon (AMZN) reported second-quarter capex of $31.4 billion, largely tied to Amazon Web Services ("AWS") infrastructure. Management described the current pace of spending as representative of the rest of 2025, which points to a fiscal-year total of roughly $118.5 billion.
If the overall trend is difficult to track with all these numbers, the chart below paints the picture.
Taken together, the hyperscalers are sending a clear message: this is a multiyear infrastructure arms race.
It's tempting to think the latest capex surge is a short-term spike. But the reality is far larger.
Brookfield, one of the world's largest infrastructure investors, estimates that over $7 trillion will be poured into AI-related infrastructure over the next decade – including chips, data centers, energy, and fiber.
Need a second opinion? McKinsey pegs the number at $6.7 trillion in global data-center spending by 2030, with $5.2 trillion earmarked specifically for AI infrastructure.
Put simply, the spending we're seeing from hyperscalers is just the opening act. The capex super-cycle isn't a quarter-to-quarter blip – it's a multiyear, multilayer wave that's only gaining strength.
So which companies are directly benefiting from all this outlay? The ones building the physical plumbing of AI infrastructure.
AI Infrastructure Winners: Fiber, Optics, and Networking Gear Lead the Way
Corning (GLW), best known as a global leader in glass and fiber optics, saw its enterprise optical communications revenue surge 81% year over year – a clean read on fiber, cable, and connectivity demand from AI builds.
Meanwhile, Lumentum (LITE), a key supplier of lasers and optical components used inside data centers and telecom networks, posted 67% year-over-year growth in cloud and networking revenue. The jump was driven by soaring orders for lasers, electro-absorption modulated lasers, and 800 gigabit optical modules used in data-center interconnect and back-end fabrics.
Equinix (EQIX), the world's largest colocation provider, just posted record interconnection revenue of more than $400 million and raised guidance. Its business revolves around running massive facilities where different networks and cloud providers physically connect.
Digital Realty Trust (DLR), another global leader in data-center real estate, booked record deals, with a backlog that has yet to go live.
These aren't one-off wins – they're evidence of a broader trend: AI models don't just demand compute; they demand bandwidth.
There has also been a surprising shift in the networking layer itself. While hyperscalers and component makers are driving demand, the underlying technology stack is evolving, too.
Ethernet's Comeback
Historically, InfiniBand has been the networking standard of choice for AI training clusters and high-performance data centers. It's known for high throughput and low latency. To continue with our analogy, it's like an eight-lane superhighway.
But something surprising is happening. Plain old Ethernet – the same basic standard behind the wired ports on your home router – is staging a comeback.
A new industry group, the Ultra Ethernet Consortium, just released its first spec designed to make Ethernet fast and reliable enough for AI-scale workloads. The idea is simple: take something cheap and universal and supercharge it for cutting-edge computing.
And the upgrades are coming fast. Inside data centers, network links are jumping to new speeds – from 400 gigabits per second to 800, with 1.6-terabit links on the horizon. Think of traffic lanes going from four to eight, then 16.
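Those "lanes" aren't just a metaphor. An Ethernet link's total speed is roughly the number of electrical or optical lanes multiplied by the per-lane signaling rate. The sketch below uses typical modern lane rates purely for illustration – it isn't a citation of any particular spec.

```python
# Rough illustration: Ethernet link speed = number of lanes x per-lane rate.
# Lane rates below are typical modern values, used here for illustration only.

def link_speed_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Total link speed in gigabits per second."""
    return lanes * lane_rate_gbps

print(link_speed_gbps(4, 100))   # 400G-class link: 4 lanes at 100 Gb/s each
print(link_speed_gbps(8, 100))   # 800G-class link: 8 lanes at 100 Gb/s each
print(link_speed_gbps(8, 200))   # 1.6T-class link: 8 lanes at 200 Gb/s each
```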
The companies making this possible include Broadcom (AVGO), which designs the switch chips that direct data traffic; NVIDIA (NVDA), which is packaging Ethernet into its full-stack AI networking solution called Spectrum-X; and Marvell Technology (MRVL), which builds the optical engines and signal processors that let data fly down fiber at terabit speeds.
In plain English: these innovations mean AI chips can talk to each other faster and more smoothly, cutting down bottlenecks and squeezing more performance out of every expensive processor.
What to Watch Next in AI Infrastructure Growth
While the demand is there and the uptrend has clearly begun, the next few quarters will offer some telling signals about whether this infrastructure boom can sustain its momentum.
Ciena (CIEN) reports on September 4, and investors will be watching closely for signs of how much of its growth is coming from 800 gigabit optical shipments and how its business mix is shifting toward cloud and data-center interconnect. Just as important will be management's commentary on tariffs – Ciena has already warned that current tariff structures could add roughly $10 million per quarter in extra costs.
Lumentum's story hinges on cadence. After posting 67% growth in cloud and networking, the key question is whether that momentum can continue and how concentrated its hyperscaler customer base has become.
Amazon and Alphabet will also be under the microscope. For Amazon, the key is whether its AWS-driven capex – already running at more than $30 billion a quarter – can keep scaling without pressuring margins, and how much of that spend is flowing directly into networking and interconnect.
Alphabet, meanwhile, has guided to record spending levels through 2025, with roughly a third earmarked for data centers and networking. Any updates on its capex mix, the pace of cloud growth, or the strain of AI workloads on its global backbone will be crucial signposts for investors trying to gauge just how durable this infrastructure cycle really is.
The Next Phase of AI Infrastructure: Plateau or Launchpad?
AI models are still improving, but each incremental leap now requires staggering amounts of power and hardware.
Scaling laws that once delivered leaps between ChatGPT models now demand city-sized electricity loads and nation-sized budgets.
This doesn't mean AI progress has stalled. It means the spotlight is shifting. Instead of headline-grabbing demos of AI-generated videos or other flashy applications, the real story over the next few years will be the integration of today's tools into the economy.
Most industries still run on email, spreadsheets, and legacy workflows. The gains will come as AI slips into those processes – reviewing medical scans before a doctor does, drafting code before a developer reviews it, resolving support tickets before they ever reach a human.
In other words, the next phase isn't superintelligence; it's super-integration. Productivity gains will be real, but they'll show up in back offices and factory floors.
And the infrastructure wave we've been discussing here – optics, switches, and colocation – will form the backbone that makes this broad deployment possible.
For investors, the big opportunity lies in the companies and technologies that will facilitate AI's integration into every corner of the economy.
Regards,
John Robertson
P.S. Many of the companies mentioned today are covered in Stansberry Research's Stansberry Innovations Report. All are perfectly positioned to continue profiting as this plays out.
And their upside is even bigger when you consider another, less commonly discussed, catalyst for growth...
In fact, we recently released a special report all about how these four giant innovators are powering a new age of currency. And as editor Eric Wade explains in the report, he believes its rollout could lead to the biggest monetary reset in 100 years. Get all the details for yourself right here.