
A New AI Model... And Another... And Another


Dear subscriber,

In case you haven't heard...

A new open-source artificial-intelligence ("AI") model was released earlier this month. It outpaces the previous state-of-the-art models – built at great expense – and does so in a dramatically smaller and cheaper package.

No, I'm not talking about DeepSeek's R1 model.

I'm talking about SmolVLM... a family of vision language models (or "image to text" models), which take in a picture and describe what's in it.

The company behind SmolVLM, Hugging Face, announced the two newest models, SmolVLM-500M and SmolVLM-256M, on January 23.

The models can describe pictures and videos... answer questions about scanned documents... and even answer questions about charts and diagrams.

Now, they are smaller than the company's previous model, Idefics, which was released 17 months ago. The old model is 80 billion parameters in size... while SmolVLM-256M, for instance, is just 256 million parameters.

(Parameters are the internal variables within an AI model that are adjusted during training to improve performance. A smaller parameter count typically means the model is less complex and more efficient.)

That said, even though SmolVLM-256M is roughly one-three-hundredth the size of Idefics... it runs better.
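A quick back-of-the-envelope check of that "one-three-hundredth" figure, using the rounded parameter counts mentioned above:

```python
# Rough size comparison between the two models (rounded figures).
idefics_params = 80_000_000_000  # Idefics: 80 billion parameters
smolvlm_params = 256_000_000     # SmolVLM-256M: 256 million parameters

ratio = idefics_params / smolvlm_params
# ratio comes out to roughly 312 -- in the same ballpark as
# "one-three-hundredth the size."
print(f"Idefics is about {ratio:.0f} times larger than SmolVLM-256M")
```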

It outperformed its predecessor in things like text recognition, document and image understanding, and math and science reasoning.

The SmolVLM models are clearly impressive. According to Hugging Face, SmolVLM-256M is the smallest vision language model in the world. One AI observer even noted that the SmolVLM models could run on "some crazy things, like a smart fridge."

But it's not shocking.

This is the way technology evolves. AI models are going to get better, smaller, and cheaper. I could have picked any number of new AI releases, in any week, to demonstrate that fact.

That's why the market's reaction to the release of DeepSeek's new R1 model was quite surprising.

I'm sure you've heard about DeepSeek. This is one of those financial stories that takes over the news.

The Chinese AI startup, which is essentially a side project of a quantitative hedge fund, released a model called R1 that it claims cost only $5.6 million to train. It's similar in capabilities to OpenAI's o1, which cost billions to train.

This led investors to posit that anyone can make cheap AI models... and that they won't need massive data centers full of graphics processing units to train them.

So AI companies sold off. Nvidia (NVDA), caught in the panic, fell 17% in a single day...

Even electricity-generation companies that had been riding the AI trend, like Constellation Energy (CEG), got whacked...

As a whole, the S&P 500 Index fell about 2% before recovering about half of its decline.

But as I said, the tech news shouldn't have been so shocking...

DeepSeek has been releasing powerful, affordable AI models for years, starting with DeepSeek Coder back in November 2023.

On a more granular level, DeepSeek released R1 on Monday, January 20, and published a detailed paper on the model on Wednesday, January 22 – yet the market didn't crash until Monday, January 27.

So much for the market being efficient.

Even then, the market should have been expecting cheaper models.

Big AI companies like OpenAI have focused on improving their models' capabilities. Since they've been backed by billions of dollars in venture-capital funding (or from their ad businesses, in the case of Meta Platforms and Alphabet), they had no reason to be cost-conscious.

All it took was one smart company that wanted to build a cheaper model... It was only a matter of time.

Now, I've been cautious about the AI frenzy in these pages... mostly when it comes to Nvidia's valuation.

But rather than making me panic at DeepSeek's success, the past few weeks have made me more bullish on AI.

For one, the AI reasoning models released recently – like OpenAI's o1, DeepSeek's R1, and Google's Gemini – show a real step-up in the intelligence these models deliver. They don't just parrot back things they have read on the Internet. They think on a higher level (or, at least, appear to).

They still deliver errors and misinformation, sure. But they show that the "scaling problem" won't slow down progress as soon as some feared.

(Much of AI's progress came from ingesting more data... but the models have just about consumed all human knowledge. So many worried they would soon hit a wall.)

Second, if AI gets cheaper, people will use more of it.

(This is called "Jevons paradox." And if you mix in the same online economics and tech circles I do, you're already sick of hearing the term.)

Depending on the computing power needed, asking o1 a single question could cost you more than $1,000. But with R1, it costs just a few dollars.

With DeepSeek's AI breakthroughs, we may not need to spend billions training big new models.

Rather, if AI gets cheaper and more efficient, it will be built into more things. Then, we'll spend billions asking the already trained models questions. (In tech terms, we'll spend on inference rather than on training.)

And it's all but certain that the AI models themselves will become commodities. No one will care if they're using OpenAI's GPT, Meta's Llama, Google's Gemini, or Anthropic's Claude.

They'll all be good. You just use whichever one works best for you.

As the week went on, the market caught on to some of these longer-term thoughts...

Markets calmed. Nvidia clawed back some of its decline, as you can see in the chart above. But more interestingly, software companies spiked...

The thought here is that if AI is cheaper and commoditized, the companies that use it will see bigger benefits than the companies that make it.

The parallel to the dot-com boom is clear...

During the birth of the Internet, everyone went crazy about the future. Hardware and fiber-optic companies raised and spent billions building the "information superhighway" – and investors were eager to come along for the ride, pouring money into their stocks.

But before long, folks realized that no matter how or where you logged on to the Internet... it was all the same Internet. No one really cared who your Internet provider was.

Competition was fierce, of course. But those businesses earned a return on capital that reflected their status as a commodity business.

Instead, the companies that built on top of that infrastructure – like e-commerce and social-networking companies – became the real winners.

Fast-forward to the current AI boom, and we're starting to see the same thing happen as AI technologies get commoditized. The real value lies elsewhere.

It's looking truer each day that AI will be as big a revolution as the Internet. And that means the thesis for picking AI winners and losers will evolve rapidly, with a new batch of winners likely to emerge in the near future.


What Our Experts Are Reading and Sharing...

It would appear that this blog post, by independent analyst Jeffrey Emanuel, was what turned DeepSeek's R1 model from just another low-profile announcement into a catalyst for a market crash. It's a long, technical analysis, but you'll learn a lot (including all the reasons why Emanuel believes Nvidia makes for a good short today).

This week, the Federal Reserve decided to keep interest rates unchanged. When asked whether President Donald Trump influenced the Fed's decision, Chairman Jerome Powell responded with a "no comment" and said that, for now, the central bank is in a wait-and-see mode for the economy. Here's our own Corey McLaughlin's take on the matter in the Stansberry Digest.

A recent Wall Street Journal article reports that Trump and Elon Musk are trying to clear out government bureaucrats with layoffs and questionable "buyout" offers. Everyone wants a better, more efficient government. But I think it'd be good for everyone to ponder this argument from rationalist blogger Scott Alexander – whom I consider to be an unbiased and fair voice. Read his post titled, "Bureaucracy Isn't Measured In Bureaucrats."


New Research in The Stansberry Investor Suite...

This week, we're unlocking a special piece of research for Investor Suite subscribers.

While AI is an important force driving markets today, interest rates are really the only story you need to know... as they determine the price of every asset in the world.

And today, in a special report adapted from the January issue of The Ferris Report, our colleague Dan Ferris warns subscribers to beware the "bond vigilantes."

Bond investors, as Dan explains, are concerned about current government policy and its effect on their ultimate nemesis: inflation as a result of government spending and borrowing. As a result, they're likely to push bond prices down, and interest rates up, in the coming months and years.

This, in turn, will send asset prices down across the board.

Dan is one of the deepest thinkers at Stansberry Research. He knows more about financial history (and often the financial future) than anyone else. And this essay is sure to shed some light on how the world and the markets really work.

Dan starts by covering the recent moves in the Treasury market. Then, he backs up his analysis by examining the history of multiple interest-rate regimes and different bond vigilantes to see what's in store this time around.

Importantly, he reveals the 12 trends you need to watch in the market today – because they're likely to reverse dramatically in the years to come.

Stansberry Investor Suite subscribers can read the entire report here.

If you don't already subscribe to The Stansberry Investor Suite – and want to learn more about our special package of research – click here.

Until next week,

Matt Weinschenk
Director of Research

What do you think about This Week on Wall Street? Send any and all feedback to thisweek@stansberryresearch.com. We read every e-mail you send in.
