
In This Episode
In this week's Stansberry Investor Hour, Dan and Corey welcome John Sviokla back to the show. John is an author, executive fellow at Harvard Business School, and co-founder of GAI Insights – an industry analyst firm that provides leaders with strategies for successful AI integration.
John kicks things off by recapping his analysis on AI in the markets since he last spoke with Dan and Corey and sharing the changes that have occurred. He then discusses his focus on DEF 14As to gain insight into what's incentivizing management. He mentions that more CEOs have adopted AI usage – however, there are two main groups: the leaders who are advancing rapidly and the laggards who are making slow progress. And he shares the many variables that impact folks' finances today...
There's no question that the majority of labor is getting less of the economic value out of our society. Forget the wealth effect. That, too. The other thing I think people totally misread, one of the core things, which is that it's not wealth differential, it's risk differential. So, the average worker today has longevity risk. They don't have enough money. They have health care risk... There's education risk. So those risks – longevity risk, health risk, education risk – those are the things that people wake up [about] at night.
Next, John expresses his desire for the funding of a public library for AI so users have a database to train their models. He also states that the U.S. has lost ground and intellectual property to China in the AI field and other areas due to companies wanting market access. And he says that using AI is something that needs to be experienced to see how useful it can be, especially with automation...
If you [don't] have an experience talking with this silicon hive mind that has read literally everything that's been written, you don't know what that's like... I think the ripple effect of that productivity at the individual level is going to be fantastic. But then it's like special forces. We used to compete with regular army. And special forces, we cross-train, we upskill, we take traditional technology, and we modify it to make it better. And just like with special forces, what regular army might have taken 100 people to do, special forces might be able to do with six.
Finally, John provides advice for parents who want to know what career opportunities are available for their kids. There are four areas that he thinks are most crucial in today's tech-driven world. John discusses robots in the tech industry and gives his praise for Waymo. He then reflects on the sectors that he's most interested in. And he believes that folks are wrong about AI being in a bubble – rather, he thinks that there's overinvestment in that area...
OK, first of all, let's separate two things. Is there overinvestment and overpricing in certain things? Of course, it's a market. On the other side, is there a bubble in adoption? Absolutely not. Adoption is accelerating. So pricing is different than adoption. Then the second thing is, will everybody make money? Of course not. That's why it's called investing.
Click on the image below to watch the video interview with John right now. For the audio version, click "Listen" above.
(Additional past episodes are located here.)
This Week's Guest
John Sviokla is the co-founder and chair of GAI Insights. He was a partner at PwC, vice chairman of Diamond Technology Partners, and a Harvard Business School professor, where he pioneered AI research and courses. He is the co-author of The Self-Made Billionaire Effect: How Extreme Producers Create Massive Value.
John earned his doctorate, master's, and bachelor's degrees from Harvard University. He was named an executive fellow at Harvard Business School to develop cases for the Master of Business Administration and executive education programs and is a Forbes contributor.
Dan Ferris: Hello, and welcome to the Stansberry Investor Hour. I'm Dan Ferris. I'm the editor of Extreme Value and The Ferris Report, both published by Stansberry Research.
Corey McLaughlin: And I'm Corey McLaughlin, editor of the Stansberry Daily Digest. Today we talk with Dr. John Sviokla of GAI Insights.
Dan Ferris: John is an AI guru. And get a pen and a piece of paper because you will want to take notes. He's just that kind of a guy. He's a great talker. We're going to throw a few questions at him, start him up, and he's got a lot to say. So, get ready. We're going to talk with John Sviokla. Let's do that. Let's do it right now.
John, welcome back to the show. It's good to see you again.
John Sviokla: Dan, great to be here.
Dan Ferris: So, Corey and I are going to pepper you with lots of good questions. But I wanted to start by telling our listeners – if you could just tell our listeners sort of what you do because most of our guests, they are not like you. Most of our guests are hedge-fund managers and traders and analysts and equity analysts, all those people. You're a little different. What do you do, John?
John Sviokla: Yeah, well, really kind of a lifelong passion of mine is to try to understand what – how do machines think, how do people think, and what does that mean for business, society, and individual productivity? And so, I started that as a professor at Harvard Business School many years ago, and what I do right now is run a company called GAI Insights, and we believe that every firm is going to have to become a hybrid firm, a hybridization of human workers and digital workers. And our purpose in life is to make people aware of that and to manage that transition in a productive way. So, we do research, training, strategy, and community to help people accelerate that.
Dan Ferris: The training sounds fascinating to me. Maybe we'll get into that.
John Sviokla: Sure.
Dan Ferris: But I wanted to revisit something that we discussed last time. You had told us that folks who work with words, images, numbers, and sounds – the WINS – W-I-N-S – framework – would be most impacted by AI. And then you put some numbers on it for us as investors. You said, "We did some research with Valens Research. We looked at which companies are WINS-intensive – lots of words, images, numbers, and sounds," and, "We think 50% of the market cap and 50% of the profit of the entire publicly traded market is up for grabs with generative AI and AI." Do you think we're still at 50%, or do you think we're at more?
John Sviokla: Maybe more, because I think there's a knock-on effect of lots of market cap getting created to build infrastructure to support AI. So, it's a secondary effect. The primary effect is clear. I mean, you look at what happened to Chegg's stock price. You look at what happened to Gartner's stock price. You look at what happened to Accenture's stock price. Gartner's down 50%, Accenture's down 35%, because the market is not yet convinced that they know how to be a hybrid organization.
Dan Ferris: Do you think they could – take Accenture, which we mentioned last time, or whichever one you want for an example. Do you think they can? Do you think they're on their way?
John Sviokla: I think Accenture can. Gartner, I'm not so sure. And I'd be a little careful because we compete with Gartner in some areas – so, I'm not bashing a competitor here. But the reason is, if you dig into the DEF 14A and you understand the [compensation] for the board and the senior executives – and I'm on a public board, I've been on a number of public boards and comp committees and so forth – their short-term and long-term compensation is based on two numbers: [earnings before interest, taxes, depreciation, and amortization ("EBITDA")] growth and the growth of long-term contracts. And the problem with that is that if you're Gartner, you're going to have to go from 24,000 people probably down to 5,000 people, and you're going to have to build a bunch of assets – unless you grow phenomenally. OK, so whatever – I'm talking about the same revenue.
Dan Ferris: Right. Other things they can do.
John Sviokla: Yeah, you're going to have to build assets that aren't going to have a return on investment within a given year – it's going to take two, three, four years for these assets – data assets, process assets, AI assets – to pay off. So, if you're incenting people on EBITDA, that penalizes the investment in those assets. And so, you're going to be disincented to actually build the assets that are going to make you more productive. So, unless I see a change in the senior management compensation at a Gartner, I don't think they're going to get there fast enough.
Accenture, I think, understands it because Accenture has an asset measure at the very top of the house in terms of return on assets. If I were running Gartner, I would give it EBITDAR, where the add-back on EBITDA would be research and development and the creation of software and data assets, and I would amortize those over a four-year period, because that's the operational reality of building cognitive assets.
Dan Ferris: I see. Incentives matter, in short.
John Sviokla: Well, especially at the top of the house. We're talking – for your viewers, the DEF 14A lays out my – I'm on the board of InfuSystems. It lays out my comp, everybody else's comp. When I was a named officer at Diamond, same thing, even though it was inside the firm. So, yeah, incentives drive behavior. Funny how that works.
Corey McLaughlin: Yeah, executive incentives aside – which is really important, and a lot of people overlook it in the investing world – the last time we had you on was November 2024. You said at the time that CEOs and executives didn't really have any hands-on experience with AI tools for the most part.
John Sviokla: Yes. Yes.
Corey McLaughlin: Has that changed at all? Or is that still the case?
John Sviokla: Yeah, it has improved. It's changed. But what we're seeing is a bifurcated market. So, there's leaders and there's laggards, and the leaders are moving ahead faster. So, anytime you see an average number – MIT had this study, which, if you look at it, the methodology is really terrible. But let's just take their finding: 95% of generative AI stuff doesn't yield value. By the way, they're an outlier. Nobody else is reporting that. Wharton's reporting over 70%, and OpenAI – anyway. But let's just use that as an example. The problem is that you don't want to look at the average, because there's a learning effect here. And one of the big differences – people talk about AI and generative AI being a foundational technology, a general-purpose technology. I think that's true. But it's really not a technology. It's a capability.
And the distinction I make there is a technology can be bought, implemented, and you get the yield. So, if I buy a faster welding machine with robots, I can do the [return on investment], I can get this many more auto frames through it, the whole routine. When I'm talking about AI or generative AI, I'm talking about how people think, how the machine thinks, and I'm evolving that over time. It's much more organic. So, I'm growing a capability versus buying a technology. And we still see only a minority of companies growing the proper capability. The problem with this, Corey, is that the market is going to mark your company – if you're public, or if you're in a private transaction, you have public comparables. The market is going to discount your company – and we're seeing this – if they're convinced that you have the old model, not the hybrid model.
That's what's happening at Gartner. That's what's happening at Chegg. And so, Gartner is still a very healthy company in terms of revenues coming in and profit going to the bottom line. But its valuation has been hugely discounted because the market believes they don't have the right model anymore.
Dan Ferris: Yeah, that is the stock market looking-ahead effect that you've heard so much tell about – all my life, anyway. And we'll see. I guess it remains to be seen.
[Crosstalk]
John Sviokla: The other thing, Dan, we're seeing it in private equity, because private equity – we're serving a number of private-equity firms, and private equity really looks at generative AI and AI, I think, as something to help with growth but also to radically decrease labor. And when you create innovation, there's only three places the value can go. It can go to the customer in a lower price, like with Craigslist – most of that stuff's free. It can go to the investor in a higher return. Or it can go to the labor in either training or higher comp.
Private equity, of course, gives it almost all to the investor and the customer and very little to the labor, except at the top of the house. Traditionally. And so, what we're seeing is that there's a decoupling between hiring labor and revenue growth. And so, that's the premium. So, you look at somebody like Cursor, the new AI startup focused on software creation. They are at $5 million per employee in terms of revenue. OpenAI is over $3 million. So, we're going to see massive revenue growth separate from labor growth.
Dan Ferris: Yeah, I'm glad you brought up labor because that, of course, is an enormous part of the narrative from all kinds of people, from the – what was his name, Hinton? The Godfather of AI?
John Sviokla: Yeah, Geoffrey Hinton.
Dan Ferris: From everybody, they're saying it's going to be apocalyptic. It'll put millions of people out of work. Historically, of course, that has been predicted again and again, and generally transformative new technologies have created whole new industries, multiple new industries sometimes. Where are you on this?
John Sviokla: Well, first of all, we have to recognize, I think, two things. The first is the lack of enforcement of antitrust, which I think has been a bad thing for competition. I'm a real capitalist. I think you need to enforce antitrust. And the Google thing – that they let them keep the browser – I think is nuts. OK. Yes. And so, I think there's that.
The second thing is if you look at the percentage of profit that's going to labor, it's decreased by about 50% over the past 20 years. So, labor is reemployed but they're getting less of the value. So, I think it's really important when people say, "Hey, there's new jobs." Yeah, but those new jobs don't pay as well. And that's absolutely true. And the middle class has been going down in terms of standard of living and wealth for the past 40 years. So, I think that's an important undercurrent to remind folks.
The second thing is that we've had – I think we've had two great labor transformations in this country. One was the Industrial Revolution and the second was the Computer Revolution. The Industrial Revolution – we don't teach this anymore, but it was incredibly violent. I live here in Chicago, and there were riots down here, 12-year-olds working in factories, no benefits, nothing, 60-hour weeks. And it was bloody. People killing each other, the president sending in the troops, the whole routine. So, we forget about that and we don't teach it anymore.
The second one was the Computer Revolution. You think about insurance companies, for example – they literally employed thousands of accountants and bookkeepers and all that other stuff. All gone. But in the World War II period, we were spending 12% of our federal budget on new innovations, like rockets and the Internet and stuff like that, so you had all that investment – which is now down to below 6%, so less than half in terms of forward-looking stuff. The other thing is we had massive investments in labor liquidity. What happened was they were worried because over 10 million guys were coming back from Europe. They were afraid of political instability in the United States because these guys had seen all this stuff. They'd seen communism and the whole routine.
OK, so we had the GI Bill, which paid for education. It also made for housing. The relative cost of upskilling yourself was trivial compared to today. You could do it for less than one year's savings from a job. Now, you basically have to save for 20 years to be able to afford a college education. You didn't have the massive health care risk. You didn't have both spouses working. So, labor liquidity was very high. "Hey, I want to go move out to Arizona to get a job. I used to live in the Bronx." That's a lot harder now, because if I've got to have health care, I have to have two people working, the whole routine. So, labor liquidity is way down – it's more like it was in the Industrial Revolution than in the postwar period. There's this great quote from William Levitt, the guy who did Levittown, the cheap houses outside New York. He said – they were worried about –
Corey McLaughlin: Oh, yeah, near where I grew up.
John Sviokla: Yeah. They were worried about this issue of communism and political unrest. And Levitt's quote went something like this. This isn't exactly right. He said, "Hey, look, if a guy has a house, a mortgage payment, a car payment, three kids, a dog, and a wife, he's going to be too goddamn busy to be a communist."
[Laughter]
And there's a lot of truth in that.
Dan Ferris: There is.
John Sviokla: So, my worry is that we don't have those same supports – if anything, we're disinvesting in all those labor-liquidity things. Because, again, I'm a capitalist. I want labor to be liquid. I want it to be able to move around. And all the variables are much, much worse now. They're much more like the first revolution, not the second one.
Dan Ferris: And we have an openly socialist mayor of the biggest city in the country.
John Sviokla: Yeah, if you – if we really talk about socialism, he's not a socialist.
Dan Ferris: No?
John Sviokla: He's not going to try to grab hold of the – look, we have a tremendous amount of socialism. The military-industrial complex is largely socialized – cost-plus business, very little bidding. We have that. We have the government, which buys most of the health care. And $12 billion is going to the soybean farmers. Farming is socialist in this country. Water is socialist in this country. Roads are socialist in this country. Those are the real socialist things. He's not going to touch any of that stuff. And he's also not going to have state-owned means of production. So, I don't know why he calls himself a socialist. If you know anything about socialism, he's not one.
Dan Ferris: OK, so to him socialism looks more like free stuff – he wants, well, state-owned grocery stores or city-owned –
[Crosstalk]
Corey McLaughlin: City-owned grocery stores.
John Sviokla: Yeah, you think that's going to make a dent? Come on. State-owned grocery stores compared to Target, Walmart, and Kroger? Forget it. Yeah, I just –
Dan Ferris: Yeah. No, I don't think it's going to succeed. I'm just –
John Sviokla: No, no. But he's not saying – a real socialist would say, "Hey, look, we need to take over Google," because Google is absolutely an essential facility. It's got no governance except – that's what a socialist would do. What did socialists do in England? They took over the coal mining. People throw around "socialism" and "communism" and they don't even know what the heck they're talking about.
Dan Ferris: Well, it's a – yeah, I'm aware that it's a sales pitch, but so is conservatism and progressivism and the others. They're all a sales pitch to gain control and get power and stuff. So, to me, they're all the –
Corey McLaughlin: Yeah. I think that going back to the – why the messaging is appealing though, say, to New Yorkers –
Dan Ferris: That's the point.
Corey McLaughlin: – is kind of what you're saying, John, about just the situation of people with labor. People trying to keep up and afford things, afford to live, basically.
John Sviokla: There's no question that the majority of labor is getting less of the economic value out of our society. Forget the wealth effect. That too. The other thing – I think people totally misread one of the core things, which is that it's not wealth differential – it's risk differential. So, the average worker today has longevity risk. They don't have enough money. They have health care risk, which was never – look, one of my kids has [multiple sclerosis ("MS")], and the cost of his MS medicine is $230,000 a year. So, yeah, this personalized medicine stuff and everything, it's expensive. And so, there's that. There's education risk – the debt that kids have. I really believe in bankruptcy, because bankruptcy is a critical part of capitalism. The fact that student loans survive bankruptcy, I think – forget it. Look, if you're a bank and you give some 20-year-old a quarter million bucks, it's your friggin' problem. It's not my problem and it's not his problem. That's ridiculous. I think of that as socialism. What the heck? Why are we supporting the banks?
Dan Ferris: Well, it is. Well, it's –
John Sviokla: Huh?
Dan Ferris: Yeah, they're – they don't have – the bank doesn't have any risk. The loan is guaranteed.
John Sviokla: It's ridiculous. Let's go back to capitalism. You loan somebody something, you take on the risk. That's how you make the money.
Dan Ferris: Right.
John Sviokla: Anyway, so those risks, longevity risk, health risk, –
Dan Ferris: Education.
John Sviokla: – education risk, those are the things that people wake up at night about.
Corey McLaughlin: Right.
Dan Ferris: I know I do. And I've got a great situation.
John Sviokla: Absolutely.
Corey McLaughlin: Me too. Yeah.
John Sviokla: Yeah. I did an age calculator. I'm going to live to 90, according to the age calculator. OK. And I've probably got a 1-in-3 chance of having a mental disease – some kind of dementia or some sh** like that. Well, that gets expensive. In the old days I'd be smoking pot, I'd be drinking booze, and I'd die at a reasonable 75.
[Laughter]
Dan Ferris: Right. Right. I'll send you a case of cigarettes, John.
John Sviokla: Well, actually, what the Chinese –
Dan Ferris: Let me know when the dementia kicks in.
John Sviokla: No, but that whole thing, –
Corey McLaughlin: I can send you some booze as well. Let us know when the dementia kicks in.
John Sviokla: – we laugh, but the French, as they put in their antismoking campaigns, smoking consumption went like this [down] and the body mass index went like this [up]. I talked to some folks at a large industrial company I don't want to disclose. What a lot of people don't know is smoking cessation actually increased their health care costs, because instead of dying of lung cancer fast, people die of diabetes slow. So, smoking cessation is actually bad for health care costs, because people still put stuff in their mouth, but the thing they're putting in their mouth is food.
Dan Ferris: I've never heard this, –
Corey McLaughlin: Me either.
Dan Ferris: – but it makes perfect sense, doesn't it?
Corey McLaughlin: Fascinating. Yes.
John Sviokla: Yes. It's crazy.
Dan Ferris: And I have to tell you, my wife decided just to stop drinking. She got some values on a liver test a couple of years ago and she said, "I don't like that." And she just stopped on a dime, because it wasn't the most important thing in the world to her. And her son did the same thing a year ago. And they both report exactly what you're saying. They're both practically candy-aholics at this point. They look great, actually – they're both in better shape than ever, but they've definitely substituted the candy. In their case, they're cool, but I could see it getting way out of hand with a lot of people. So, it makes sense.
John Sviokla: Yeah. No, it's nutty. Anyway, so I think those – the risk differential is at least as important to me as the wealth differential.
Dan Ferris: Right. So, we got into this by talking about labor. And Elon Musk – his famous quote is that in however many years nobody will have a job and everybody will get everything they need, or something like that. But that's a huge range – from the end of scarcity to extinction. I don't know if you saw that chart in the Financial Times – actually, the Dallas Federal Reserve put out the chart. It was like extinction on one end and the end of scarcity, meaning we're all just fine, on the other end. A wide range of outcomes means high risk. Treasury bills have a small range of outcomes. Mining stocks have a big range of outcomes – they're much riskier. You get it. And AI from that perspective looks extremely risky to me.
John Sviokla: Yeah. Look, I think –
Dan Ferris: And I think labor is in the – that's in the crosshairs. Anyway.
John Sviokla: Yeah, I think, look – I'm a real believer in absolute wealth, not relative wealth. And I think a lot of wealthy people would rather have relative wealth than absolute wealth. What I mean by that is, take a small amount of the profits that are now going to capital and reinvest those in forward-looking stuff like we used to. Pretty much every major invention that made great companies was started by the government or by academic institutions. The Internet, GPS, all the drugs we use, modern farming – all of it began with research that was done by the government or by academic institutions, or what I call gift-culture institutions. All of it. Look, Elon Musk – you and I have invested in Elon Musk at least twice, probably three times. First, we invested in all the technology he used to build his cars, the battery technology and all that stuff. Second, we gave him a direct loan when Tesla was about to go bankrupt. Third, the NASA contracts kept SpaceX afloat. So that's our tax dollars. He is a leader that – socialism has saved him. Right?
Dan Ferris: Yeah.
John Sviokla: And so, the fundamental science is almost universally built by the government. And we're disinvesting in that. So, that's one thing.
The second thing is take a little bit and reinvest in the labor the way we did with the GI Bill. That makes my capital more productive because those people are building stuff that's productive and we're winning on a global basis, so they're getting economic rents from the rest of the world. And then I have places to put my capital because those people are now consumers. This current approach is weakening both of those, which means that my capital in the future will be chasing returns more and more.
The question is, how do I chase those returns? Well, if I don't have growing productivity, the way I chase returns is to seek scarcity. So, I get hold of water, I get hold of oil or stuff like that, if productivity is not happening. And so, I want the world to be like Henry Ford's. Henry Ford doubled the wage of his people from two and a half bucks to five bucks for two reasons. He wanted to corner the market on better talent and he wanted to have a consumer there. So, my capital is more productive if I have a healthy middle class that's consuming and being more productive. So, I'm going to make more – that's why absolute wealth. Now, if I wanted relative wealth, I could go back to 1066 and sit in the Tower of London freezing my butt off in a dead animal skin, but I have three dead deer in the closet that nobody else has, so I'm the wealthiest guy there.
Dan Ferris: I like the absolute thing better.
John Sviokla: I like the absolute thing better, too.
Corey McLaughlin: That would be better.
[Crosstalk]
Corey McLaughlin: The deer I've got outside here aren't very happy with that comment, by the way.
John Sviokla: Hey, by the way, what do you think the main course was at the first Thanksgiving? And it's not turkey.
Dan Ferris: Oh, not turkey? What was it? I don't know.
John Sviokla: It was deer. Venison.
Dan Ferris: It was deer. OK.
John Sviokla: Yeah.
Dan Ferris: Makes sense. All right.
John Sviokla: Yeah, but I think that – so, back to your question. Yes. If we do not invest in new science – what Ben Franklin did with the public library – there is no public library for AI right now. We need one. And more funding of fundamental research that open sources data, weights, models, availability. There's some. There's not enough, in my opinion. The Chinese are actually doing much, much more of it. And this is really scary on a global basis, because with our new aggressive, antagonistic stance toward our allies, our traditional allies, what's happening is the Europeans are now building Chinese AI into their products. So, Daimler-Benz is using Qwen models.
And this is, I think, a big mistake, because for the past 60 to 80 years, we've derived a tremendous amount of security and economic value by having our partners build in our telecommunications. I don't know if you remember, the Europeans pushed back against the Huawei switches, for example, when the Chinese were trying to penetrate Europe – we put pressure on them: "Don't buy Huawei. Buy American stuff." We've done it with our software. We've done it with our computer chips. What we're doing now is we're saying, "OK, for that next generation, go ahead and build the Chinese models in. We don't care." I think that's a strategic issue, both from a productivity standpoint and from a national security standpoint. And we are actually making it super easy for the Chinese to infiltrate our traditional allies.
Dan Ferris: OK, I need you to flesh this out for me a little better. I'm not completely grasping it.
John Sviokla: Sure. Say I'm Daimler-Benz, OK?
Dan Ferris: Right.
John Sviokla: Now, in AI, there's two big leaderboards. One is the proprietary models – you have to pay for them. OpenAI, that stuff. And then there's the open models: DeepSeek, Qwen, Z, and so forth. If you look at the leaderboards and you take the top 10, there's only one Chinese company in the top 10 of the paid models, and there's only one American company in the top 10 of the open models. Now, "open" is a little tricky. There's data, there's weights, and there's models. The Chinese are not open-sourcing the data, but they are open-sourcing the weights and the models. So, if I'm Daimler-Benz and I want some intelligence in my dashboard and I want to run a 70-billion-parameter model, I can use the Chinese models, drop them right in there, and I don't have to pay IP back to the Chinese. If I go with a couple of the American models, I do have to pay IP back to the Americans.
In addition, when the president is saying things like NATO's gone and all that other stuff, of course I'm going to go to the Chinese. Who else am I going to go to? I've got Mistral, I've got Aleph Alpha out of Germany, and then I've got Falcon out of the [United Arab Emirates]. I've only got two or three options for anything like a good model. I've got the Chinese, who are going open, and the Americans, who are going closed. I don't think that's great strategically for us.
Corey McLaughlin: It's ironic, too, to hear that: U.S. closed, China open. Or – and –
Dan Ferris: Well, except that China, they have – the idea of intellectual property is just –
John Sviokla: Well, it's a joke.
Dan Ferris: It's non-existent, basically. So, it makes all the sense in the world.
John Sviokla: It's the same thing the Americans did. Moody Street, the main street in Waltham, Massachusetts, where the American Industrial Revolution started, is named after Paul Moody. The Charles River drops three feet there, so that's where they first put the mill, and then they went over to the Merrimack because it dropped 13 feet – for more power. Anyway, Paul Moody's claim to fame: he was an American engineer, and when the shuttle loom was making British weaving way more productive, if you left England with blueprints of the loom, you would be put to death.
Dan Ferris: Whoa.
John Sviokla: If you were caught. Legend has it that Moody memorized it, came back to the United States, and built it here, not paying any royalties back to the Brits. And the same with books – Charles Dickens and all that other stuff. We stole all their stuff and just published it locally.
So, anyway, our problem is that we let the Chinese in for market access. I talked to a bunch of folks in the intelligence agencies. They would go to the big companies – you can imagine who they are: aerospace, industrials, and so on – and say, "Hey, the Chinese are stealing all your stuff." And the companies wouldn't raise a stink, because they still wanted access to the Chinese markets. I think that's where the government should have stepped in and said, "No, no, no, you can't – because guess what? American consumers and American taxpayers built that IP. You can't simply give those decades of intellectual property away for your incremental market access." Which is what they did.
So, let me use a specific example. I forget the name, but there's a Chinese drone that looks just like the General [Atomics] Predator, and at the weapons fairs they're selling it for one-tenth the cost. Sure looks a lot like it.
Dan Ferris: Yeah. Yeah, there's a lot of stuff –
Corey McLaughlin: And I saw that recently with the –
Dan Ferris: It looks like a lot of other stuff in the West.
Corey McLaughlin: Yeah. I saw that recently with, what, when the humanoid robots were having their moment a couple months ago, and there was a Chinese one posted online through a Walmart affiliate site for three days before it got pulled down, with no public reason given.
John Sviokla: Yeah. But what you have to – oh, I'm sorry, Corey. I didn't mean to interrupt.
Corey McLaughlin: No, go ahead. Yeah.
John Sviokla: Yeah, I think that's true. But I think we have to update the narrative: they're not just copying. Look at some of the stuff they did with the DeepSeek model, for example. They're advancing algorithmic innovation. And that's really important, because the reason ChatGPT and Gemini and so forth consume so much power and compute is that the transformer algorithm underneath most of these things – which gave us this unbelievable quality of interaction, a quality I never anticipated even back in November 2022, when ChatGPT hit the scene running GPT-3.5 – is computationally inefficient. The transformer brought us a step change in performance. Well, if somebody comes up with a step change in performance on the compute side – and there are some folks working on other kinds of models – guess what? Then primary demand for all that compute goes down. That's why, personally, I don't invest in the hardware stocks: I think algorithmic innovation can radically change the demand for hardware.
Dan Ferris: And it sort of needs to go down. If you do the arithmetic on the power and what needs to be built, it kind of can't be done very quickly. And if you look at, for example, the natural gas turbine makers – that's the bottleneck. So –
John Sviokla: I've heard it's about a five-year wait right now. We're working with one of the folks who are building the largest data-center shell in the world, and they have five gigawatts of turbines lined up. And that's a huge part of their market valuation – the fact that they have the contracts and they have the permits.
Dan Ferris: Now they just need the machines. They need the turbines. So, in other words, what you said makes sense to me – you will need that greater efficiency. It made all the sense in the world for Nvidia's market cap to get hit the day DeepSeek came out with its big announcement. And there well ought to be more such announcements in the near future, shouldn't there? It seems an extremely important part of the equation for someone to be saying, "Hmm, maybe we wouldn't have to build all these power plants if we just had better software, basically – better algorithms."
John Sviokla: Sure. Yes, and there are certainly a lot of people working on that problem, and the Chinese are working on it. I actually think it's good news that we're starting to sell them the Nvidia chips, because that'll keep them fat and happy and they won't be so desperate to improve the algorithms – which I think is a good thing, net-net, for us.
I mean, I should share two points of view I have. First of all, even with all our faults, I do believe in America – in its value systems and its capabilities and its role in the world – and I'm kind of sad we're pulling back as we are. And the second thing is, I'm a real capitalist, so I like competition. I'm sad that we've allowed these massive concentrations in so many industries. Go back to the Sherman Antitrust Act. John Sherman was William Tecumseh Sherman's brother – the Union general who did Sherman's March through Georgia and all that. There were two parts to the antitrust thing. One was consumer harm. The other was companies having power as great as governments'. And Sherman had this great quote. He said, "We just fought a war to not serve a king. Why would we allow a company to control an essential facility where we have to bow down to them just like a king?"
Now, the interpretation of that act has gotten rid of the market-power piece and kept only consumer harm, largely due to the guys down the street here at the University of Chicago. But I think these hyperscalers have way too much political power. And I don't think we should have one company owning most of the satellites up there – which we do right now – without being under military control. Again, I believe in America and I believe in competition, so I don't think we should have people with monopolies on essential facilities.
Dan Ferris: Yeah. Let's talk about your – last time you were here, you introduced the four levels of generative-AI adoption. I'm wondering how much that has changed, if at all – if you still use that framework. Because since last time we talked, I went from what you would call a toe dipper, if even that, to someone who is trying very, very hard to be an intelligence leverager: to load as much of the best data I can into the best system that a guy like me can get on his laptop, I'll just put it that way, to do the best research I can on all the public companies we write about. I can just sense the power of it. I've learned how to collect all the data I want and put it all in one place. Now I just need to learn how to prompt it and analyze it and get what I want out of it. So, it would seem to me you're probably in the business of teaching people how to be intelligence leveragers to a great degree. Yeah.
But I'm wondering something else, John, just before you talk about intelligence leveraging and what you guys do. Since you last spoke with us a little over a year ago, have you heard the same experience you just heard from me from lots of people?
John Sviokla: Yeah, absolutely. And a key differentiator for the firms – because think of it, again, as these two populations – is that we're seeing significant return-on-assets differentials in the intelligence-leverager kind of folks: 21%, 22%, 23% growth, much faster, things like that. The capability level is the key indicator of whether you are going to get value from AI.
Dan Ferris: Capability level.
John Sviokla: Capability level. And so, to your question, yes, we are absolutely not only using that framework, we updated it – with slightly different language to make it easier to remember. We call it the RISE framework. There's research, experimentation, and education. Then there's islands of innovation. Then there's scaling and synchronization. And then there's emergent intelligence – that's the intelligence leveragers. OK, so RISE. And we have yet to find a company that can jump straight to level three, the system-level stuff. We see more what happened with you, Dan: you experimented, you probably did some little innovations, and right now you're trying to think, "OK, how can I really reinvent the way I do my work?" I don't want to put words in your mouth. But –
Dan Ferris: Every day, that's all I'm concerned with now – re-figuring out how to do my job all over again.
John Sviokla: Right. But I would say – and feel free to disagree – if you had started that at the very beginning, before you had those other experiences, you wouldn't have known what you were talking about.
Corey McLaughlin: Right. Yeah. That's been my –
Dan Ferris: Oh, sure. Yeah.
Corey McLaughlin: Yeah, that's been my experience, too. Originally, I was playing around with all these tools a year and a half ago, and you run into a certain level of friction. Now, you learn what each model is. You learn the prompting – the importance of the prompting. I'm talking about for writing and editing purposes and research. And now, it's a matter of somehow working it into my daily workflow. That seems to be the biggest challenge: What's worth spending time on versus what's not?
John Sviokla: Exactly.
Corey McLaughlin: That's the biggest thing for me. And I also struggle to make decisions sometimes, so that doesn't help either. So, I don't know. Is that a common thing that you're trying to address as well?
John Sviokla: Yes, absolutely. And the kind of dirty little secret of executive education – I'm a fellow at Harvard Business School and I taught there for over a dozen years – is that if you go to an executive education class at Harvard Business School or any good academic institution, a core skill when you're teaching executives is to teach them, in an unembarrassing way, things they should have known anyway but can't admit they don't know. And that's where AI is in most companies. The window has closed to say, "AI? What the heck is that? Could you slow this down and tell me what you're talking about?" That's how most executives really think, but they can't say it, because it's not socially acceptable. "AI? What's that?"
If you go back to the basics, we can't really define "artificial" and we can't really define "intelligence." Put those together, and – so, that's why this model is so important. You have to have hands-on experience. There's a great quote from Frank Zappa, when somebody was asking him about a music critic. He said, "Writing about music is like dancing about architecture."
Dan Ferris: Oh, dancing about architecture. Yeah.
John Sviokla: Yeah. And so, if you don't experience what it's like to talk to a silicon intelligence that can literally talk to you in any language, at any level of specificity – and now, with some of the new models, especially Gemini 3, you can say, "Give me an illustration, give me a cartoon, give me a movie" – if you [don't] have an experience talking with this silicon hive mind that has read literally everything that's been written, you don't know what that's like. And now we have all these low-code/no-code things. Anthropic just came out with an analysis that said that inside their company, almost a quarter – 23% – of the low-code/no-code stuff they're doing would never have been done, because it was either too low a priority for IT, or the person who understood the problem thought it was too much of a hassle to learn the technology.
So, back to what you were saying, Corey: people are using this stuff. You just say, "OK, should I automate that? Do I do it enough?" And I think the ripple effect of that productivity at the individual level is going to be fantastic. But then it's like Special Forces versus regular Army. In Special Forces, we cross-train, we upskill, we take traditional technology and modify it to make it better. And just like with Special Forces – regular Army might have taken a hundred people to do something that Special Forces might be able to do with six. And that's what's happening in the organizations that are upskilling.
So, back to your question, Dan, it is absolutely central. Senior executives resonate with it. The other thing is, it helps sort what the vendors are trying to sell you. I don't want to use names, but one of the big four went to a consumer brand that we all know and said, "Look, give us $5 million and we'll go find $25 million of system-level improvement." Complete and utter failure. Why? They tried to jump straight to level three – straight to system-level change. So when they're talking to people, we're having this AI kabuki where we're saying stuff but we really don't understand each other. And they have no hands-on experience. They haven't done islands of innovation, so they don't know what it's good for and what it's bad for. Even if you're not building it, you have to drive a car to understand what's valuable in a car, even if you don't know how to build one. So, they jump straight to system-level stuff, burn through the $5 million – total, abysmal failure. And we see that happen again and again.
Dan Ferris: John, I wonder – I am not a company executive, so I'm not afraid to say I don't know. So, I don't know what low-code/no-code means.
John Sviokla: OK. That's basically – think about programming in English. "Hey, I would like you to go look at this set of spreadsheets, then look at this set of objectives, and then generate a six-page PowerPoint deck every Tuesday at 10 a.m." And you say it like that – you don't have to go into the language of all the different software that would generate it.
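[Editor's note: The plain-English request John describes roughly compiles down to an automation like the sketch below. The file contents, column names, and text-only "slides" are hypothetical stand-ins; a real low-code/no-code tool would emit an actual PowerPoint deck on a schedule.]

```python
# A hedged sketch of what a low-code/no-code tool might generate behind the
# scenes from the plain-English request in the conversation ("look at this set
# of spreadsheets... generate a six-page deck every Tuesday at 10 a.m.").
# Data and layout here are invented, for illustration only.
import csv
import io

SALES_CSV = """region,revenue
East,120
West,95
East,40
"""

def summarize(csv_text):
    """Aggregate revenue by region -- the kind of step the English prompt implies."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["region"]] = totals.get(row["region"], 0) + float(row["revenue"])
    return totals

def build_report(totals):
    """Render one 'slide' per region as plain text (a real tool would emit slides)."""
    return [f"Slide: {region} revenue = {amount:.0f}"
            for region, amount in sorted(totals.items())]

report = build_report(summarize(SALES_CSV))
for line in report:
    print(line)
```

The point of the example is the division of labor: the user states the outcome in English, and the generated artifact handles the file parsing, aggregation, and formatting.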
Dan Ferris: Ah. So, low-code/no-code sounds like people who know how to prompt AI well.
John Sviokla: Well, yes – prompting with a capital P. It's prompting to build an artifact that will then do stuff for me, as opposed to prompting in a dialogue to get an answer.
Dan Ferris: Ah, OK, I see. So, the PowerPoint is a good example, but it sounds like low-code/no-code is the creation of modules of code that then do things, too.
John Sviokla: Exactly. Yeah.
Dan Ferris: OK, I see what we're talking about. OK.
John Sviokla: Look, we've got –
Dan Ferris: That's really cool.
John Sviokla: It's huge.
Dan Ferris: And that's, I guess, why Zuckerberg took out all his midlevel managers. This is what we're talking about – low-code/no-code. OK.
John Sviokla: Exactly. And so, 10 million people speak Java, 15 million people speak Python – but everybody speaks their own language. That's the thing. Now everybody can be a programmer.
Dan Ferris: Yeah. So, the WINS crowd may have a bit of an advantage here if they're really good with words and numbers.
John Sviokla: Absolutely. The WINS crowd –
Dan Ferris: So, it's not hopeless.
John Sviokla: No, no, no. But back to employment: we've already seen a tail-off in employment of new kids coming in. We see a tail-off of software programmers who have less than three years' experience. People ask me what their kids should do. I say four things. First, if you want to learn a trade, that's a good thing, because we're going to be short at least about a million folks – plumbers, carpenters, all that. If the wealth differential continues, look at the consumption of the top 1% to 10%: we've got a lot of Marie Antoinette going on. People building houses they don't need. And they're super specific. They're going to absorb all kinds of trades. So, that electrician who used to be building houses for the middle class and making, whatever, $20,000 a house is now wiring some 6,000-square-foot mansion in Tucson that they might use twice a year, but the electrical bill is $100,000. And we're seeing that happen. So, there's a whole Marie Antoinette effect that's going to create primary demand. There's also just raw need. So, if you want to go into the trades, go for it.
And by the way, this thing is going to help the trades phenomenally. '26 is going to be a year of mobile intelligence. I don't know if you've done this, but just take your phone: I was looking at my water heater the other night, trying to figure out if it had a low-water shutoff. I hold this friggin' thing up, take a picture, and it tells me whether it's got a low-water shutoff. "How do I repair it? Where can I get somebody?" The whole routine. Just imagine what that does for the trades. "What's this part? How do I do this? How do I do a hip roof?" This will tell you today, retail – which is unbelievable. So, the trades are going to go like crazy.
Second thing: mathematics, especially matrix math. Math is the lingua franca, the universal language under all of this. So, if you understand mathematics, fantastic.
Third thing: sell. Anybody can sell stuff. As we go from regular Army to Special Forces and you get smaller and smaller groups, the premium on people who can do demand generation is going to go up, not down, because you still have to sell stuff to people and get them to commit.
And the last thing, perhaps most importantly: do what you're doing, Dan, and what you're doing, Corey – show me your posse. I don't think any company today should hire somebody who hasn't built a GPT, a Gem, something low-code/no-code. Show me how you're using AI to automate your life. I don't care if that's work, your passion, family stuff. Show me. I only want to hire people who are doing that. Because if I've got no experience doing software coding and I want to compete with the three-year folks – and the numbers from SignalFire, the venture-capital company, show a massive fall-off if you have less than three years' experience – how can I simulate three years of experience? I can come in with my posse. "Let me show you my code-review thing. Let me show you my documentation robot." People should be hiring posses of a human and a bunch of robots, not just humans.
Dan Ferris: I'm glad that you broached the topic of robots, because when I think of robots, I think of the physical ones in factories. And that has some impact in the trades, doesn't it?
John Sviokla: Yeah, it's going to increase primary demand for the trades because all those robots are going to need to be serviced.
Dan Ferris: OK, of course.
John Sviokla: And we'll have robots that will service robots, but there'll be a lag. It's kind of funny – I was at a conference in San Francisco when the self-driving thing just started taking off, and people were saying, "Oh, what are we going to do with these 3 million truck drivers?" And I felt like saying, "What are we going to do about the 15 people I walked past on the street on the way here?" Let's solve that problem. Anyway.
It turns out, actually, if you remember what happened when self-driving started – it was brought to you by the government. The Defense Advanced Research Projects Agency did the Mojave Desert challenge. Only after a number of teams – CMU and a couple of others – successfully did that translational research under government funding did Brin call a guy named Chris Urmson from CMU and start the Google car project, the first among the self-driving car projects. So, Waymo and all that other stuff, like the internet, started with government investment.
What happened, instead of three million truck drivers being out of work, is that with the micro-targeting and micrologistics of Uber and my supermarket and everybody else, driver demand actually went up in the near term. Now, in the long term, as Waymo comes in – and the little Waymos, the non-car-sized stuff – will driver demand go down? Sure. But it's going to take a while. The same thing's going to happen with robotics. We're going to have massive adoption of robots and a huge increase in the need for mechanical engineering and repair. Over time, will the robots build the robots? Sure. But it's going to take 20 years, just like it's taking 20 years for the driving thing to happen.
So, it's just like the paperless office. I have a friend who made a boatload of money when Xerox announced the paperless office. He invested in paper companies, because as soon as they had the laser printer, the first salvo of the paperless office was an explosion in paper consumption. Same thing happened with driving. Same thing is going to happen with robots.
Dan Ferris: I see. Do you use Waymo? Do you have Waymo in Chicago, John?
John Sviokla: The answer to the first question is yes. The answer to the second question is no – I wish we did. I love Waymo. I was so impressed when I used it. It's a totally different experience. And I was –
Dan Ferris: I've heard that universally from everyone.
Corey McLaughlin: Yeah, I've heard that from every single person, yeah.
John Sviokla: I was staying at a club on the top of Nob Hill. So, we got in the thing and it went up over Nob Hill. There was an illegally parked 18-wheeler at the top – in San Francisco. Then there were some people jaywalking. And then there was a car going around the illegally parked truck. It managed all of that seamlessly. I was like, "Wow." I just happened to go that way on my first Waymo ride, and it was like, "I am impressed."
Dan Ferris: Wow. Yeah.
Corey McLaughlin: Yeah. Just –
Dan Ferris: Yeah, how long before – go ahead, Corey.
Corey McLaughlin: Yeah, how long before Waymos are everywhere? They're coming to Baltimore, Philly, and St. Louis now. Pittsburgh too, I saw. So, we'll see. Which brings me to one of the things I wanted to ask you. Well, you've already answered it – you've said so many fascinating things already. This has been great. One of the things I wanted to ask was what skills somebody trying to learn all this should build, and you already said that. So, I'm also interested – our crowd is also interested – in attractive investments or sectors as this whole story plays out. You already mentioned you're not into the hardware companies. Are there any areas you're particularly optimistic about – sectors, that sort of thing?
John Sviokla: Sure. Yeah, absolutely. Well, first, in the AI world, the hyperscalers I like are the ones that get massive productivity enhancement through software creation on top of an already great model. Who's that? That's Amazon, Google, Meta, and Microsoft. Apple should, but we've seen no evidence of their ability to do it. Maybe they can pull it off with their version of the AI iPhone, or whatever it's going to be. And of those, you have to remember Google is the only one that's vertically integrated across the whole AI stack, from the customer all the way down to the silicon. They have their [tensor processing units], and I think that's going to be a huge advantage, because the performance characteristics of delivering instantaneous intelligence across any language in any location is a massive engineering challenge – just the latency, the response time. For you not to perceive a delay, for example: ElevenLabs has this new thing that can listen to the three of us in real time, instantaneously transcribe, and then rebroadcast in 20 languages with a 150-millisecond delay. The reason 150 milliseconds is important is that it's the threshold for perceiving a delay in conversation. You have to engineer the living daylights out of that. And Google is amazing at engineering.
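[Editor's note: A back-of-the-envelope sketch of the 150-millisecond point. Only the threshold comes from the conversation; the per-stage timings below are invented for illustration.]

```python
# Rough latency budget for live translation. For the result to feel
# instantaneous, the whole pipeline must fit under ~150 ms -- the threshold
# cited in the conversation for perceiving a delay. Stage timings are
# hypothetical, for illustration only.
PERCEPTION_THRESHOLD_MS = 150

pipeline_ms = {
    "speech_to_text": 60,   # transcribe the speaker
    "translation": 40,      # translate into the target language
    "text_to_speech": 35,   # synthesize the translated audio
}

total_ms = sum(pipeline_ms.values())
verdict = "imperceptible" if total_ms <= PERCEPTION_THRESHOLD_MS else "noticeable lag"
print(f"{total_ms} ms -> {verdict}")
```

The engineering point is that every stage shares one small budget, so a slow model anywhere in the chain makes the whole conversation feel laggy.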
Something like Nvidia – I'm not saying they're not worth it, it's just that so many things have to go right. And when I look at the hyperscalers, the people who talk about the AI bubble drive me nuts. First of all, let's separate two things. Is there over-investment and overpricing on certain things? Of course. It's a market. On the other side, is there an AI bubble in adoption? Absolutely not. Adoption is accelerating. Pricing is different than adoption.
Dan Ferris: Sure. Sure.
John Sviokla: Then the second thing is: Will everybody make money? Of course not. That's why it's called investing. The three of us – you, Corey, and me, Dan – make three different bets going forward. The market can't support all three at the level we're each hoping for. You guys pick the right ones, I pick the wrong one – of course I'm going to lose. That's the nature of investing before the fact. So, that's nutty.
And last but not least – of course, I can sit here like Nouriel Roubini and say, "Oh, yeah, stuff's going to go down in the future." Yeah, no joke. The sun's gonna come up, too. Give me a break. But the most important thing is, if you want the clearest use case for productivity with AI, bar none, in adoption: software. Software integration. Forty percent of Microsoft's cost base is software. If I can take that number down by 30% or 40%, that's a cut of 12% or more in the total cost base, and that is a massive increase in Microsoft's market cap. They can spend billions on this – they generate just under $2 billion of EBITDA a week. And if they never sell any Copilot to anybody, it's going to be worth it to them.
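[Editor's note: The arithmetic behind John's claim, worked out. The 40% and 30% figures are the speaker's; the $100 billion cost base is a made-up round number for scale.]

```python
# If 40% of a company's cost base is software work and AI cuts the cost of
# that work by 30%, the total cost base shrinks by 0.40 * 0.30 = 12%.
software_share = 0.40        # share of the cost base that is software (speaker's figure)
ai_efficiency_gain = 0.30    # cost reduction AI achieves on that share (speaker's figure)

total_reduction = software_share * ai_efficiency_gain
print(f"total cost base shrinks by {total_reduction:.0%}")

hypothetical_cost_base = 100e9   # invented $100B, for scale only
savings = hypothetical_cost_base * total_reduction
print(f"on a $100B cost base, that's ${savings / 1e9:.0f}B a year")
```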
Now, OpenAI – I personally wouldn't have the courage to invest in OpenAI at the valuations they're talking about, because they don't have $2 billion of EBITDA a week coming in, and they don't have a 30% or 40% cost base they can shrink. So, I would say that. That's that.
The second thing is, I would look carefully at these new companies that are doing vertical wrappers and implementation. Take a company like Harvey in the law. Harvey is now adopted by a number of law firms. My oldest boy is an attorney – a prosecuting attorney in Chicago. The good news is he's going to stay employed, because Chicago's not going to get any less crime-ridden. So, he'll have a job as long as the state doesn't go bankrupt, which it could also do. Anyway.
But if you're working for some corporate law firm on deals and stuff like that, they're going to have half, a third, a quarter of the people. Because when you think about it, the law is essentially a badly designed large language model. Kind of. And you look at something like Harvey: they've not only succeeded in getting good market share, but now they're learning how to really tune those models. And they're building a whole longitudinal store of data. Just like Facebook has my whole longitudinal social network, they're doing that in different vertical domains. I think those are areas that are very promising. So, there's that.
The third thing: attackers coming in underneath – and we like to think of ourselves as one in the research area. If we can prove that we can scale, I think folks like us are coming in with the Amazon model against the Kohl's and Macy's of any given industry. Those are going to start to appear – for the retail investor, if you're not in venture, they usually appear about five to seven years in. Financial disruption usually takes somewhere between four and seven years. Look at the price of New York taxi medallions. Five years after Uber entered the New York market, the price was still between $1.2 million and $1.3 million per medallion. The sixth year, it was $35,000. Now, they've recovered to about $200,000 to $300,000. They're never going back up to $1.2 million.
So, financial disruption: my ex-colleague Clay Christensen, God rest his soul, talked about disruption, but he missed financial disruption. He got operational disruption right – you're going to buy from a mini mill, not from an integrated steel mill, that kind of thing. But financial disruption is what's happening to Gartner right now. Unless they can convince the market that they can do the new model, they're never going to recover. So, invest in those companies coming up underneath – the Ubers of the world that have the new model in professional services, in science and discovery. What a lot of people don't understand is that we're going to have an explosion of science like we've never had before. If you think about science, think about how you improve stuff. I call this praxis – practical knowledge. So, let's separate knowledge. And I've read enough philosophy to know this is grotesquely oversimplified, so to anybody who actually knows what they're talking about, I apologize in advance. But let's keep it simple for us.
Two hunks of knowledge. There's what I call praxis, which is a provable thing that works regardless of your belief system. Mike Johnson, the Speaker of the House, is a young Earth guy. He thinks the Earth started 6,000 years ago, things like that. He uses technology that his belief system would never create. His belief system would never get you a cell phone, would never get you an antibiotic, the whole routine, because he doesn't believe in science. I think if your belief system would never discover the things you use, you shouldn't be able to use them – but that's another thing. If he wants to go be Amish or something, that's OK. Go for it.
So, praxis is provable. We can't argue about the fact that if you have this bacterium and I put this penicillin on it, it's going to be dead at this time. We can't argue about the fact that if I design a nuclear weapon like this and do this kind of explosion, it's going to create this kind of bomb. That's a very specific kind of knowledge. It doesn't matter what you believe – it's true, and we can do it repeatedly.
And then there's all the other stuff, like political beliefs and religious beliefs. Those are all socially constructed [inaudible]. And back to Mike Johnson: his phone works even though his belief system doesn't believe it should be there. You know what I mean? So, what's beautiful about AI and the math underneath it – the matrix algebra – is that it is incredibly good at creating praxis, because I can say, "Here's what you need to look for as an outcome to prove that it's right." What's an example of this? Take lidocaine, a completely commoditized molecule. It turns out that at Google, they were creating a system to help think about new molecules that might be relevant to different disease states. They look at molecules functionally, they look at the trials, and they look at all the data they can get, to say, "OK, here's something we haven't thought of that might be useful."
Lidocaine, it turns out, is very good for certain kinds of breast cancer. Now, nobody even thought to look at lidocaine. And it doesn't reverse it, but it does stop it. That's only possible because the AI allowed for a robust description and an unbelievable search base where it could go look for stuff that no one – no scientist had ever thought of. It's just too big a space. So, that's a perfect example of the creation of praxis because now I've got rational reasons. I'm looking. I've got an objective function. I can say, "Does it help stop cancer?" And then "What's the mechanism of action?" And then – like that. It turns out, for example, in drugs, we believe that we've maybe, maybe, explored 1% of the useful compounds to make humans more healthy. Maybe 1%. In material science, it's less than 1%.
Dan Ferris: OK, I see what you're saying. This is Thomas Kuhn, The Structure of Scientific Revolutions, not on steroids but the Ferrari, the rocket ship.
John Sviokla: Acid, ayahuasca, whatever. Yeah, it's like – yeah. And the thing is that all the models speak math, so they all can talk to each other. And –
Dan Ferris: So, you'll find all these random things that would take a long time, that nobody would ever think of, some outlier student that nobody listens to might come up with it. Now, you just describe a broad outcome and allow the model to search with all of the data that you could ever think, and then it puts those things together that would have taken –
Corey McLaughlin: It sounds like a treasure – yeah, it sounds like a treasure map to give a – yeah.
Dan Ferris: So, yeah, therefore, scientific revolution inbound. That's right.
John Sviokla: Yeah. And you're also articulating – Demis Hassabis, the guy who won the Nobel Prize for AlphaFold, I mean, his passion in life is he wants to create a silicon version of the cell. A complete simulation of the cell. And that's – AlphaFold is on the way there. So, that's the kind of stuff – science is going to go wild. I worry that at the same time we're going to have a fragmentation of consensus reality, like, what, 40% of Republicans believe in QAnon and that there's – whatever. There's some crazy sh** out there that people believe. And I think that's increasing. And so, we're – I think we're fragmenting consensus reality but we're increasing scientific knowledge. That's going to be a weird tension. You can have more people who believe in stuff that would never have the power – would never have the science that they have in their hands, like their cell phone, but those same people are going to have more influence. So, it's going to be weird.
Dan Ferris: That is weird. This is a perfect time to ask our final question.
John Sviokla: Sure.
Dan Ferris: This has been great, by the way. I could do another three hours of this. I said in the beginning – I said, "John's a great talker. He's a great guest." And the proof is now recorded for an hour. So, the question is the same for every guest, no matter what the topic, even if it's a non-financial topic, identical question. If you've already said the answer, feel free to repeat it. The question is simply for our listener to – can you provide him with a single takeaway, a single thought today? What would that thought be if you could do that?
John Sviokla: Absolutely. Ask the robot.
Dan Ferris: Ask the robot. All right. I'm writing this down.
John Sviokla: No matter what you're doing. No matter what you're doing. You're planning a meal. You're going shopping. You want to advise your kid. You feel sad. You want to study nuclear physics. Ask the robot.
Dan Ferris: All right. That is certainly one of the most concise answers to that question we've ever gotten. And we do it all the time in my house. We're constantly having conversations with our phone and my wife is taking pictures and saying, "How do I fix this?" and all kinds of stuff. So, it's – we're there. We're there.
John Sviokla: Dan, can I say one last thing?
Dan Ferris: Of course.
John Sviokla: If we get the investment in science back to where it used to be – and DOGE did tremendous damage. Why he did that stuff, I don't know, but he cut off – he cut stuff that was similar to the stuff that he built his business on. I just don't get that. Anyway. So, if we can get back to the investment in science, if we can get back to the brain drain, which I think is critical to the United States' success – you look at Musk, Peter Thiel, David Sacks, Sergey Brin, the guy who started Selectron. These are all immigrants who came here to the United States and got capital. And if we keep that going – I want to keep importing geniuses from the world and giving them capital and letting them create jobs. We've got to get that right. Got to get science right.
If we do that – and then, if we think about what's the public library, what's the public education for AI, that will – and the Chinese, by the way, are doing this intensively. They're teaching a whole population how to use AI, which is really scary for us, I think, because there's no question that – and I can show you good data that shows an individual with AI can outperform a team. A team with AI outperforms everybody. There's a big study at Procter & Gamble that showed this definitively. Anyway, there's that.
If we get that right, we are going to have a level of innovation and entrepreneurship we haven't seen since at least the Industrial Revolution. Why is that the case? You have all this expertise. Capital is more efficient. You can – an individual can do the work of many and a team can do the work of hundreds or thousands. I mean – and there are so many unmet needs in the world. We are just going to see an explosion of entrepreneurship, but we have to have enough people who understand it so we don't end up with people wanting wealth transfer instead of wealth creation. But we have to – the people with means and the people in government need to understand that this is a GI Bill moment. We have to think about how we skill the population, how we continue to suck the brains out of the entire world and have them come here and do great stuff. And keep capital formation going and enforce antitrust. Have some competition.
Those, to me – it is an American century if – if – we do those things. If we push out the geniuses, if we scrub bankruptcy with things like student loans, if we let a few monopolists control the core of AI, if we don't let the geniuses into this country, and if we don't promote enough knowledge of AI so that people don't fight it but use it, then we're going to have another whole problem. So, I think we're at a fork in the road where it could be greatness or it could be ugliness.
Dan Ferris: All right. I'm glad we got an extra thought out of you. So, thanks for that and thanks for being here, John. It was great to hear from you again.
John Sviokla: It's great to be with you, Dan. Corey. Take care, man. And just let me know if you need anything else. I love working with you guys.
Dan Ferris: You bet. You'll be hearing from us.
Corey McLaughlin: Thank you.
Dan Ferris: I told you he was a good talker, didn't I?
Corey McLaughlin: Sure is.
Dan Ferris: John's a great guest. You can just –
Corey McLaughlin: He's awesome.
Dan Ferris: Yeah, we start him up and let him go and he tells us everything we'd ever want to know and then some. It's great.
Corey McLaughlin: I've taken – I took so many notes during that, I'm going to have to download – maybe – I might have to listen to this myself again when it comes out just to get up to speed on everything that he said. It's just – it's all fascinating. And he – but he brings some practical takeaways from it. I loved his – the four things for – what he tells parents when they say, "What should my kid do?" The trades, math, marketing, selling essentially, and then just kind of like ingenuity, experimenting with these things. But so much there. That was awesome.
Dan Ferris: It was. It was. And we will definitely have him back again at some point. And I just want everyone to know I'm not going to debate people on things like antitrust and government involvement in everything. People know where I stand on this, but it's really not a part of the show. So, anybody who knows me and didn't hear me push back on some of these things, that's just not a part of the show. But hearing from people like John who know a lot about what people are doing with the most transformational technology yet is really, really important to us. And that was great. It was another great interview and another episode of the Stansberry Investor Hour. I hope you enjoyed it every bit as much as we really, truly did.
Announcer: Opinions expressed on this program are solely those of the contributor and do not necessarily reflect the opinions of Stansberry Research, its parent company, or affiliates.
[End of Audio]