Portfolio Intelligence podcast: who are the future’s AI giants? From hyper-scalers to model specialists
What’s so different about artificial intelligence (AI) today compared with the past? Stephen Freedman, Ph.D., CFA, FRM, joins podcast host John P. Bryson to discuss the current AI boom.
Steve, head of research and sustainability within the thematic equities team at Pictet Asset Management, discusses the evolution of AI, focusing on the new ways it’s being used today. He considers AI’s positive disruptive potential, the risks people need to be aware of, and which types of companies and industries may benefit from or be hurt by AI the most. He also analyzes the role of mainland China and other countries that may seek to harness the power of AI, the prospect of global regulation of AI, and how the technology might help financial professionals.
“The big difference is that until a year and a half, two years ago, most interactions with these language models were just not very satisfactory. It wasn't fooling anyone. Whereas now, we've reached a point where the level of interaction is sufficiently close to human interaction that this has gone mainstream. And so, it's been the beginning of a race.” —Stephen Freedman, Ph.D., CFA, FRM, Pictet Asset Management
About the Portfolio Intelligence podcast
The Portfolio Intelligence podcast features interviews with asset allocation experts, portfolio construction specialists, and investment veterans from across John Hancock’s multimanager network. Hosted by John Bryson, head of investment consulting at John Hancock Investment Management, the dynamic discussion explores ideas advisors can use today to build their business while helping their clients pursue better investment outcomes.
Important disclosures
This podcast is being brought to you by John Hancock Investment Management Distributors LLC, member FINRA, SIPC. The views and opinions expressed in this podcast are those of the speakers, are subject to change as market and other conditions warrant, and do not constitute investment advice or a recommendation regarding any specific product or security. There is no guarantee that any investment strategy discussed will be successful or achieve any particular level of results. Any economic or market performance information is historical and is not indicative of future results, and no forecasts are guaranteed. Investing involves risks, including the potential loss of principal.
Transcript
John Bryson:
Hello, and welcome to the Portfolio Intelligence podcast. I'm your host, John Bryson, head of investment consulting and education savings here at John Hancock Investment Management. Artificial intelligence, or AI, is all over the news. ChatGPT, Bard, ChatSonic, and other products that almost no one had heard of a short time ago are now practically becoming household names. This technology promises to have a huge impact on how we work, how we live, and how we invest. To explore AI further, I've invited Steve Freedman, Ph.D., CFA, FRM, to the Portfolio Intelligence podcast. Steve is the head of research and sustainability within the thematic equities team at Pictet Asset Management, a $250 billion global investment manager. Steve, welcome to the Portfolio Intelligence podcast.
Steve Freedman:
It's a pleasure to be with you on the call today, John.
John:
All right. AI is new to most of us, but it's been around for a little while. What's different about AI now compared with the past? And can you walk us through some of the evolution, focusing maybe on the usage scenarios we're currently seeing people use?
Steve:
Sure. I think this all started with ChatGPT in the fourth quarter of last year, which is an example of what is called a large language model. This refers to a category of AI models that are advanced natural language processing models. Where things really changed relative to the past is that we've reached a tipping point in the ability of these models to recognize completely unstructured questions from the user, to provide answers that are really human-like in nature, and to derive those answers from the massive amounts of data they use for training purposes. And really, I think the big difference is that until a year and a half, two years ago, most interactions with these language models were just not very satisfactory. It wasn't fooling anyone.
Whereas now, we've reached a point where the level of interaction is sufficiently close to human interaction that this has gone mainstream. And so it's been the beginning of a race. ChatGPT went from zero to 100 million users in two months. That's unheard of in terms of adoption of new technologies. And now we really have this race with competing products, and it's really a new era in terms of AI.
John:
What are some of the positive disruptive potentials for AI?
Steve:
Really, the key here is the ability to provide answers that rely on these huge data sets and that can be focused on all kinds of different areas of expertise. You can basically get questions answered on all kinds of different subject matter areas, whether it's economics or cooking, and you can get answers that are relatively advanced when it comes to coding. Coders can now basically explain what they want to do and get some code that is 80 to 90% of the way there. There's always a need to fine-tune, but there's really a potential for massive efficiency gains across a variety of different industries. At the moment, we're just scratching the surface in terms of the use cases. This is the very beginning of understanding the potential, but it's clear that this is a technology that can be used across all industries in some way, and it will really force businesses to rethink how they operate and how they add value, focusing human activity on the areas where it adds the most value in a world where these tools exist.
John:
Okay. A lot of the conversation we hear in the press is around the risks of AI. We've heard from Sam Altman, CEO of OpenAI, the company that created ChatGPT; he recently testified before Congress and basically asked Congress to regulate the technology because of its inherent risks. Even experts in this space are worried about the risks. What do you think of that? What else do people need to be aware of, and what are the implications of this generative AI?
Steve:
Well, I think it really has to do with the fact that this has grown exponentially from a base of zero a few months ago to now being on everybody's lips. And so there are just a tremendous number of unanswered questions that we have to deal with. We don't really understand some of the emergent properties of these models. There is a sense that these models are only as good as the data sets that they're trained on. You could have biased answers if there's bias in the data that's used for training. You have some worrisome phenomena where the model is hallucinating, basically fabricating answers through inference to try to provide an answer that is satisfactory to the user. There are problems with the fact that a lot of these answers are very much a black box. You don't really understand how the answer came to be unless you're really at the source of creating these models.
And so, because of the speed of adoption here, I think there need to be some guardrails. And even though many of the builders of these models are talking about creating some ethical principles around the use of these models, it's very clear that those guardrails are not being deployed at the same pace as the technology is being adopted and finding diffusion across the economy. I think this is where the concerns come from. It's a matter of how fast this is spreading versus the speed at which guardrails can be developed. We first need to figure out what they are. We can't just start regulating blindly, or maybe we'll create more harm than good. And so I think the calls by some participants to basically take a deep breath, pause a bit, and start thinking about the implications do have some merit.
John:
Okay. Yeah, that pause, we've heard that call from a number of folks, and it seems like it's not that the technology is harmful if used properly, but that we need to make sure we're using it properly, understand the biases, and understand the gaps before we proceed much further, because the speed at which it's developing is rather concerning. Let's pivot a little bit here and talk about the types of firms that you think will benefit the most from AI. Let's start with the tech sector.
Steve:
Yeah, I think the way this has been unfolding, it's the large tech companies, the so-called hyperscalers, the ones who basically own the cloud. They're the ones who are best positioned to benefit from this because they're basically the ones that are deploying the large models in the first place. And so I think what you're seeing is that there will be competition in terms of who has the best model, but ultimately, structurally, you will have some benefit accruing to those companies. That doesn't mean there can't be opportunities for other players, because the hyperscalers are going to be generalists in terms of providing solutions, and there's going to be space for specialized AI firms with subject matter expertise that can improve the quality of the results in a very focused area; there will be space for them to really thrive. But typically they would be expected to rely on some of the cloud infrastructure and the basic models that the hyperscalers are offering.
I think there are also opportunities in the semiconductor space, in particular for the companies that specialize in providing the computational power that will be needed to power the cloud. Even within the tech space, I think there are quite a few opportunities that will emerge from this new growth.
John:
Okay. Let's pivot outside of tech and talk about those industries that may benefit from AI and maybe which ones will suffer.
Steve:
Yeah, I think in terms of benefits, we're really talking about all kinds of business services that can introduce these technologies as really new tools to better serve their customers. This can be in terms of customer service, it can be in terms of data analytics. There are multiple applications where this can come into play. Another area that I think will benefit is private education. There's going to be a pretty important need to retool the workforce in many areas. We're just beginning to see where some human labor can be replaced, which doesn't mean that this necessarily has to create unemployment. It basically means that people need to focus on other parts of the value chain, and there will be a need for people to understand how to best interact with these technologies. And here, education services can be an interesting area to consider for facilitating that training.
I think in terms of losers, it's really hard to pinpoint a particular industry. I think it's really going to be a matter, within industries, of distinguishing between the companies that are able to adopt this new technology rapidly and effectively versus those that are laggards. It's a bit like what we experienced in the late nineties and early two thousands with the emergence of the internet. Ultimately, over a period of 10 years, it became a technology that everybody had to use in some way, and some companies were just much better, much more forward looking in terms of being able to make it part of their business model. I think you'll see the same thing happen with these new AI applications, and it'll really be a matter of looking at each industry one by one.
John:
Yeah, that makes sense. The first-mover advantage, whether it be the internet, social media, or AI, that's where the real value is going to come from for many of the companies. But let's talk about that value, because right now there's a lot of hype. Where does the value accrue for individual companies, in your opinion?
Steve:
I think ultimately, it's going to accrue to companies that are best able to integrate this into their value proposition. It could be at the product development level, or it could be in terms of much more effective marketing if you're able to crack the code of the benefits these technologies offer. I think here, too, it's going to depend on where these technologies are deployed, and it's going to be a question of where cost savings can be applied, whether it's cost savings in terms of different inputs and resources or cost savings in terms of labor costs. I think this is really something that can impact the entire value chain for a typical manufacturing company. And ultimately it can be the basis of new business models in the service area, many of which we haven't thought of yet.
John:
All right. We've talked about different sectors; let's take a bigger-picture view here. AI's been driven a lot by US firms so far, but what's the role of China and other countries that may seek to harness this technology for their own purposes? If we have to regulate this, should it be regulated globally, and what are the things we need to consider?
Steve:
Clearly the US is in the lead here, and I think for AI, as for many other aspects of technology, the role of China is an interesting one, because we're more and more going in the direction of a bifurcated world with competing platforms and competing business models, where the interaction between the two sides might grow more and more limited over time. And so even though China has to catch up, they're clearly aware of the need to do so and are in the process of deploying a lot of resources toward developing their own models. I think where things may get a little bit tricky is really what data they can use to train their models. And if there are more and more limitations on Western data that can be used in China, this would almost lead to a situation where they're siloed off in many ways and maybe cannot provide services to Western clients as well as they would like to. I think that's the direction we're going in, not just with respect to these AI models, but really with respect to technology as a whole.
As for the question of regulating this on a global basis, there's not really any system in place that we can use to regulate these technologies globally. That's something that requires international treaties; you need countries to be willing to reach agreements with one another, and so it's not really clear what that type of global governance would look like. It's more likely that you'll see different systems, different regional blocs, come up with different solutions, and that's actually quite likely given the preferences of the different governments. Clearly, the preferences are not the same between the US and China, and as a result, the governance is likely to lead to different places over time.
John:
Okay. Yeah. Well, while I might prefer to have a global governance structure, it certainly would be challenging. That makes sense. Lastly, Steve, as you know, this podcast is directed toward investment professionals. How might AI help this audience?
Steve:
Well, I think it's a very powerful tool for basically understanding matters much more easily. For education, self-education, and research on different topics, this becomes a way to access insights in a very efficient and rapid manner. I think you will see these tools deployed in investment research and analytics. You already have, for example, Bloomberg with its version of GPT, which is trained on financial data. I'm not sure exactly how impactful that will be, because ultimately, like any type of technology that is applied to financial markets, once it's widely used, then it's in the price, it's in the markets, and it just means that you'll have an upgrade of the tools that are used. I think that may be something that will happen regardless, but I think it's really more in terms of the interaction with information and fact finding that you will have a lot more powerful tools.
There are some activities in the financial landscape that will be automated much more easily: market commentary, things that are repetitive in nature and don't necessarily require a high level of expertise. Those things could quite easily be automated. Those are, I think, the things we see so far, and as in any other area, I think there might be some big shifts that we're not able to anticipate at the moment, but they'll hit us with a vengeance at some point in the future.
John:
Very good. Well, it's an emerging and fascinating topic, and I feel like we've just scratched the surface. But Steve, I want to thank you for joining the Portfolio Intelligence podcast today to shed some light on this fascinating topic.
Steve:
Thank you, John. It's been a pleasure.
John:
Folks, if you want to hear more, please subscribe to the Portfolio Intelligence podcast on iTunes or visit our website, jhinvestments.com, to read our viewpoints on macro trends, portfolio construction techniques, business-building ideas, and much, much more. As always, thanks for listening to the show.