"We're entering into another leg of a continued industrial revolution which is going to be marked by collaboration between humans and machines," says Archie Foster, Managing Director and Head of Thematic Equities at Citi Investment Management. "This will include industrial automation, robotics, and AI," he adds.
In the latest episode of Living Beyond Borders, a podcast produced in partnership between GZERO and Citi Global Wealth Investments, Foster is joined by Dev Saxena, Director of Eurasia Group's Geo-technology Practice, to go beyond the hype surrounding generative AI and ChatGPT to understand how it can truly affect the economy and our political systems in the coming months.
While the fears about job losses may be overblown or premature, there is no question that the use of this technology is changing jobs and industries. As tech giants increasingly adopt AI to improve productivity, we'll look at the main challenges they face, as well as what regulators need to keep in mind as elections around the world continue to be susceptible to misinformation.
This episode is moderated by Shari Friedman, Eurasia Group’s Managing Director of Climate and Sustainability.
Shari Friedman
Managing Director of Climate and Sustainability, Eurasia Group
Archie Foster
Managing Director and Head of Thematic Equities, Citi Investment Management
Dev Saxena
Director of Geo-technology, Eurasia Group
Transcript: Season 4, Episode 7: How AI is changing our economy
Disclosure: The opinions expressed by Eurasia Group analysts in this podcast episode are their own, and may differ from those of Citigroup Inc. and its affiliates.
Archie Foster: In my view, we're entering into another leg of a continued industrial revolution, which is going to be marked by collaboration between humans and machines. This will include industrial automation, robotics, and AI.
Dev Saxena: Just in the same way that there's been a tremendous uptick in the development and deployment of these technologies in the last couple of months and weeks, we've also seen an uptick in policymakers' and regulators' appetite for signaling some of their direction.
Shari Friedman: Welcome to Living Beyond Borders, a podcast from Citi Global Wealth Investments and GZERO Media. On this program, we examine global risks and opportunities from the angles of both politics and economics. I'm Shari Friedman, Managing Director of Climate and Sustainability at Eurasia Group.
It's the technology Bill Gates called revolutionary and compared to the game-changing inventions of the personal computer and the mobile phone: artificial intelligence, more specifically generative AI. By now you've probably heard of ChatGPT. It's been on nearly every news program and was even featured on the cover of Time magazine. And there are many similar competitors hitting the market globally.
So what is generative AI? Well, let's ask ChatGPT.
I’m typing the question into the OpenAI program now. Here’s what it produced:
AI Voice: Generative AI is an exciting and rapidly evolving field that has the potential to revolutionize the way we create and interact with digital media. However, it also raises ethical concerns such as the potential for malicious actors to use these technologies to spread disinformation or manipulate public opinion.
All right, so what are the implications of the so-called age of AI and what does it mean for jobs, the economy, and even democracy itself? That's what we're here to talk about today. My guests are Archie Foster, Managing Director and Head of Thematic Equities at Citi Investment Management. Welcome, Archie.
Archie Foster: Thanks for having me.
Shari Friedman: And Dev Saxena, Director of Geopolitics and Technology at Eurasia Group. Hey, Dev.
Dev Saxena: Hi, Shari. Great to be here. Looking forward to the conversation.
Shari Friedman: So Archie, first let me ask you about your title because it's relevant to this discussion. Define thematic equities for our listeners and how it applies here.
Archie Foster: From my perspective, it's really a focus on companies tied to longer term secular changes in how society lives and how companies operate. It captures changes in things like how we work, where we live, what we experience, how we experience things, and how we spend our time and so on. It's longer term in nature and it's different than investing based on economic cycles or short-term trends. Regarding its relevance to this discussion, it's very early days, but AI has the potential to eventually impact, in varying degrees, almost everything we do, from work to healthcare to mobility to entertainment and so on. So it's very, very relevant thematically.
Shari Friedman: So it sounds like you have a lot of overlap with what Eurasia Group does. You're looking at the general trends that are going to have material impact on your investments. So to both of you guys, there's this enormous amount of enthusiasm and also some trepidation about the rapid expansion of generative AI technologies. Is this a revolutionary moment or is it really too soon to tell?
Dev Saxena: So it's certainly a noteworthy moment, there's no doubt about that. Whether or not it's revolutionary, I think we'll only know in retrospect. But there's no arguing that the pace of commercial innovation that's been happening over the last couple months has been incredible in the generative AI space. Almost on a daily basis, we're seeing new product launches, new strategic partnerships, very large financing rounds for companies in this space. The biggest technology players in the world have released multiple products that feature generative AI. So those are firms like Google, Microsoft, AWS, NVIDIA. They're scaling it very, very quickly. And so the technology's becoming increasingly ubiquitous.
We're going to see in the next couple months that it'll be integrated into everyday products that we use, whether it's our emails or productivity tools like Word documents. And all of a sudden, I think we'll realize that we're using it just as any other technology that we use daily. And at the same time, there's also going to be, and currently is, a new wave of startups that will focus on deploying generative AI technologies at the enterprise level as some of these companies become more and more specialized. So we're going to start seeing this technology be pushed further and further into the market. And when we look back on it in retrospect, I think we will see this as a moment where there was an inflection point for AI and the use of AI in our everyday lives.
Archie Foster: The thing you got to think about is AI has actually been around for a while. It's been evolving for decades really. It started in World War II. What's changed is that previously AI could only really read and write, and now it's evolved to being able to understand and create content as well. I think we have to be a little bit careful about some of the big assumptions that are being made, particularly around the degree of change and, importantly, how soon changes are likely to happen. The hype machine is revving up pretty good at the moment. And we have seen similar movies before, where the promise of parts of tech either never arrives or takes a lot longer to arrive than people expect.
But that being said, in the case of AI, things are moving very fast. Each subsequent model becomes more and more powerful by sort of a number of degrees. And with the shift in AI towards being able to understand and create, we seem to be at an inflection point in terms of what it can do. And given the degree of improvement in each iteration from here, I'd imagine that we've sort of crossed the Rubicon as it were.
It also seems to me that the path to monetization here may be shorter than in previous tech sea changes because, as Dev already mentioned, we're seeing the initial use cases as being additive to technology that's already available in the marketplace.
So to me, adoption and monetization are likely to be relatively fast. And that's not to say that fascinating and exciting new use cases aren't coming. They definitely are, but the fact of the matter is that there's a large amount of productivity that can be released by making current, already powerful technology tools more intuitive and useful to the layperson.
As an example, the average person using PowerPoint or Excel or Bloomberg is really just scratching the surface in terms of what these tools can actually do, because they're hard to use. Generative AI makes them simpler to use. So to me there's a lot of low-hanging fruit that can be harvested pretty quickly. So it's been evolving for some time, but it's at the point where it probably does become revolutionary, most certainly in what it's going to be able to do well into the future, but importantly, in what it may be able to do now.
Shari Friedman: So let's unpack both the opportunities and the challenges right now. Archie, the biggest concern most people have is jobs and how AI is going to affect the labor market, both in the U.S. and globally. So what do you see as the short-term and long-term implications?
Archie Foster: First of all, there is a whole lot of hyperbole out there that I think we need to cut through a little bit. There are headlines screaming that AI is going to cause 300 million job losses and so on. I think that's a very simplistic way of looking at things, and it also implies that we'll see mass job losses immediately, which I don't believe.
We've got a significant demographic issue in many areas of the world as the population ages. So we have to think about AI and what it can do in the context of this. I'd say that because of that, we're going to need some help in order to keep output going at the level that we need it. And that's on a global basis.
Labor scarcity is a real issue, and I'm not sure it's just a cyclical or COVID phenomenon. So AI will help in this way. Also, all types of automation, including AI, are going to be needed to pick up the slack as the population ages. So in other words, we're going to need scarce labor resources doing more productive jobs. Second, technology has been "taking jobs" since the start of the First Industrial Revolution. Ask yourself, where did all the telephone operators go? Ask yourself where the painters and welders on the auto floor went. There are lots of examples of this. So this is a continuation of something that's been going on for a long time. And guess what? Job growth has continued through all of this. It's just that we've moved to different types of jobs.
So when you're looking at AI and you're looking at jobs, realize that we're not operating in a vacuum. There are lots of other things going on here. The difference this time is that this particular technology has the propensity to disrupt parts of the knowledge economy. It's not just manufacturing. And this I think has caught the attention of some people, as it does seem like a bit of a shift. But even here I note that it's not unprecedented. The PC and internet ages ushered in a heck of a lot of change in white collar areas as well. So for instance, compare what the floor of the NYSE looked like 30 or 40 years ago versus what it looks like now; it's very different, and there are fewer people there. In my view, we're entering into another leg of a continued industrial revolution, which is going to be marked by collaboration between humans and machines. This will include industrial automation, robotics, and AI.
So for the foreseeable future, I would envision much of AI being used collaboratively, increasing the productive output of knowledge workers versus displacing them. There's a debate right now about whether AI is going to lead to companies doing the same amount of work with fewer people or doing more work with the same number of people. And logically, I kind of come down on the latter.
That's not to say that there is not going to be displacement or disruption in parts of the labor force. There will be. There are going to be jobs that don't exist anymore. I'd imagine that jobs performing basic research and data analysis functions will be in jeopardy. Things like credit analysis, insurance claims, media planning, customer onboarding, back office, and even things like textbook writing and call center work, that's where you're going to see some disruption, absolutely. But I don't think it's a zero-sum game.
Dev Saxena: One important nuance to add to this discussion is that, particularly in the early phases of the deployment of a new technology, we won't see individual jobs being lost immediately. Where we will initially see the augmentation is around tasks: the modification of tasks as people start adopting new technologies like generative AI into their day-to-day workflows. As time passes and we understand the implications of that, or how workflows change, then you'll start seeing the nature of jobs change.
So we're kind of in the early days of this. And as these products get deployed, we'll start understanding where the initial use cases are, which tasks are being impacted, which jobs have the most tasks that are impacted, and how that can impact wages and the value of particular jobs. And that'll be a function of where companies are trying their best to create use cases.
As the technology becomes more ubiquitous and there's more innovation in this space and it starts flowing across the economy more broadly, then we'll understand the broader technological impact on other sectors.
The other piece I think that's really important is this has been an area that's been a challenge for policymakers to solve in many other cases. So if we think about other major disruptive factors to economies, you can use the analogy of globalization as a major macro trend, which is a confluence of a number of different policies coming together both on the technological innovation side and on the trade side.
You could also talk about climate, as people talk about climate adjustment for particular communities and jobs. But it'll be a challenge for policymakers to successfully, and in a targeted way, transition people whose jobs are impacted by this technology so that they develop a skillset that can accommodate new technologies. I mean, that is the challenge that policymakers face. They've done it with varying degrees of success in different geographies in the past, but I think it's an area that deserves more focus, in trying to bring new ideas to the table on how we can properly transition some of the workers who are most impacted by these technologies.
Shari Friedman: Yeah. The economy is dynamic, and I think it's very difficult to predict how that's going to play out in a very dynamic economy. And Dev, you had mentioned that this is similar to other sectors. Back when I used to do macroeconomic modeling around climate change, it was very difficult to understand the impact of reducing carbon, because you could only predict the impact on the sectors that we could see; you couldn't tell what the new sectors were going to be, or where the balloon would shift. As you squeezed one side of the balloon, where was that going to go?
Dev, you spend your days talking with companies across the economy: financial institutions, corporations. I'm curious to know, what do you see as the overall impact on different pieces of the private sector? What do you say to corporations, or for that matter, any large organization, about what the technology changes will mean for them?
Dev Saxena: Yes, this is a question that we're talking to our clients about almost on a daily basis. And so what we're seeing, from an opportunities perspective, is that a smaller subset of companies are eager, they're willing to be first adopters, and they're trying to understand how they can deploy generative AI into their workflows or into their customer-facing applications. Right now, the primary form of this technology being deployed is through chatbots. And so that, to some degree, limits the extent to which this technology is initially being deployed. That'll change as we come up with more interesting and innovative use cases. But at the same time, there are a number of risks that companies are assessing which are to some degree preventing, or at least slowing, the adoption of this technology.
At the company-specific level, we break those risks down into three buckets. There are operational risks that haven't been worked out yet, and some of those risks are associated with the technology being prone to making errors. Some are using the term "hallucinating." But effectively, when you use these technologies, sometimes they can come up with completely inaccurate and wrong responses from the chatbot. And so to the degree that companies are open to operationalizing this technology, they also need to figure out a way to do due diligence on it and to make sure that the tasks incorporating this technology still have a quality control function associated with them. And so that's kind of being worked out right now.
There's also a set of court cases working their way through U.S. courts right now, so there are some legal questions or legal risks that need to be worked out a bit further. At the moment, the legal questions are around intellectual property and copyright: because these technologies are ingesting and using tremendous amounts of data, there's a bit of debate around the rights to that data and the degree of compensation that some of the original data creators or content creators are entitled to. And so that's being worked out in the courts. At the moment, that burden is falling on the developers of this technology, but increasingly it's possible that some of the deployers of the technology, even if you're using an API to deploy a model that was developed by another company, could potentially be liable as well. So some of our clients are kind of waiting to see how that will play out.
And then lastly, there are some reputational risks associated with using the technology in this current phase, until it evolves a little bit further. So sometimes the outputs of these chatbots can be quite toxic, sexist, racist. There have been a number of news stories about particular negative outputs.
And so in situations where chatbots powered by generative AI and large language models are being put directly in front of customers, but under the white-label branding of enterprises, there certainly is a risk element. And so again, I think we can expect that companies that are developing this technology will be putting more of a focus on quality control to try to address some of these risks.
But in the meantime, it's kind of incumbent on companies that are willing to adopt it to also start applying risk management frameworks. And so there are some well-thought-through risk management approaches that companies should be looking at, to understand and start operationalizing and building out governance and quality-control-type policies internally with respect to AI, whether they're developing the technology or just adopting it.
Shari Friedman: Archie, Dev just outlined pretty comprehensively a series of different risks that companies need to be looking at. And then you take those risks and figure out what shifts need to happen. So what are the big opportunities that you are seeing economically? What sectors stand to gain the most, and what does that look like?
Archie Foster: The productive capacity of the economy at large is going to benefit. There are lots of numbers getting thrown around that we have to take with a grain of salt, but directionally they're all pointing in one direction: talk of the potential for a one-and-a-half-percentage-point increase in productivity growth over the next 10 years, which seems reasonable. So the benefits are expected to be pretty far-reaching eventually as this develops.
Immediately, the initial beneficiaries are those that build and train the models as well as companies providing the infrastructure and hardware used to build and train them. To take it a bit deeper in terms of building and training models, I see three sort of barriers to entry right now, which are cost, compute power, and data. Building and evolving an AI model is extremely expensive. It requires immense amounts of high-quality data as well as immense amounts of computing power.
So while I'd like to say I've discovered some sort of hidden gem in the public markets that is sort of the next big thing in AI, I haven't. In other words, at the moment, pure AI opportunities are really not there, at least in the public markets. I'm sure that in the private markets they're there and they're lurking. But for now, the ones with the deepest pockets, the ones with the access to the most data, the best data, access to the compute needed are the big tech platform companies. And they've also been working on AI for quite a long time.
Beyond the platform players, select companies that provide compute power and infrastructure to drive growth in AI are also likely to benefit. So here we're thinking about advanced semiconductors and cloud infrastructure providers. On the application layer, there will be a wide range of opportunity here, using generative AI and deep learning to enhance all sorts of processes, all the way from grading exams on one end to writing code and developing complex medicines on the other. But as we sort of noted, I think initially we're going to see the biggest boost from companies that can integrate AI into current products, increasing the use and/or quality of output and providing immediate productivity enhancement.
We've mentioned things before like data analysis software, marketing, search. I think search is interesting because what this might do is really allow the promise of voice-enabled search and voice assistants to actually come to the fore here a little bit, and they should improve vastly, as should things like software development. So those are some areas. Finally, an area that I think is going to benefit greatly, and this is unfortunate, is cybersecurity. AI is likely to increase the ability of bad actors that may not be particularly tech-savvy to write malicious code or generate more sophisticated phishing schemes and the like. So both the number of instances of cybercrime and the sophistication of those instances are probably going to increase. So I'd imagine that the reliance on better and better cybersecurity is likely to increase as well.
Shari Friedman: So Dev, that Time magazine article that I cited in the intro referred to an AI arms race. It's long been believed that the U.S. is way ahead of China on AI development. Is that still the case? And what does the geopolitical competition entail right now?
Dev Saxena: AI is certainly a technology of focus for both countries, and it fits into the broader strategic competition, in part because we know that AI can have a tremendous number of applications on the economic side and will lead to a lot of wealth creation and value creation. But as a dual-use technology, it can also be applied to improve and optimize a number of national security, defense, and intelligence capabilities. And so for that reason, both China and the U.S. have highlighted AI as a technology of focus, and both countries are also allocating a large amount of resources to the development of the technology, from a skills perspective, from an R&D perspective, and from a procurement and adoption perspective as well.
I think it's really important for us to remember that there are a number of different varieties of AI application. Generative AI is one subcategory; machine learning and computer vision are others. There are a number of subcategories which can then be applied and adopted into commercial uses or security uses in different ways. And so I think from our perspective, it's actually quite competitive. There are areas where China is advanced, and there are areas where the U.S. is advanced. So I think that both countries will continue to focus on this technology, and to the degree that they can create edges for themselves, they certainly will. We know that they've outlined this in various policies.
A really well-known Chinese policy is Made in China 2025, which outlined a desire to be a global leader in AI. Subsequently, the U.S. established the National Security Commission on AI, which made a number of recommendations that were adopted both from a fiscal perspective and from a policy perspective. And more recently, there's been a pretty big push by the Biden administration, both from an offensive perspective, in terms of developing domestic capabilities through industrial policy, so we think about the CHIPS Act, as well as some very strategic, targeted measures intended to limit Chinese access to some of the key ingredients required to develop AI at large scale. And so we saw export controls on particular semiconductors and on some of the key supply chain components that are required to develop chips domestically.
So we can anticipate that this will be a space where there'll be more investment from both countries, and also more competitive gamesmanship and tactics to try to strategically constrain development going forward. But I think this is something we'll certainly see continue in the short and medium term.
Shari Friedman: Important both security-wise and economically. So there'll be that competition continuing. Archie, it's fair to say that the regulatory environment has not, in most cases, with perhaps the EU being an exception, caught up to the realities of this new technology. How do you see regulation affecting the sector?
Archie Foster: In terms of AI oversight, we're really only barely getting started. And I think it's a bit ironic that the industry itself seems to be pushing the narrative. And it's not just those that might need to sort of catch up. As you might know, the CEO of OpenAI and the chief privacy officer of IBM were just in front of Congress, basically pushing for oversight in some of their answers. And there's a reason for that. There are serious potential issues to contend with, like disinformation, deepfakes, potential political bias and other biases in models, intellectual property theft, cybercrime, et cetera.
The question is, what does regulation actually look like? You don't want to regulate to the point of choking off what could be a very important, game-changing technology before it gets started, but there's also enough risk that something has to be done. The good news is AI is still relatively early, so you have a little time, but the bad news is it's moving fast and it's likely to be pretty far-reaching. And it's also complex. So there's a large learning curve that needs to be navigated by lawmakers, but you just sort of hope that it doesn't take a bad issue to get everybody moving with a proper sense of urgency.
That being said, there are some common sense rules that are being talked about. Mostly they're about disclosure and transparency regarding both the output of generative AI and the inputs and training. Basically, the thought is that anything created or altered by AI should be sort of forced to be disclosed, so the end user knows, for instance, that an image is not real. I'd say that's relatively important.
So you're drawing a line there between what's human generated and what is machine generated. Additionally, models need some sort of oversight or audit function. So some sort of disclosure and oversight needs to be done as to what the inputs to the models are, to try and identify biases. So there are common sense solutions being talked about, but it really needs to move into another gear as far as I'm concerned. And I'd also imagine that this is really just the tip of the iceberg. As AI becomes more pervasive and powerful, regulations and safeguards are going to have to evolve, which again is a risk. I think there's a path to reasonable regulation that can provide evolving guardrails that won't hamstring innovation. But the risk is that if we wait too long and bad outcomes occur, that pushes rushed regulation that's not particularly well thought out. So it's early days and it could be a big issue. At the moment, as I said, it's barely getting started, but it really has to.
Shari Friedman: I mean, what I'm hearing you say is that it is at least technically possible, which is where, I think, opinions differ. Some people say, "Look, this is an unregulatable space."
Archie Foster: Yeah, I've used the low-hanging fruit analogy. And there is some low-hanging fruit. I think some of these things are common sense. I think we're in early days and we can tackle some of these things. The question I have, or the challenge, is going to be, as it becomes more pervasive and can do more and more things, will regulation keep up?
Shari Friedman: Dev, taking this to a specific example, 2024 is a massive year for elections globally, including in the U.S. as we head into another presidential cycle. How big a concern is generative AI in terms of being a “weapon of mass disruption” as Eurasia Group has called it this year? And do you think it's possible to have regulations or anything that can curb this?
Dev Saxena: We do see the 2024 elections, both in the U.S. and in a number of major democracies globally, as a signpost for the impact of generative AI in terms of destabilizing democracies and potentially having impacts on geopolitics. And so we had kind of signaled this risk earlier in the year, and our hypothesis was that the U.S., which has principally been an exporter of democracy historically, and whose technology has generally been a liberalizing force around the world from a globalization perspective, is now a primary exporter of some of the technology tools which are undermining democracies. Whether or not that's intentional, it's certainly been a consequence if you think about some of the impact of social media platforms and deepfakes which leverage AI.
So we think it's very likely that generative AI will accelerate that trend, in the sense that it provides low-cost, high-quality tools to some of the bad actors that Archie mentioned. And so it'll likely accelerate disinformation and cybercrime. In the U.S. presidential elections, we've already seen a couple of examples of generative AI-powered images and videos being used in pretty sensitive political discussions, getting picked up on social media, and going viral. This will very likely also be the case during the election timeframe, both in the U.S. and in other major democracies: India, Indonesia, Taiwan has an election, and in the EU there's a parliamentary election coming up. We may also have parliamentary elections in Canada and the UK. So countries are already kind of struggling with how to police or regulate or mitigate these fundamental risks to democratic processes.
And unfortunately, this technology both powers that risk, in that it replicates or creates videos and images that sound very authoritative or look photorealistic and can be viewed as accurate, and makes it quite ubiquitous: it's quite easy to get access to these tools. In some cases, you just have to pay a pretty low cost, or they're free. These types of technologies will scale this year and be well distributed in the years when these elections are coming up. There have not really been thoughtful or particularly effective processes that countries can use to try to mitigate that risk. So it'll be quite challenging. With respect to your question on the regulatory front, it's not that these technologies are unregulated, because right now they actually are being regulated. It's a question of how effective the regulation is. And so just in the same way that there's been a tremendous uptick in the development and deployment of these technologies in the last couple of months and weeks, we've also seen an uptick in policymakers' and regulators' appetite for signaling some of their direction. There are a couple of different models that we're seeing that I think are important for some of our listeners to understand.
In the European Union, they're actually quite advanced in their development of legislation related to artificial intelligence. In that case, it's a comprehensive piece of legislation that focuses on particular use cases of the technology. The challenge that they've faced, which is kind of perennial for all lawmakers, is that the technology sector moved so quickly that they were caught in a place where, when generative AI went viral, it wasn't particularly framed in the existing legislation. And so they've had to quickly try to figure out how to fold it in. But we can anticipate that the EU legislation will be finalized this year, or certainly next year.
Canada's also in the middle of putting through some comprehensive legislation on AI. And in the U.S., we've seen some interesting activity over the last couple of weeks. U.S. regulators, for example at the FTC and the Department of Justice, have actually been very proactive recently in trying to point out that existing laws can be used to mitigate some of the harms associated with AI. So think about what the DOJ can do on discrimination, what the FTC can do on deception, and what some other regulators, like the CFPB on credit lending, can do about AI being used in credit and AI being used in hiring. So they've been quite keen to signal to the markets that there is a police presence on the beat. How effective that is, we'll see when they bring forward cases. And then simultaneously, U.S. lawmakers in Congress have more recently been putting forward different frameworks. For example, Senate Majority Leader Schumer has foreshadowed that he intends to bring something forward.
Our view on this at EG is that there's currently a very low likelihood of federal legislation at the U.S. level, just as a function of congressional gridlock. But it'll be really interesting to see how domestic policymakers, particularly in the U.S., try to frame this issue, because historically, AI risk mitigation and AI regulation were done in a very technical manner by organizations like NIST and standards development organizations like ISO and IEC, or in a more normative, principles-based way at the OECD. So there's been a bit of a disconnect from this very technical or high-minded approach, and now we're trying to get to where the rubber hits the road in terms of practically putting out regulation that'll really impact innovation and impact firms day-to-day. We're going to have to figure out how to do that at the same time as this technology is being deployed and becoming quite ubiquitous.
Shari Friedman: Archie, what's your best advice for investors who are wondering what AI means for their portfolios and where the possibilities of peril might be in the coming years?
Archie Foster: First, as I mentioned earlier, the hype machine is gearing up and you need to make sure you understand that. I see increasing headlines about the coming sort of robot apocalypse, but I don't see many about AI sort of, I don't know, helping to cure cancer or something like that. So I'd suggest taking sort of a level-headed approach to these when you see them. And that's not necessarily on a portfolio level, just on a day-to-day level.
Second, as with any exciting new technology, while there will be some wonderful new companies born from it, there will also be a number of companies with less than fully baked models that are still able to attract funding. So my suggestion is to be thorough as opportunities to invest arise, and scratch beneath the surface as you're doing the due diligence, because what might be on the theme might not be what's in the product. I've got a saying that I use with my thematic team at Citi Investment Management, which is "great theme, bad stock." And in my mind, AI is a great theme and it will be for some time, but there will be both very good and very bad stocks tied to it. So go into investing in the theme with your eyes wide open and realize that it's kind of early days at this point.
Finally, you need to look at generative AI from both the opportunity side and the risk side for your current holdings. The winners, at least in the early days, as we sort of mentioned, are likely going to be those who are able to embed AI into their current products, which will drive usage and margins as companies upsell people to better versions of their products. The losers are those that either lag behind on integration and lose share, or whose models are based largely on areas that are likely to be disrupted, which is just what we've mentioned earlier. It's very real. It's likely to move pretty fast, and investors need to be thinking about the companies in their portfolios now and figure out what side of the opportunity they fall on.
Shari Friedman: I love that idea: "great theme, bad stock." You can pretty much apply that to any new trend.
Dev Saxena: I love that, Archie. If you had told me about that earlier, you would've saved me a lot of money because I wouldn't have bothered in the first place.
Shari Friedman: Well, we're going to have to wrap this up. Archie Foster, Managing Director and Head of Thematic Equities at Citi Investment Management, and Dev Saxena, Director of Geopolitics and Technology at Eurasia Group, thanks to you both.
Archie Foster: Absolutely. Thank you for having me.
Dev Saxena: Thanks, Shari. Great to be here. Great conversation.
Shari Friedman: And that's it for this episode of Living Beyond Borders. Listen to all of the season's episodes by heading to gzeromedia.com and clicking on the Living Beyond Borders tab. Or you can find episodes in the GZERO World Podcast feed wherever you get your podcasts. For GZERO, I'm Shari Friedman. Thanks for listening.