How OpenAI CEO Sam Altman became the most influential voice in tech
OpenAI CEO Sam Altman has become the poster child for AI, but it's difficult to understand his motivations.
Artificial intelligence was a major buzzword at the World Economic Forum in Davos this year, and OpenAI CEO Sam Altman was the hottest ticket in town. CEOs and business leaders crowded into sold-out conference halls to hear his take on the current explosion in generative AI and where the technology is headed.
On GZERO World, Ian Bremmer sat down with AI expert and author Azeem Azhar and asked why everyone, both at Davos and in the tech community as a whole, seems to be pinning their hopes and fears about the future of AI on Altman. Azhar says that there are actually a lot of similarities between the individual and the technology he works on.
“I like to think of [Altman] as someone who has been fine-tuned to absolute perfection,” Azhar explains, “And fine-tuning is what you do to an AI model to get it to go from spewing out garbage to being as wonderful as ChatGPT is. And I think Sam’s gone through the same process.”
Watch full episode: How AI is changing the world of work
Catch GZERO World with Ian Bremmer on US public television every week. Check local listings.
How AI is changing the world of work
The AI revolution is coming… fast. But what does that mean for your job? GZERO World with Ian Bremmer takes a deep dive into this exciting and anxiety-inducing new era of generative artificial intelligence. Generative AI tools like ChatGPT and Midjourney have the potential to increase productivity and prosperity massively, but there are also fears of job replacement and unequal access to technology.
Ian Bremmer sat down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland to hear how CEOs are already incorporating AI into their businesses, what the future of work might look like as AI tools become more advanced, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
“One of the dangers of last year was that people started to lose their faith in technology, and technology is what provides prosperity,” Azhar says. “We need to have more grownup conversations, more civil conversations, more moderate conversations about what that reality is.”
Catch GZERO World with Ian Bremmer on US public television every week. Check local listings.
AI and the future of work: Experts Azeem Azhar and Adam Grant weigh in
Listen: What does this new era of generative artificial intelligence mean for the future of work? On the GZERO World Podcast, Ian Bremmer sits down with tech expert Azeem Azhar and organizational psychologist Adam Grant on the sidelines of the World Economic Forum in Davos, Switzerland, to learn more about how this exciting and anxiety-inducing technology is already changing our lives, what comes next, and what the experts are still getting wrong about the most powerful technology to hit the workforce since the personal computer.
The rapid advances in generative AI tools like ChatGPT, which has only been public for a little over a year, are stirring up excitement and deep anxieties about how we work and whether we work. Artificial intelligence can potentially increase productivity and prosperity massively, but there are fears of job replacement and unequal access to technology. Will AI be the productivity booster CEOs hope for, or the job killer employees fear?
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
Grown-up AI conversations are finally happening, says expert Azeem Azhar
“The thing that’s surprised me most is how well CEOs are [now] articulating generative AI, this technology that’s only been public for a year or so,” Azhar says. “I’ve never experienced that in my life and didn’t realize how quickly they’ve moved.”
Azhar and Bremmer also discuss the underlying technology that’s allowed generative AI tools like ChatGPT to advance so quickly and where conversations about applications of artificial intelligence go from here. Whereas a year ago, experts were focused on the macro implications of existential risk, Azhar is excited this year to hear people focus on practical things like copyright and regulation—the small yet impactful things that move the economy and change how we live our lives.
Catch Azeem Azhar's full conversation with Ian Bremmer in next week's episode of GZERO World on US public television. Check local listings.
Azeem Azhar explores the future of AI
AI was all the rage at Davos this year – and for good reason. As we’ve covered each week in our weekly GZERO AI newsletter, artificial intelligence is impacting everything from regulatory debates and legal norms to climate change, disinformation, and identity theft. GZERO Media caught up with Azeem Azhar, founder of Exponential View, an author and analyst, and a GZERO AI guest columnist, for his insights on the many issues facing the industry.
GZERO: Whether The New York Times’ lawsuit against OpenAI on copyright grounds is settled, or found for or against OpenAI, do you think large language models are less feasible in the long term?
Azeem Azhar: Copyright has always been a compromise. The compromise has been between how many rights should be afforded to creators, and ultimately, of course, what that really means is the big publishers who accumulate them and have the legal teams.
And harm is being done to research, free exchange of knowledge, cultural expression by creating these enclosures around our intellectual space. This compromise, which worked reasonably well perhaps 100 years ago doesn't really work that well right now.
And now we have to say, “Well, we've got this new technology that could provide incredibly wide human welfare and when copyright was first imagined, those were not the fundamental axioms of the world.”
GZERO: Can you give me an example of something that could be attained by reforming copyright laws?
Azhar: Take Zambia. Zambia doesn't have very many doctors per capita. And because they don't have many doctors, they can't train many doctors. So you could imagine a situation where you can have widespread personalized AI tutoring to improve primary, secondary, and tertiary educational outcomes for billions of people.
And those will use large language models dependent on a vast variety of material that will fall under the sort of traditional frame of copyright.
GZERO: AI is great at finding places to be more efficient. Do you think there's a future in which AI is used to decrease the world's net per capita energy consumption?
Azhar: No, we won't decrease energy consumption because energy is health and energy is prosperity and energy is welfare. Over the next 30 years, energy use will grow higher and at a higher rate than it has over the last 30, and at the same time, we will entirely decarbonize our economy.
Effectively, you cannot find any countries that don't use lots of energy that you would want to live in and that are safe and have good human outcomes.
But how can AI help? Well, look at an example from DeepMind. DeepMind released this thing called GNoME at the end of last year, which helps identify thermodynamically stable materials.
And DeepMind’s system delivered 60 years of stable producible materials with their physical properties in just one shot. Now that's really important because a lot of the climate transition and the materiality question is about how we produce all the stuff for your iPods and your door frames and your water pipes in ways that are thermodynamically more efficient, and that's going to require new materials and so AI can absolutely help us do that.
GZERO: In 2024, we are facing over four dozen national-level elections in a completely changed disinformation environment. Are you more bullish or bearish on how governments might handle the challenge of AI-driven disinformation?
Azhar: It does take time for bad actors to actually make use of these technologies, so I don't think that deepfake video will play a significant role this year, because it's just a little bit too soon.
But distribution of disinformation, particularly through social media, matters a great deal and so too do the capacities and the behaviors of the media entities and the political class.
If you remember in Gaza, there was an explosion at a hospital, and one of the newswires reported immediately that 500 people had been killed and they reported this within a few minutes. There's no way that within a few minutes one can count 500 bodies. But other organizations then picked it up, who are normally quite reputable.
That wasn't AI-driven disinformation. The trouble is the lie travels halfway around the world before the truth gets its trousers on. Do media companies need to put up a verification unit as the goalkeeper? Or do you put the idea of defending the truth and veracity and factuality throughout the culture of the organization?
GZERO: You made me think of an app that's become very popular in Taiwan over the last few months called Auntie Meiyu, which allows you to take a big group chat, maybe a family chat for example, and then you add Auntie Meiyu as a chatbot. And when Grandpa sends some crazy article, Auntie Meiyu jumps in and says, “Hey, this is BS and here’s why.”
She’s not preventing you from reading it. She's just giving you some additional information, and it's coming from a third party, so no family member has to take the blame for making Grandpa feel foolish.
Azhar: That is absolutely brilliant because, when you look back at the data from the US 2016 election, it wasn't the Instagram, TikTok, YouTube teens who were likely to be core spreaders of political misinformation. It was the over-60s, and I can testify to that with some of my experience with my extended family as well.
GZERO: As individuals are thinking about risks that AI might pose to them – elderly relatives being scammed or someone generating fake nude images of real people – is there anything an individual can do to protect themselves from some of the risks that AI might pose to their reputation or their finances?
Azhar: Wow, that's a really hard question. Have really nice friends.
I am much more careful now than I was five years ago and I'm still vulnerable. When I have to make transactions and payments I will always verify by doing my own outbound call to a number that I can verify through a couple of other sources.
I very rarely click on links that are sent to me. I try to double-check when things come in, but this is, to be honest, just classic infosec hygiene that everyone should have.
With my elderly relatives, the general rule is you don't do anything with your bank account ever unless you've got one of your kids with you. Because we’ve found ourselves, all of us, in the digital equivalent of that Daniel Day-Lewis film “Gangs of New York,” where there are a lot of hoodlums running around.
GZERO AI launches October 31st
There is no more disruptive or more remarkable technology than AI, but let’s face it, it is incredibly hard to keep up with the latest developments. Even more importantly, it’s almost impossible to understand what the latest AI innovations actually mean. How will AI affect your job? What do you need to know? Who will regulate it? How will it disrupt work, the economy, politics, war?
That's where our new weekly GZERO AI newsletter comes in to help. GZERO AI will give you the first key insights you need to know, putting perspective on the hype and context on the AI doomers and dreamers. Featuring the world-class analysis that is the hallmark of GZERO and its founder, Ian Bremmer, himself a leading voice in the AI space, GZERO AI is the essential weekly read of the AI revolution.
Our goal is to deliver understanding as well as news, to turn information into perspective and data into insights. GZERO AI will feature some of the world’s most important voices on technology, such as our weekly data columnist Azeem Azhar, and our video columnists Marietje Schaake and Taylor Owen. GZERO AI is your essential tool to understanding the technology that...is understanding you!
Sign up now for GZERO AI (along with GZERO's other newsletters).
You say you want AI revolution?
A year after the launch of ChatGPT, who are the winners and losers, and what's next? Our new columnist Azeem Azhar, founder of Exponential View, and an author and analyst, weighs in.
It’s hard to believe it’s been less than a year since ChatGPT was unveiled by Sam Altman, the boss of OpenAI. Far from the razzmatazz that normally accompanies Silicon Valley launches, Altman posted an innocuous tweet. And the initial responses could be characterized as bemused delight at seeing a new trinket.
But looking back, we can see that ChatGPT was about to unleash a tidal wave of chaos, not merely on the technology industry but the world at large. That chaos has seen the world’s largest technology firms swing their supertankers volte-face.
The industry thrives off having a new technology platform: Crypto is a bust, and the metaverse is still a pipe dream. But today’s AI — the large language models that operate as the brains in ChatGPT — seems like the real deal.
Precarious presumptions
Many of the Big Tech firms, like Alphabet, which initially developed the transformer technologies that underpin the large language model — along with Amazon, Meta, and Apple — underestimated the impact ChatGPT would have. They have since feverishly chased the generative AI train: Alphabet reorganized all its AI talent under Demis Hassabis and rushed out new products, such as Bard; Meta publicly released an impressive range of open-source AI models; Amazon invested $4 billion in OpenAI's competitor, Anthropic; and Apple is readying its own generative tools. Microsoft, meanwhile, had its ducks in a row. The company had built an important strategic deal with OpenAI, brokered by Reid Hoffman, a much-respected Silicon Valley investor who at the time sat on the board of both firms.
Looking for perspective on AI beyond the hype? Subscribe to our free GZERO AI newsletter, the essential weekly read of the AI revolution.
For years, the received wisdom about artificial intelligence was that it would automate many types of white-collar tasks, starting with routine desk work. The research and market forecasts suggested that those of us doing nonroutine cognitive work — lawyers, strategy consultants, policy wonks, readers like you — perform tasks that are too complex for early AI systems. Rather it would be methodical desk work, such as data entry, document review, and customer service, that would be the easiest to automate.
A very different reality
A new study from Harvard Business School and Boston Consulting Group, the white-shoe consultants, ixnayed that assumption. They tested nearly 800 consultants, likely graduates of the world’s most selective schools, on typical strategy consulting tasks. Half the group had help from ChatGPT, and the other half worked on their own. The results were stunning. On average, the consultants using ChatGPT completed their work 25.1% faster. And the bottom half of consultants saw the quality of their output increase by 43%, taking their average performance to well above that of the unaided consultants.
This result — matched by other research — throws received wisdom out the window. Even nonroutine work can benefit from AI. And we're not talking about highly advanced AI but rather garden-variety AI people can access on their phones. As a result, employees will be enticed by the productivity gains of ChatGPT into ignoring corporate security policies. The personal win — better quality work, more free time — will be too great for workers. Employers will struggle to rein in this behavior, exposing their firms to new potential liabilities.
The road to standardization
At the same time, powerful general technologies do not necessarily work in favor of the employee, as bosses are tempted to substitute capital (machines) for labor (people). Historically, general-purpose technologies have become the sites of political contestation: Think of workers protesting power looms and assembly lines. The dispute is not about the technologies themselves but rather how the gains from the technology are split. It is a fight over power.
The recent screenwriters' strike in Hollywood is just such a battle. In a sense, it is less about the technology and more about the terms on which it is introduced. Similar fights will erupt in different industries and countries in the coming years until new norms emerge. Several artists and writers have filed lawsuits against OpenAI for training its systems on their creative works.
During the Industrial Revolution in 18th- and 19th-century England, the process of normalizing standards took several decades. The workers’ plight worsened as the gains from automation went to shareholders, giving rise to the heart-rending stories Charles Dickens told. It was likely the success of labor movements that helped wages catch up.
And the tension with workers will be only one fault line. Governments are critical to the process of developing standards and norms, and yet their record of dealing with the impact of technologies in recent decades has been poor. Once the internet went mainstream in the late 1990s, catalyzed by the Clinton-Gore administration, successive American and European governments did little to advance the institutional or regulatory reform this expanded industry needed.
After 9/11, the US government became overly enamored with the surveillance capabilities afforded by the internet and the soft power big American tech firms offered. Washington did little to address the anti-competitive and politically polarizing side effects that allowed tech to morph into Big Tech.
Even late last year, governments were moving en masse but slowly to confront these questions. ChatGPT woke everyone up. Whether in China, the US, the EU, or the UK, figuring out what the institutional guardrails around AI should be has become a belated priority. In the UK, Rishi Sunak is making a late play for global leadership by hosting, this week, an AI Safety Summit with a view toward building a scientifically robust international agency, like the IPCC, to help evaluate, identify, and manage the most worrisome risks posed by AI.
The UN’s António Guterres has announced his own AI advisory body, which may help the Global South develop a voice in shaping the beneficial deployment of AI.
Even perfectly designed, which nothing can be, a general-purpose technology will force changes to the rules and behaviors in a society. As I write in my book, the accelerating pace of change means we have a smaller window than normal to turn this chaos into some semblance of order. And that order will require effective national and multilateral governance and institutions that support them. No one quite knows, nor will we know for a while, what “effective” means in this context. Acting too quickly raises more risks: rash regulation, a paucity of deliberation, and, most likely, the exclusion of groups lacking the resources to mount effective lobbying.
If the first year of ChatGPT’s launch was marked by chaos, I doubt, given the accelerating pace of technology, the next year will have less turmoil. But it may, at least, be accompanied by a wider consensus endeavoring to erect some scaffolding from which effective governance, leading to more equitable prosperity, might emerge in the coming years.
The transformative potential of artificial intelligence
Microsoft reportedly plans to invest $10 billion in OpenAI, the artificial intelligence company famous for creating the ChatGPT bot.
Why is the software giant doing this despite the threat that AI poses to democracy? Azeem Azhar, the founder of the Exponential View newsletter, puts the question to Microsoft President Brad Smith during a Global Stage livestream conversation hosted by GZERO in partnership with Microsoft.
First, Smith explains, Microsoft has strict compliance rules to ensure that ChatGPT doesn't do bad stuff. And that's crucial to integrate the tech into all its products so anyone can use it.
Second, he believes that generative AI — bots becoming as smart as humans — can be a tool for both creative expression and critical thinking.
Watch the full Global Stage conversation: AI at the tipping point: danger to information, promise for creativity