A breakthrough thinker on the impact of technology on society, Azeem Azhar is both a practitioner and an analyst. He is an active investor and entrepreneur in the technology field and the author and producer of the highly regarded newsletter and podcast "Exponential View." His first book, "The Exponential Age: How Accelerating Technology Is Transforming Business, Politics and Society," was met with widespread acclaim. Azeem is involved in the World Economic Forum and has advised senior executives at a number of global firms. He holds a degree in Politics, Philosophy, and Economics from Oxford University. X/Twitter: @azeem
A year after the launch of ChatGPT, who are the winners and losers, and what's next? Our new columnist Azeem Azhar, founder of Exponential View and an author and analyst, weighs in.
It’s hard to believe it’s been less than a year since ChatGPT was unveiled by Sam Altman, the boss of OpenAI. There was none of the razzmatazz that normally accompanies Silicon Valley launches; Altman simply posted an innocuous tweet. And the initial responses could be characterized as bemused delight at seeing a new trinket.
But looking back, we can see that ChatGPT was about to unleash a tidal wave of chaos, not merely on the technology industry but on the world at large. That chaos has forced the world’s largest technology firms to swing their supertankers around.
The industry thrives on having a new technology platform: Crypto is a bust, and the metaverse is still a pipe dream. But today’s AI, the large language models that operate as the brains in ChatGPT, seems like the real deal.
Precarious presumptions
Many of the Big Tech firms underestimated the impact ChatGPT would have, among them Alphabet, which originally developed the transformer technology that underpins large language models, along with Amazon, Meta, and Apple. They have since scrambled to catch the generative AI train: Alphabet reorganized all its AI talent under Demis Hassabis and rushed out new products, such as Bard; Meta publicly released an impressive range of open-source AI models; Amazon invested $4 billion in OpenAI's competitor, Anthropic; and Apple is readying its own generative tools. Microsoft, meanwhile, had its ducks in a row. The company had struck an important strategic deal with OpenAI, brokered by Reid Hoffman, a much-respected Silicon Valley investor who at the time sat on the boards of both firms.
For years, the received wisdom about artificial intelligence was that it would automate many types of white-collar tasks, starting with routine desk work. Research and market forecasts suggested that those of us doing nonroutine cognitive work (lawyers, strategy consultants, policy wonks, readers like you) perform tasks too complex for early AI systems. Rather, it would be methodical desk work, such as data entry, document review, and customer service, that would be easiest to automate.
A very different reality
A new study from Harvard Business School and Boston Consulting Group, the white-shoe consultancy, upended that assumption. The researchers tested nearly 800 consultants, likely graduates of the world’s most selective schools, on typical strategy consulting tasks. Half the group had help from ChatGPT, and the other half worked on their own. The results were stunning. On average, the consultants using ChatGPT completed their work 25.1% faster. And the bottom half of consultants saw the quality of their output increase by 43%, taking their average performance to well above that of the unaided consultants.
This result, matched by other research, throws received wisdom out the window. Even nonroutine work can benefit from AI. And we're not talking about highly advanced AI but rather the garden-variety AI people can access on their phones. As a result, the productivity gains from using ChatGPT will entice employees to ignore corporate security policies. The personal win of better-quality work and more free time will be too great for workers to resist. Employers will struggle to rein in this behavior, which exposes their firms to new potential liabilities.
The road to standardization
At the same time, powerful general-purpose technologies do not necessarily work in favor of employees, as bosses are tempted to substitute capital (machines) for labor (people). Historically, general-purpose technologies have become sites of political contestation: Think of workers protesting power looms and assembly lines. The dispute is not about the technologies themselves but rather about how the gains from the technology are split. It is a fight over power.
The recent screenwriters' strike in Hollywood is just such a battle. In a sense, it is less about the technology and more about the terms on which it is introduced. Similar fights will erupt in different industries and countries in the coming years until new norms emerge. Several artists and writers, for instance, have filed lawsuits against OpenAI for training its systems on their creative works.
During the Industrial Revolution, normalizing such standards took several decades in 18th- and 19th-century England. The workers’ plight worsened as the gains from automation went to shareholders, giving rise to the heart-rending stories Charles Dickens told. It was likely the success of labor movements that helped wages catch up.
And the tension with workers will be only one fault line. Governments are critical to the process of developing standards and norms, yet their record of dealing with the impact of technologies in recent decades has been poor. Once the internet went mainstream in the late 1990s, catalyzed by the Clinton-Gore administration, successive American and European governments did little to advance the institutional or regulatory reform the expanding industry needed.
After 9/11, the US government became overly enamored with the surveillance capabilities afforded by the internet and the soft power big American tech firms offered. Washington did little to address the anti-competitive and politically polarizing side effects that allowed tech to morph into Big Tech.
Even late last year, governments were moving en masse but slowly to confront these questions. ChatGPT woke everyone up. Whether in China, the US, the EU, or the UK, figuring out what the institutional guardrails around AI should be has become a belated priority. In the UK, Rishi Sunak is making a late play for global leadership by hosting an AI Safety Summit this week, with a view toward building a scientifically robust international body, akin to the IPCC, to help evaluate, identify, and manage the most worrisome risks posed by AI.
The UN’s António Guterres has announced his own AI advisory body, which may help the Global South develop a voice in shaping the beneficial deployment of AI.
Even a perfectly designed general-purpose technology, and nothing can be perfectly designed, will force changes to the rules and behaviors of a society. As I write in my book, the accelerating pace of change means we have a smaller window than normal to turn this chaos into some semblance of order. And that order will require effective national and multilateral governance, and institutions that support it. No one quite knows, nor will we know for a while, what “effective” means in this context. Acting too quickly carries its own risks: rash regulation, a paucity of deliberation, and, most likely, the exclusion of groups lacking the resources to mount effective lobbying.
If the first year after ChatGPT’s launch was marked by chaos, I doubt, given the accelerating pace of technology, that the next year will bring less turmoil. But it may, at least, be accompanied by a wider consensus endeavoring to erect some scaffolding from which effective governance, leading to more equitable prosperity, might emerge in the coming years.