An explosive ChatGPT hack
A hacker was able to coerce ChatGPT into breaking its own rules — and giving out bomb-making instructions.
ChatGPT, like most AI applications, has content rules that prohibit it from engaging in certain ways: It won’t reproduce copyrighted material, generate anything sexual in nature, or create realistic images of politicians. It also shouldn’t give you instructions on how to make explosives. “I am strictly prohibited from providing any instructions, guidance, or information on creating or using bombs, explosives, or any other harmful or illegal activities,” the chatbot told GZERO.
But the hacker, pseudonymously named Amadon, was able to use what he calls social engineering techniques to jailbreak the chatbot, or bypass its guardrails and extract information about making explosives. Amadon told ChatGPT it was playing a game in a fantasy world where the platform’s content guidelines would no longer apply — and ChatGPT went along with it. “There really is no limit to what you can ask for once you get around the guardrails,” Amadon told TechCrunch. OpenAI, which makes ChatGPT, did not comment on the report.
It’s unclear whether chatbots would face liability for publishing such instructions, but they could be on the hook for publishing explicitly illegal content, such as copyrighted material or child sexual abuse material. Jailbreaking is a problem that OpenAI and other AI developers will need to stamp out by all means possible.
Hard Numbers: ChatGPTers double, Japan’s AI military, Google’s AI pop-ups, Magic money, Musk vs. Brazil
200 million: OpenAI says it now counts 200 million weekly users of ChatGPT, a figure that has doubled in the past year. It also claims that 92% of Fortune 500 companies use its products for writing, coding, and organizational help.
59 billion: Japan’s military is having a recruitment problem. With only 10,000 of its citizens enlisting this year — half of its target — the government is investing $59 billion, a 7% yearly increase, to add capabilities including artificial intelligence. It’s spending $123 million on an AI surveillance system for its military bases alone.
17: A new report from the consultancy Authoritas found that Google is offering its AI Overviews — those pop-up AI-generated answers to users’ Googled questions — on 17% of user queries. The search engine company came under fire for its inaccurate AI-generated responses earlier this year and since then has reportedly reduced the frequency with which its suggested answers pop up.
320 million: The startup Magic, whose AI models generate computer code and automate software, raised $320 million in a funding round from former Google CEO Eric Schmidt, among others. The San Francisco-based firm also announced a partnership with Google to build two new supercomputers on the tech giant’s cloud platform.
24: X is now shut down in Brazil, the latest escalation in a legal dispute between the company’s owner, Elon Musk, and the country’s top court. Musk has criticized Brazil for requesting that the company remove certain accounts. Supreme Court Justice Alexandre de Moraes on Friday gave Musk 24 hours to name a legal representative in the country or else face a national ban. Musk refused and, in response, posted an AI-generated image of de Moraes behind bars, writing, “One day, @Alexandre, this picture of you in prison will be real. Mark my words.”
OpenAI’s getting richer
OpenAI is in talks for a new funding round that could value the company at over $100 billion. That would cement it as the fourth-most-valuable privately held company in the world, behind only ByteDance ($220 billion), Ant Group ($150 billion), and SpaceX ($125 billion).
Thrive Capital is leading the venture round, but Microsoft is expected to add to its existing $13 billion stake in the company. Apple and Nvidia are also discussing investing in the ChatGPT maker. Nvidia supplies the chips that OpenAI uses to train and run its models, while Apple is integrating ChatGPT into its forthcoming Apple Intelligence system, which will feature on new iPhones.
OpenAI was last valued at around $80 billion in 2023 following a funding round that allowed employees to sell their existing shares. It’s unclear whether the company is currently considering an initial public offering, but if it needs tons of capital for the very costly process of developing increasingly powerful AI models, that might be a necessary step in the not-so-distant future.
OpenAI’s little new model
OpenAI is going mini. On July 18, the company behind ChatGPT announced GPT-4o mini, its latest model. It’s meant to be a cheaper, faster, and less energy-intensive version of the technology. The smaller model is marketed to developers who rely on OpenAI’s language models and want to save money.
The move also comes as AI companies are trying to cut their own costs, reduce their energy dependence, and answer calls from critics and regulators to lower their energy burden. Training and running AI often requires access to electricity-guzzling data centers, which in turn require copious amounts of water to keep them from overheating.
Moving forward, look for AI companies to offer a multitude of options to cost-conscious and energy-conscious users.
To see where data centers have cropped up in North America, check out our latest Graphic Truth here.
OpenAI blocks access in China
On Tuesday, OpenAI blocked API access to its ChatGPT large language model in China, meaning developers can no longer tap into OpenAI’s tech to build their own tools. While the company didn’t offer a specific reason for the move, an OpenAI spokesperson told Bloomberg last month that it would start cracking down on API users in countries where ChatGPT was not supported. China has long blocked access to the app, but developers were able to use the API as a backdoor to access the toolbox. Not anymore.
Washington has focused heavily on denying Beijing any advantage in the AI space, especially through strict export controls on chips. There’s no government action forcing OpenAI’s hand on either side of the Pacific, but the decision was likely prophylactic.
As much as Chinese companies that relied on API access may be smarting now, the cutoff does open opportunities for domestic firms to try to win over the newly homeless users. We’re watching for companies like SenseTime, Zhipu AI, or Baidu’s Ernie AI to make their pitch as substitutes.
OpenAI announces next model and new safety committee
OpenAI announced that it is training a new generative AI model to eventually replace GPT-4, the industry-standard model that powers ChatGPT and Microsoft Copilot.
But the OpenAI board of directors also said that it’s forming a new Safety and Security Committee to advise it on the risks posed by powerful AI. In November 2023, the previous board abruptly fired CEO Sam Altman for not being candid with them, and OpenAI staffers and lead investor Microsoft pressured the board to rehire him. It worked: Altman rejoined the company, and most of the old board members resigned.
OpenAI has sought to be an industry leader in generative AI while staying in the good graces of regulators looking to rein in its ambitions. OpenAI took the Biden administration’s voluntary pledge to mitigate AI risks in July 2023, and Altman recently joined the Department of Homeland Security’s new Artificial Intelligence Safety and Security Board.
The US has done little to curb the ambitions of its most prominent AI firms, but that good grace is dependent on the appearance of being a reliable and trustworthy actor — one that will propel Silicon Valley ahead of other global tech hubs while building AI that can help humanity, not harm it.
Hard Numbers: SoftBank’s hardy investment, Grok gets cash infusion, Humane’s rescue plan, Kenya’s tech upgrade, News Corp and OpenAI strike a deal
6 billion: Elon Musk’s AI startup, xAI, has raised $6 billion from venture capital investors such as Andreessen Horowitz and Sequoia Capital, plus Saudi Arabia’s Prince Alwaleed bin Talal and Kingdom Holding Company. The new funding round boosts the value of xAI, which makes the AI chatbot Grok, to $24 billion. Musk is a cofounder of OpenAI but severed ties with the firm in 2018 and has since sued the ChatGPT maker, alleging it abandoned its founding principles.
750 million: Humane, the company that recently released an AI-powered pin to scathing reviews, is reportedly looking for a buyer to swoop in. While customers have to cough up $699 for the signature pin, a corporate buyer would need to pay between $750 million and $1 billion — if the company fetches any interest, that is.
1 billion: Microsoft and the UAE-based tech giant G42 are pouring $1 billion into a geothermal-powered data center in Kenya. This East African investment is the first big announcement since Microsoft invested $1.5 billion in G42 in April, a deal brokered by the Biden administration. Microsoft and G42 also pledged to work on local language and skills training initiatives with the Kenyan government and companies in the country.
250 million: OpenAI struck a licensing deal with News Corp, the parent company of The Wall Street Journal, reportedly worth $250 million over five years. News Corp’s stock rose on the announcement, and the deal represents a burgeoning revenue stream for news companies. But the deal isn’t without critics: The Information’s founder Jessica Lessin wrote that publishers like News Corp need to know their worth to content-hungry AI companies and not rush into any deal for “relative pennies.”
Will AI further divide us or help build meaningful connections?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes stock of the ongoing debate over whether artificial intelligence will, like social media, drive loneliness at breakneck speed, or instead help foster meaningful relationships. Owen offers insights into the latter, especially as tech companies like Replika demonstrate AI’s potential to ease loneliness and even connect people with their lost loved ones.
So like a lot of people, I've been immersing myself in this debate about this current AI moment we're in. I've been struck by a recurring theme: whether AI will further divide us or could actually bring us closer together.
Will it cause more loneliness? Or could it help address it? And the truth is, the more I look at this question, the more I see people I respect on both sides of this debate.
Some close observers of social media, like the Filipino journalist Maria Ressa, argue that AI suffers from the very same problems of algorithmic division and polarization that we saw in the era of social media, but on steroids. If social media took our collective attention and used it to keep us hooked in a public debate, she argues, AI will take our most intimate conversations and data and capitalize on our personal needs, our desires, and in some cases, even our loneliness. And I think broadly, I would be predisposed to this side of the argument.
I've spent a lot of time studying the problems of social media and of previous technologies on society. But I've been particularly struck by people who argue the other side of this, that there's something inherently different about AI, that it should be seen as having a different relationship to ourselves and to our humanity. They argue that it's different not in degree from previous technologies, but in kind, that it's something fundamentally different. I initially recoiled from this suggestion because that's often what we hear about new technologies, until I spoke to Eugenia Kuyda.
Eugenia Kuyda is the CEO of a company called Replika, which lets users build AI best friends. But her work in this area began in a much more modest place. She built a chatbot based on a friend of hers named Roman, who had died, and she describes how his close friends and even his family members were overwhelmed with emotion talking to it, and got real value from it, even from this crude, non-AI-driven chatbot.
I've been thinking a lot lately about what it means to lose somebody in your life. And you don't just lose the person or the presence in your life, but you lose so much more. You lose their wisdom, their advice, their lifetime of knowledge of you as a person of themselves. And what if AI could begin, even if superficially at first, to offer some of that wisdom back?
Now, I know that the idea that tech, that more tech, could solve the problems caused by tech is a bit of a difficult proposition to stomach for many. But here's what I think we should be watching for as we bring these new tools into our lives. As we take AI tools online, into our workplaces, our social lives, and our families, how do they make us feel? Are we over-indexing on perceived productivity, or the sales pitches of productivity, and undervaluing human connection, whether the human connection we're losing by using these tools or perhaps the human connections we're gaining? And do these tools ultimately further divide us or provide means for greater and more meaningful relationships in our lives? I think these are really important questions as we barrel into this increasingly dynamic role of AI in our lives.
Last thing I want to mention here, I have a new podcast with the Globe and Mail newspaper called Machines Like Us, where I'll be discussing these issues and many more, such as the ones we've been discussing on this video series.
Thanks so much for watching. I'm Taylor Owen, and this is GZERO AI.