Is the existential threat of AI overblown?
A recent study from the University of Bath in England and the Technical University of Darmstadt in Germany found that large language models, the type of artificial intelligence behind today's chatbots, don't actually pose an existential risk to humans.
The researchers found that models like ChatGPT cannot learn independently of humans or acquire new skills on their own, meaning they cannot eventually grow smart enough to kill us all, to put it bluntly.
“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” the study’s co-author, Harish Tayyar Madabushi, said.
Fears of an AI takeover have circulated for decades among science fiction writers, but also among those building artificial intelligence models. Recently, they've formed the basis of a popular strain of Silicon Valley thinking espoused by luminaries including OpenAI's Sam Altman and Ilya Sutskever. Existential risk has also become a focus of regulators, including those at last year's Bletchley Park Summit in the United Kingdom, hosted by then-Prime Minister Rishi Sunak.
There are still plenty of real risks from AI, including its use for perpetuating bias, spreading disinformation, and stealing artists’ intellectual property. But maybe now we can all take a breath and relax about the end of humanity — that’s what our AI overlords would want anyway, right?