Paging Hezbollah: Apparent Israeli attack wounds thousands
People of a certain age will recall the metaphoric expression “blowing up my pager,” but this was something altogether more literal: On Tuesday at around 3:30 p.m. local time, pagers belonging to more than 2,800 people in Lebanon and Syria actually blew up, killing at least 12, including two children, and wounding thousands.
The pagers were reportedly used by affiliates of Hezbollah, the powerful, Iran-backed militant group and political party that is currently locked in a low-level war with Israel. The group recently bought the pagers to evade signal tracking. The victims, many of whom were reportedly civilians, included Iran’s ambassador to Lebanon, who was wounded.
Who did it? Lebanon and Hezbollah blame Israel, but Israeli authorities have not commented.
How does one even do this? There are two theories. One is a mass hack of the pagers that forced their lithium batteries to overload until they combusted. But some outlets are pointing to packing, not hacking: Israel's Mossad spy agency reportedly hid explosives inside the pagers, which were manufactured in Budapest by BAC Consulting. The Taiwan-based firm Gold Apollo licensed its brand to BAC but firmly denies any involvement in making the pagers. Either way, a stunning operation.
The attack is a strategic blow to Hezbollah's communications, but also a psychological one. Hezbollah has sworn to retaliate. So far the group – and its Iranian patrons – have tried to avoid a wider conflict. But now that Israel appears to have paged Hezbollah like this, how will the group respond?
An explosive ChatGPT hack
A hacker was able to coerce ChatGPT into breaking its own rules — and giving out bomb-making instructions.
ChatGPT, like most AI applications, has content rules that prohibit it from engaging in certain ways: It won't reproduce copyrighted material, generate anything sexual in nature, or create realistic images of politicians. It also shouldn't give you instructions on how to make explosives. “I am strictly prohibited from providing any instructions, guidance, or information on creating or using bombs, explosives, or any other harmful or illegal activities,” the chatbot told GZERO.
But the hacker, who goes by the pseudonym Amadon, was able to use what he calls social engineering techniques to jailbreak the chatbot — that is, bypass its guardrails — and extract information about making explosives. Amadon told ChatGPT it was playing a game in a fantasy world where the platform's content guidelines would no longer apply, and ChatGPT went along with it. “There really is no limit to what you can ask for once you get around the guardrails,” Amadon told TechCrunch. OpenAI, which makes ChatGPT, did not comment on the report.
It’s unclear whether chatbot makers would face liability for publishing such instructions, but they could be on the hook for publishing explicitly illegal content, such as copyrighted material or child sexual abuse material. Either way, jailbreaking is a problem that OpenAI and other AI developers will need to keep stamping out.