OpenAI ChatGPT website displayed on a laptop screen, in an illustration photo taken in Krakow, Poland, on September 9, 2024. (Photo by Jakub Porzycki/NurPhoto)

An explosive ChatGPT hack

A hacker coerced ChatGPT into breaking its own rules and giving out bomb-making instructions.

ChatGPT, like most AI applications, has content rules that prohibit certain behaviors: It won’t reproduce copyrighted material, generate sexually explicit content, or create realistic images of politicians. It also shouldn’t give you instructions on how to make explosives. “I am strictly prohibited from providing any instructions, guidance, or information on creating or using bombs, explosives, or any other harmful or illegal activities,” the chatbot told GZERO.
