Congress keeps it old school
Last June, the House of Representatives banned staff use of ChatGPT — the free version at least. Now, it’s telling staffers that use of Microsoft’s Copilot, a tool built on the same large language model as ChatGPT, is also prohibited.
“The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services,” House Chief Administrative Officer Catherine Szpindor wrote in guidance distributed to congressional offices. In response, Microsoft said it’s working on a government-specific version of the product with greater data security, set to release later this year.
The Departments of Energy, Veterans Affairs, and Agriculture have also taken steps to ban generative AI tools in recent months. So has the Social Security Administration. Governments need assurance that allowing such systems into their workplaces, where they interact with sensitive or even classified data, won’t leak that information to a broader commercial or consumer user base.
Troubling images plague Microsoft’s Copilot
Mere weeks after Google suspended its Gemini text-to-image generator for producing offensive images, Microsoft is facing similar turmoil over one of its products.
According to CNBC, which replicated the results, an engineer at Microsoft was able to use its Copilot tool to generate “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use.”
It also generated disturbing images that doubled as potential copyright violations, such as Disney-branded handguns, beer cans, and vape pens. More troubling still: The tool created images of Elsa from “Frozen” both amid the wreckage in the Gaza Strip and wearing an Israel Defense Forces uniform.
The Microsoft employee, Shane Jones, recently notified the Federal Trade Commission of what he saw while working as a red-teamer tasked with testing the technology, which is powered by Microsoft’s partner OpenAI, the maker of ChatGPT and DALL-E.
In response, Microsoft has begun blocking some of the terms that generated offensive imagery, including “pro-choice,” “pro-life,” and “four twenty.” Microsoft told CNBC the changes were the result of “continuously monitoring, making adjustments, and putting additional controls in place.”
This reflects an ongoing cycle: The worst abuses of generative AI will only come through people testing and finding out just what horrors it can produce, which will lead to stricter usage policies – and new limits to push. Of course, when copyright violations are involved, the cycle can very quickly get disrupted by lawsuits from IP holders desperate to protect their brands.