How tech companies aim to make AI more ethical and responsible

Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models — a kind of AI such as Google’s Bard or OpenAI’s ChatGPT — in particular run the risk of providing potentially dangerous information.

Should someone ask, say, for instructions to build a bomb, or for advice on harming themselves, it would be better for the AI not to answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling, or explain why they can’t answer.

And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.

Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
