Digital Governance

How tech companies aim to make AI more ethical and responsible

Artificial intelligence’s immense potential raises significant questions about its safety. Large language models, the kind of AI behind Google’s Bard and OpenAI’s ChatGPT, in particular run the risk of providing dangerous information.

Should someone ask for instructions to build a bomb, say, or for advice on harming themselves, it would be better that the AI not answer at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling or explain why the question can’t be answered.

And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.

Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
