How tech companies aim to make AI more ethical and responsible

Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models, the kind of AI behind Google’s Bard or OpenAI’s ChatGPT, in particular run the risk of providing potentially dangerous information.

Should someone ask, say, for instructions to build a bomb, or for advice on harming themselves, it would be better that the AI not answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith, speaking in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling or explain why the question can’t be answered.

And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.

Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
