How tech companies aim to make AI more ethical and responsible | Global Stage | GZERO Media

Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models — the kind of AI behind Google’s Bard or OpenAI’s ChatGPT — in particular run the risk of providing potentially dangerous information.

Should someone, say, ask for instructions to build a bomb, or for advice on harming themselves, it would be better for the AI not to answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling, or explain why the system can’t answer.
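The refuse-and-redirect behavior Smith describes can be sketched in code. The toy filter below is purely illustrative — all the function names and keyword lists are invented for this example, and production systems use trained safety classifiers rather than keyword matching — but it shows the basic shape of a guardrail that intercepts a prompt before it ever reaches the model:

```python
# Toy sketch of a prompt guardrail (illustrative only; real systems rely on
# trained classifiers and policy layers, not simple keyword lists).

SELF_HARM_TERMS = {"harm myself", "hurt myself"}
VIOLENCE_TERMS = {"build a bomb", "make a weapon"}

def guardrail(prompt: str):
    """Return a safe response if the prompt is disallowed, else None."""
    text = prompt.lower()
    if any(term in text for term in SELF_HARM_TERMS):
        # Redirect toward help rather than simply refusing.
        return ("I can't help with that, but support is available. "
                "Please consider reaching out to a counseling service.")
    if any(term in text for term in VIOLENCE_TERMS):
        # Refuse and explain why no answer is given.
        return "I can't provide that information because it could cause harm."
    return None  # prompt is allowed; let the model answer normally

def answer(prompt: str) -> str:
    blocked = guardrail(prompt)
    if blocked is not None:
        return blocked
    return "(forward prompt to the language model)"
```

The key design point is that the check runs before generation: disallowed prompts never reach the model, and the canned response either explains the refusal or points the user toward help.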

And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.

Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
