Digital Governance
How tech companies aim to make AI more ethical and responsible


Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models in particular — the kind of AI behind Google’s Bard and OpenAI’s ChatGPT — run the risk of providing potentially dangerous information.
Should someone, say, ask for instructions to build a bomb, or advice on harming themselves, it would be better that AI not answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream, from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that will direct users toward counseling, or explain why they can’t answer.
And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call