Digital Governance
How tech companies aim to make AI more ethical and responsible

Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models, a kind of AI such as Google’s Bard or OpenAI’s ChatGPT, in particular run the risk of providing potentially dangerous information.
Should someone ask for, say, instructions to build a bomb, or advice on harming themselves, it would be better if the AI did not answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling, or explain why the AI can’t answer.
And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call