Artificial intelligence’s immense potential power raises significant questions about its safety. Large language models, the kind of AI behind Google’s Bard and OpenAI’s ChatGPT, in particular run the risk of providing dangerous information.
Should someone ask for instructions to build a bomb, say, or for advice on harming themselves, it would be better if the AI did not answer at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling or explain why the question can’t be answered.
And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call