AI revolutionaries like OpenAI CEO Sam Altman want government regulation, and they want it now – before things get out of hand.
The challenges are many: AI-generated disinformation, harmful biases baked into AI algorithms, copyright infringement when AI systems use other people’s work as inputs for their own “original” content, and, yes, the “Frankenstein” risk of AI-powered computers or weapons somehow rebelling against their human masters.
But how to regulate AI is a big question. Broadly, there are three main schools of thought on this. Not surprisingly, they correspond to the world’s three largest economic poles — China, the EU, and the US, each of which has its own unique political and economic circumstances.
China, an authoritarian government making an aggressive push to be a global AI leader, has adopted strict regulations meant both to boost the trustworthiness and transparency of Chinese-built AI and to give the government ample leeway to police companies and content.
The EU, which is the world’s largest developed market but has few heavyweight tech firms of its own, is taking a “customer-first” approach that strictly polices privacy and transparency while regulating the industry based on categories of risk. That is, an AI judge in a trial deserves much tighter regulation than a program that simply makes you uncannily good psychedelic oil paintings of capybaras.
The US is lagging. Washington wants to minimize the harms that AI can cause, but without stifling the innovative brio of its industry-leading tech giants. This is all the more important since those tech giants are on the front lines of Washington’s broader battle with China for global tech supremacy.
The bigger picture: This isn’t just about what happens in the US, EU, and China. It’s also a three-way race to develop regulatory models that the rest of the world adopts too. So far, Brussels and Beijing are in the lead. Your move, Washington.