So You Want to Prevent a Dystopia?

When it comes to artificial intelligence there's good news and bad news. On the plus side, AI could save millions of lives a year by putting robots behind the wheels of cars or helping scientists discover new medicines. On the other hand, it could put you under surveillance, because a computer thinks your recent behavior patterns suggest you might be about to commit a crime.

So, how to reap the benefits and avoid the dystopia? It's a question of how AI systems are built, what companies and governments do with them, and how they handle basic problems of privacy, fairness, and accountability. Here's a quick rundown of how different countries (or groups of countries) are approaching the challenge of putting ethical guardrails around AI.


The European Union is trying to do the same thing in AI that it's already done on digital privacy: putting citizens' rights first – but without scaring off the tech companies that can also deliver AI's benefits. A new set of ethical guidelines published this week gives AI engineers checklists they can use to make sure they're on the right track on issues like privacy and data quality, though it stopped short of blacklisting certain applications. Toothy regulation this is not, but just getting these ethical questions mapped out on official EU letterhead is a start. Although the guidelines are voluntary, one of the architects of the bloc's data privacy policies has argued that legal heft will eventually be required to keep AI safe for people and to uphold democracy.

The US, meanwhile, is taking its usual hands-off approach. The Trump administration has asked bureaucrats to develop better technical standards for "trustworthy" AI, but its directive doesn't directly broach the subject of ethics. In the private sector, though, there's been progress: the IEEE, an international standards organization, recently dropped a 300-page bomb of "Ethically Aligned Design" thinking, which lists eight general principles that designers should follow, including respect for human rights, giving people control over their data, and guarding against potential abuse. Still, it's a thorny challenge. Google's AI ethics board was recently scuttled after employees objected to a conservative board member's views on transgender rights and immigration.

Then there's China, where bureaucrats are wrestling with ethical issues like data privacy and transparency in AI algorithms, too. Like the EU, China wants to get out front on global regulation – partly because it thinks its internet companies will grow faster if it can set standards for AI, and partly because Beijing doesn't want a rerun of the situation from 30 years ago, when other countries set the rules of the road for the internet first. But while China may share European views on policing bias in algorithms, there is likely to be a sharper difference on issues like privacy, "moral" or "ethical" definitions in the AI world, and how ethics norms should be enforced.

The bottom line: Defining and enforcing acceptable boundaries for AI is a long-term challenge, but the guardrails that governments and industry put in place early on may determine whether we're heading for a new era of human progress or a mash-up of Blade Runner and Minority Report.