A new poll on AI raises one of the most critical questions of 2024: Do people want to regulate AI, and if so, who should do it?
For all the wars, elections, and crises dominating the headlines, the most profound long-term transition underway right now is the light-speed development of AI and its voracious new capabilities. Nothing says a new technology has arrived more than when OpenAI CEO Sam Altman claims the need to fabricate more semiconductor chips is so urgent that … he requires $7 trillion.
Seven. Trillion. Dollars. A moment of perspective, please.
$7 trillion is more than three times the entire GDP of Canada and more than twice the GDP of France or the UK. So … it may be pocket change to the Silicon Valley technocrat class, but it’s a pretty big number to the rest of us.
Seven trillion dollars has a way of focusing the mind, even if the arrogance of floating such a number is staggering and, as we covered this week, preposterous. Still, it does give you a real sense of what is happening here: You will either be the AI bulldozer or the AI road. Which is it? So how do people feel about those options?
Conflicted is the answer: GZERO got access to a new survey from our partners at Data Sciences, which asked people in Canada about Big Tech, the government, and the AI boom. Should AI be regulated or not? Will it lead to job losses or gains? What about privacy? The results jibe very closely with similar polls in the US.
In general, the poll found people appreciate the economic and job opportunities that AI and tech are creating … but anxiety and trust break down along generational lines, with younger people more trusting of technology companies than older people. That’s to be expected. I may be bewildered by my mom’s discomfort when I try to explain to her how to dictate a voice message on her phone, but then my kids roll their eyes at my attempts to tell them about issues relating to TikTok or Insta (Insta!, IG, On the ‘Gram, whatevs …). Technology, like music, is by nature generational.
But not all tech companies are equal. Social media companies score much lower when it comes to trust. For example, most Canadians say they trust tech companies like Microsoft, Amazon, or Apple, but fewer than 25% say they trust TikTok, Meta, or Alibaba. Why?
First, it’s about power. Second, people believe there is a lack of transparency, accountability, and competition, so they want someone to do something about it. “A significant majority feel these companies are gaining excessive power (75% agree) and should face stronger government regulations (70% agree),” the DS survey says. “This call for government oversight is universal across the spectrum of AI usage.”
This echoes a Pew Research poll conducted in the US in November 2023, in which 67% of Americans said they fear the government will NOT go far enough in regulating AI tech like ChatGPT.
So, while there is some consensus regarding the need to regulate AI, there is a diminishing number of people who actually trust the government to regulate it. Another Pew survey last September found that trust in government is the lowest it has been in 70 years of polling. “Currently, fewer than two-in-ten Americans say they trust the government in Washington to do what is right ‘just about always’ (1%) or ‘most of the time’ (15%).”
Canada fares slightly better on this score, but still, if you don’t trust the digital cops, how do you keep the AI streets safe?
As we covered in our 2024 Top Risks, Ungoverned AI, there are multiple attempts to regulate AI right now all over the world, from the US to the UN and the EU, but there are two major obstacles to any of this working: speed and smarts. AI technology is moving like a Formula One car, while regulation is moving like a tricycle. And since governments struggle to keep up with the actual engineering innovations, they need to recruit the tech industry itself to help write the regulations. The obvious risk here is regulatory capture, where industry-influenced policies become self-serving. Will new rules protect profits or the public good, or, in the best-case scenario, both? Or will any regulations, no matter who makes them, be so leaky that they are essentially meaningless?
All this is a massive downside risk, but on the upside, it’s also a massive opportunity. If governments can get this right – and help make this powerful new technology more beneficial than harmful, more equitable than elitist, more job-creating than job-killing – they might regain the thing they need most to function productively: public trust.