Opinion: Social media warped my perception of reality
Over the past week, the algorithms that shape my social media feeds have been serving up tons of content about the Major League Baseball playoffs. That's because the algorithms know I'm a fan of the Mets, who, you should know, have been on a surreal playoff run for the last two weeks.
A lot of that content is the usual: sportswriter opinion pieces or interviews with players talking about how their teams are “a great group of guys just trying to go out there and win one game at a time,” or team accounts rallying their fan bases with slick highlight videos or “drip reports” on the players’ fashion choices.
But there’s been a lot of uglier stuff too: Padres and Dodgers fan pages threatening each other after some on-field tension between the two teams and their opposing fanbases last week. Or a Mets fan page declaring “war” on Phillies fans who had been filmed chanting “f*ck the Mets” on their way out of their home stadium after a win. Or a clip of a Philly fan’s podcast in which he mocked Mets fans for failing to make Phillies fans feel "fear" at the Mets' ballpark.
As a person who writes often about political polarization for a living, my first thought upon seeing all this stuff was: aha, further evidence that polarization is fueling a deep anger and violence in American life, which is now bleeding into sports, making players more aggressive and fans more violent.
But in fact, there isn’t much evidence for this. Baseball games and crowds are actually safer now than in the past.
I had fallen for social media's distorted reflections of the real world. It's what some experts call the "funhouse mirror" aspect of the internet.
One of those experts is Claire Robertson, a postgraduate research fellow in political psychology at NYU and the University of Toronto, who studies how the online world warps our understanding of the offline world.
Since Robertson recently published a new paper on precisely this subject, I called her up to ask why it’s so easy for social media to trick us into believing that things are worse than they actually are.
Part of the problem, she says, is that “the things that get the most attention on social media tend to be the most extreme ones.” And that’s because of a nasty feedback loop between two things: first, an incentive structure for social media where profits depend on attention and engagement; and second, our natural inclination as human beings to pay the most attention to the most sensational, provocative, or alarming content.
“We’ve evolved to pay attention to things that are threatening,” says Robertson. “So it makes more sense for us to pay attention to a snake in the grass than to a squirrel.”
And as it happens, a huge share of those snakes are released into social media by a very small number of people. "A lot of people use social media," says Robertson, "but far fewer actually post – and the most ideologically extreme people are the most likely to post."
People with moderate opinions, which is to say most people, tend to fare poorly on social media, says Robertson. One study of Reddit found that just 3% of accounts, ones that spew hate, generated 33% of all content. Another revealed that 80% of fake news on Facebook came from just 0.1% of all accounts.
“But the interesting thing,” she says, “is, what’s happening to the 99.9% of people that aren’t sharing fake news? What's happening to the good actors? How does the structure of the internet, quite frankly, screw them over?”
In fact, we screw ourselves over, and we can’t help it. Blame our brains. For the sake of efficiency, our gray matter is wired to take some shortcuts when we seek to form views about groups of people in the world. And social media is where a lot of us go to form those opinions.
When we get there, we are bombarded, endlessly, with the most extreme versions of people and groups – “Socialist Democrats” or “Fascist Republicans” or “Pro-Hamas Arabs” or “Genocidal Jews” or “immigrant criminals” or “racist cops.” As a result, we start to see all members of these groups as hopelessly extreme, bad, and threatening in the real world too.
Small wonder that Democrats’ and Republicans’ opinions of each other in the abstract have, over the past two decades, gotten so much worse. We don’t see each other as ideological opponents with different views but, increasingly, as existential threats to each other and our society.
Of course, it only makes matters worse when people in the actual real world are committed to spreading known lies – say, that elections are stolen or that legal immigrants who are going hungry are actually illegal immigrants who are eating cats.
But what’s the fix for all of this? Regulators in many countries are turning to tighter rules on content moderation. But Robertson says that’s not effective. For one thing, it raises “knotty” philosophical questions about what should be moderated and by whom. But beyond that, it’s not practical.
“It's a hydra,” she says. “If you moderate content on Twitter, people who want to see extreme content are going to go to 4chan. If you moderate the content on 4chan, they're going to go somewhere else.”
Rather than trying to kill the supply of toxic crap on social media directly, Robertson wants to reduce the demand for it, by getting the rest of us to think more critically about what we see online. Part of that means stopping to compare what we see online with what we know about the actual human beings in our lives – family, friends, neighbors, colleagues, classmates.
Do all “Republicans” really believe the loony theory that Hurricane Milton is a man-made weather event? Or is that just the opinion of one particularly fringe Republican? Do all people calling for an end to the suffering in Gaza really “support Hamas,” or is that the view of a small fringe with outsized exposure on social media?
“When you see something that’s really extreme and you start to think everybody must think that, really think: ‘Does my mom believe that? Do my friends believe that? Do my classmates believe that?’ It will help you realize that what you are seeing online is not actually a true reflection of reality.”
AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify harmful content or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes, we need to regulate the social media that distributes them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching for countries like Canada who are starting with the harms we already know about.
Instead of broad sweeping legislation for AI, we might want to start with regulating the older technologies, like social media platforms that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
What is a technopolar world?
Who runs the world? In a series of videos about artificial intelligence, Ian Bremmer, founder and president of GZERO Media and Eurasia Group, introduces the concept of a technopolar world: one where technology companies wield unprecedented influence on the global stage, and where sovereignty and influence are determined not by physical territory or military might, but by control over data, servers, and, crucially, algorithms.
We aren’t yet in a fully technopolar world, but we do exist in a digital order where major tech companies hold sway over standards, operations, interactions, security and economics in the virtual realm. And Bremmer says this is just the beginning. He highlights two key advantages that technology companies have: their dominance over the digital space, which profoundly impacts the lives of billions of people every day, as well as their role in providing critical digital infrastructure required to run a modern economy and society.
As artificial intelligence and other transformative technologies advance, and more and more of our daily life shifts online, Bremmer predicts a shift in power dynamics, with tech companies extending their reach beyond the digital sphere into economics, politics, and even national security. This will almost certainly challenge traditional ideas about global power, which may come to be determined as much by competition between nation-states and tech companies as by competition between, say, the US and China. Incorporating tech firms into governance models may be necessary to navigate the complexity of a technopolar world effectively, Bremmer argues. Ultimately, how these companies choose to wield power, and how they interact with governments, will shape the trajectory of our economic, social, and political futures.
See more of GZERO Media's coverage of artificial intelligence and geopolitics.
Why social media is broken & how to fix it
Social media companies play an outsize role in global politics — from the US to Myanmar. And when they fail, their actions can cost lives.
That's why Frances Haugen blew the whistle against her then-employer, Facebook, when she felt the company hadn't done enough to stop an outrage-driven algorithm from spreading misinformation, hate, and even offline violence.
On GZERO World, Haugen tells Ian Bremmer why governments need to rethink how they regulate social media. A good example is the EU, whose new law mandating data transparency could have global ripple effects.
Haugen also explains why those annoying messages about sharing your cookies are actually a good thing, and why she still believes social media companies can change for the better.
Finally, don't miss her take on Elon Musk having second thoughts about Twitter.
What happens in Europe, doesn’t stay in Europe — why EU social media regulation matters to you
The EU just approved the Digital Services Act, which for the first time will mandate that social media companies come clean about what they do with our data.
Okay, but perhaps you don't live there. Why should you care?
First, transparency matters, says Facebook whistleblower Frances Haugen.
Second, she tells Ian Bremmer on GZERO World, the EU is not telling social media firms exactly how to change their ways — but rather saying: "We want a different relationship. We want you to disclose risks. We want you to just actually give access to data."
And third, Haugen believes that if it works in Europe, the DSA will help shape law in other parts of the world too.
Watch the GZERO World episode: Why social media is broken & how to fix it
GOP battle with Big Tech reaches the Supreme Court
Jon Lieber, head of Eurasia Group's coverage of political and policy developments in Washington, discusses Republican states picking fights with social media companies.
Why are all these Republican states picking fights with social media companies?
The Supreme Court this week ruled that a Texas law banning content moderation by social media companies should not go into effect while the lower courts debate its merits, blocking the latest effort by Republican-led states to push back on the power of Big Tech. Florida and Texas are two of the large states that have recently passed laws that would prevent large social media companies from censoring or de-platforming accounts they deem controversial, moderation the companies say is essential for keeping their users safe from abuse and misinformation. The courts did not agree on the constitutionality of this question. One circuit court found that the Florida law probably infringes on the free speech rights of the tech companies.
Yes, companies do have free speech rights under the US Constitution. A different circuit court, however, said that the state of Texas did have the ability to dictate how these firms moderate their platforms. These questions will likely eventually be settled by the Supreme Court, which will be asked to weigh in on the constitutionality of these laws and on whether they conflict with the provision of federal law that protects the platforms from liability for content moderation, known as Section 230. But the issue is also likely to escalate once Republicans take control of the House of Representatives next year. These anti-Big Tech laws are part of a broader conservative pushback against American companies that Republicans think have become too left-leaning and way too involved in the political culture wars, most frequently on the side of liberal causes.
And states are taking the lead because of congressional inertia. Democrats are looking at ways to break up the concentrated power of these companies, but lack a path towards a majority for any of the proposals that they've put forward so far this year. Social media, in particular, is in the spotlight because Twitter and Facebook continue to ban the account of former president Donald Trump. And because right-leaning celebrities keep getting de-platformed for what the platforms consider COVID disinformation and lies about the 2020 election.
But recent trends strongly suggest that when Republicans are in charge, they're likely to push federal legislation that will directly challenge the platform's ability to control what Americans see in their social media feeds, a sign that the tech wars have just begun.
The Graphic Truth: Twitter doesn't rule the social world
Elon Musk aside, does anybody else love Twitter? The platform’s 280-character tweets are an essential tool for governments, institutions, politicians, and journalists — as well as eccentric billionaires, of course — but in the grander scheme, not a lot of regular folks are hooked. We look at the brave — and scary — user numbers of social media, where not many care whether you RT’d or simply liked their thread.
Meta's moves to malign TikTok reveal common dirty lobbying practices
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses dirty lobbying practices by the biggest tech companies.
Meta reportedly hired a GOP firm to malign TikTok. How dangerous is this move to the public?
Well, I think it is important that we know about these kinds of dirty lobbying practices, which apparently looked attractive and acceptable to Meta or Facebook. It seems like a desperate effort to polish the tarnished image of the company; they must have thought that offense is the best defense. But generally, the public, the audience, readers of the news have no way of knowing which stories have been planted, or that stories are planted in media at all. And I think the fact that this is a common practice is revealing and cynical. But the problem is that for many of the biggest tech companies, all kinds of lobbying, sponsoring, and influencing have become accessible in ways that very few can compete with; they just have a lot of money to spend. I was surprised to hear, for example, that WhatsApp's lead, Will Cathcart, claimed this week that his company was not heard by European legislators when it came to the Digital Markets Act, even though a public consultation was held. And Meta, which owns WhatsApp, spent 5.5 million euros on lobbying in Brussels last year. So I'm pretty sure they did have an opportunity to engage.
Now, on a different note: after this week, you won't be hearing from me with Cyber in 60 for a while. I'm taking leave for personal reasons, as well as to focus on writing my book, about which I'm sure you'll hear more later. But there are many other 60 Second videos on other themes that you might appreciate on GZERO Media. And I look forward to reconnecting very soon.