Opinion: Social media warped my perception of reality
Over the past week, the algorithms that shape my social media feeds have been serving up tons of content about the Major League Baseball playoffs. That's because the algorithms know I am a fan of the Mets, who, you should know, have been on a surreal playoff run for the past two weeks.
A lot of that content is the usual: sportswriter opinion pieces or interviews with players talking about how their teams are “a great group of guys just trying to go out there and win one game at a time,” or team accounts rallying their fan bases with slick highlight videos or “drip reports” on the players’ fashion choices.
But there’s been a lot of uglier stuff too: Padres and Dodgers fan pages threatening each other after some on-field tension between the two teams and their opposing fanbases last week. Or a Mets fan page declaring “war” on Phillies fans who had been filmed chanting “f*ck the Mets” on their way out of their home stadium after a win. Or a clip of a Philly fan’s podcast in which he mocked Mets fans for failing to make Phillies fans feel "fear" at the Mets' ballpark.
As someone who writes about political polarization for a living, I saw all this stuff and thought: aha, further evidence that polarization is fueling deep anger and violence in American life, now bleeding into sports, making players more aggressive and fans more violent.
But in fact, there isn’t much evidence for this. Baseball games and crowds are actually safer now than in the past.
I had fallen for distorted social media reflections of the real world. It's what some experts call the "funhouse mirror" aspect of the internet.
One of those experts is Claire Robertson, a postgraduate research fellow in political psychology at NYU and the University of Toronto, who studies how the online world warps our understanding of the offline world.
Since Robertson recently published a new paper on precisely this subject, I called her up to ask why it’s so easy for social media to trick us into believing that things are worse than they actually are.
Part of the problem, she says, is that “the things that get the most attention on social media tend to be the most extreme ones.” And that’s because of a nasty feedback loop between two things: first, an incentive structure for social media where profits depend on attention and engagement; and second, our natural inclination as human beings to pay the most attention to the most sensational, provocative, or alarming content.
“We’ve evolved to pay attention to things that are threatening,” says Robertson. “So it makes more sense for us to pay attention to a snake in the grass than to a squirrel.”
And as it happens, a huge amount of those snakes are released into social media by a very small number of people. “A lot of people use social media,” says Robertson, “but far fewer actually post – and the most ideologically extreme people are the most likely to post.”
People with moderate opinions, which is to say most people, tend to fare poorly on social media, says Robertson. One study of Reddit showed that 33% of all content was generated by just 3% of accounts, the ones spewing hate. Another found that 80% of fake news on Facebook came from just 0.1% of all accounts.
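To make that skew concrete, here is a minimal, hypothetical sketch in Python. It is not taken from either study; the account data is invented purely to show how a handful of prolific accounts can end up producing a third of everything in a feed.

```python
def content_share(posts_per_account: dict[str, int], top_fraction: float) -> float:
    """Share of all posts produced by the most prolific `top_fraction` of accounts."""
    counts = sorted(posts_per_account.values(), reverse=True)
    n_top = max(1, round(len(counts) * top_fraction))
    return sum(counts[:n_top]) / sum(counts)

# Invented example: 97 accounts that post occasionally, 3 that post constantly.
accounts = {f"user{i}": 2 for i in range(97)}
accounts.update({f"poweruser{i}": 30 for i in range(3)})

print(f"Top 3% of accounts produce {content_share(accounts, 0.03):.0%} of all content")
# -> roughly a third of the posts, echoing the Reddit figure above.
```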
“But the interesting thing,” she says, “is, what’s happening to the 99.9% of people that aren’t sharing fake news? What's happening to the good actors? How does the structure of the internet, quite frankly, screw them over?”
In fact, we screw ourselves over, and we can’t help it. Blame our brains. For the sake of efficiency, our gray matter is wired to take some shortcuts when we seek to form views about groups of people in the world. And social media is where a lot of us go to form those opinions.
When we get there, we are bombarded, endlessly, with the most extreme versions of people and groups – “Socialist Democrats” or “Fascist Republicans” or “Pro-Hamas Arabs” or “Genocidal Jews” or “immigrant criminals” or “racist cops.” As a result, we start to see all members of these groups as hopelessly extreme, bad, and threatening in the real world too.
Small wonder that Democrats’ and Republicans’ opinions of each other in the abstract have, over the past two decades, gotten so much worse. We don’t see each other as ideological opponents with different views but, increasingly, as existential threats to each other and our society.
Of course, it only makes matters worse when people in the actual real world are committed to spreading known lies – say, that elections are stolen or that legal immigrants who are going hungry are actually illegal immigrants who are eating cats.
But what’s the fix for all of this? Regulators in many countries are turning to tighter rules on content moderation. But Robertson says that’s not effective. For one thing, it raises “knotty” philosophical questions about what should be moderated and by whom. But beyond that, it’s not practical.
“It's a hydra,” she says. “If you moderate content on Twitter, people who want to see extreme content are going to go to 4chan. If you moderate the content on 4chan, they're going to go somewhere else.”
Rather than trying to kill the supply of toxic crap on social media directly, Robertson wants to reduce the demand for it, by getting the rest of us to think more critically about what we see online. Part of that means stopping to compare what we see online with what we know about the actual human beings in our lives – family, friends, neighbors, colleagues, classmates.
Do all “Republicans” really believe the loony theory that Hurricane Milton is a man-made weather event? Or is that just the opinion of one particularly fringe Republican? Do all people calling for an end to the suffering in Gaza really “support Hamas,” or is that the view of a small fringe with outsized exposure on social media?
“When you see something that’s really extreme and you start to think everybody must think that, really think: ‘Does my mom believe that? Do my friends believe that? Do my classmates believe that?’ It will help you realize that what you are seeing online is not actually a true reflection of reality.”
Brazil vs. Musk: Now in low Earth orbit
The battle between Brazil and Elon Musk has now reached the stars — or the Starlink, at least — as the billionaire’s satellite internet provider refuses orders from Brazil’s telecom regulator to cut access to X.
The background: Brazil’s Supreme Court last week ordered all internet providers in Latin America’s largest economy to cut access to X amid a broader clash with the company over an order to suspend accounts that the court says spread hate speech and disinformation.
That order came after X racked up some $3 million in related fines, which Brazil has now tried to collect by freezing the local assets of Starlink, a separate company from X.
Starlink says it won’t comply with the order to block X until those assets are unfrozen and has offered Brazilians free internet service while the dispute continues.
Brazil is one of X’s largest markets, with about 40 million monthly users. But both sides have dug in as this becomes a high-profile battle over free speech vs. national sovereignty.
What’s next? It’s hard for the Brazilian government to stop Starlink signals from reaching users, but it could shutter about two dozen ground stations in the country that are part of the company’s network …
Opinion: Pavel Durov, Mark Zuckerberg, and a child in a dungeon
Perhaps you have heard of the city of Omelas. It is a seaside paradise. Everyone there lives in bliss. There are churches but no priests. Sex and beer are readily available but consumed only in moderation. There are carnivals and horse races. Beautiful children play flutes in the streets.
But Omelas, the creation of science fiction writer Ursula Le Guin, has an open secret: There is a dungeon in one of the houses, and inside it is a starving, abused child who lives in its own excrement. Everyone in Omelas knows about the child, who will never be freed from captivity. The unusual, utopian happiness of Omelas, we learn, depends entirely on the misery of this child.
That’s not the end of the tale of Omelas, which I’ll return to later. But the point of the story is to make us think about the prices we’re willing to pay for the kinds of worlds we want. And that’s why it’s a story that, this week at least, has a lot to do with the internet and free speech.
On Saturday, French police arrested Pavel Durov, the Russian-born CEO of Telegram, at an airport near Paris.
Telegram is a Wild West sort of messaging platform, known for lax moderation, shady characters, and an openness to dissidents from authoritarian societies. It’s where close to one billion people can go to chat with family in Belarus, hang out with Hamas, buy weapons, plot Vladimir Putin’s downfall, or watch videos of Chechen warlord Ramzan Kadyrov shooting machine guns at various rocks and trees.
After holding Durov for three days, a French court charged him on Wednesday with a six-count rap sheet and released him on $6 million bail. French authorities say Durov refused to cooperate with investigations of groups that were using Telegram to violate European laws: money laundering, trafficking, and child sexual abuse offenses. Specifically, they say, Telegram refused to honor legally obtained warrants.
A chorus of free speech advocates has rushed to his defense. Chief among them is Elon Musk, who responded to Durov’s arrest by suggesting that, within a decade, Europeans will be executed for merely liking the wrong memes. Musk himself is in Brussels’ crosshairs over whether X moderates content in line with (potentially subjective) hate speech laws.
Somewhat less convincingly, the Kremlin – the seat of power in a country where critics of the government often wind up in jail, in exile, or in a pine box – raised the alarm about Durov’s arrest, citing it as an assault on freedom of speech.
I have no way of knowing whether the charges against Durov have merit. That will be for the French courts to decide. And it is doubtless true that Telegram provides a real free speech space in some truly rotten authoritarian societies. (I won’t believe the rumors of Durov’s collusion with the Kremlin until they are backed by something more than the accident of his birthplace.)
But based on what we do know so far, the free speech defense of Durov comes from a real-world kind of Omelas.
Even the most ferocious free speech advocates understand that there are reasonable limitations. Musk himself has said X will take down any content that is “illegal.”
Maybe some laws are faulty or stupid. Perhaps hate speech restrictions really are too subjective in Europe. But if you live in a world where the value of free speech on a platform like Telegram is so high that it should be functionally immune from laws that govern, say, child abuse, then you are picking a certain kind of Omelas that, as it happens, looks very similar to Le Guin’s. A child may pay the price for the utopia that you want.
But at the same time, there’s another Omelas to consider.
On Tuesday, Mark Zuckerberg sent a letter to Congress in which he admitted that during the pandemic, he had bowed to pressure from the Biden administration to suppress certain voices who dissented from the official COVID messaging.
Zuck said he regretted doing so – the sense being that the banned content wasn’t, in hindsight, really worth banning – and that his company would speak out “more forcefully” against government pressure next time.
Just to reiterate what he says happened: The head of the world’s most powerful government got the head of the world’s most powerful social media company to suppress certain voices that, in hindsight, shouldn’t have been suppressed. You do not have to be part of the Free Speech Absolutist Club™ to be alarmed by that.
It’s fair to say, look, we didn’t know then what we later learned about a whole range of pandemic policies on masking, lockdowns, school closures, vaccine efficacy, and so on. And there were plenty of absolutely psychotic and dangerous ideas floating around, to be sure.
What’s more, there are plenty of real problems with social media, hate, and violence – the velocity of bad or destructive information is immense, and the profit incentives behind echo-chambering turn the marketplace of ideas into something more like a food court of unchecked grievances.
But in a world where the only way we know how to find the best answers is to inquire and critique, governments calling audibles on what social media sites can and can’t post is a road to a dark place. It’s another kind of Omelas – a utopia of officially sanitized “truths,” where a person with a different idea about what’s happening may find themselves locked away.
At the end of Le Guin’s story, by the way, something curious happens. A small number of people make a dangerous choice. Rather than live in a society where utopia is built on a singular misery, they simply leave.
Unfortunately, we don’t have this option. We are stuck here.
So what’s the right balance between speech and security that won’t leave anyone in a dungeon?
Should social media apps be labeled dangerous for kids?
US Surgeon General Vivek Murthy is demanding that Congress require a safety label on social media apps, like the ones on cigarettes and alcohol, citing research that teens who use the apps for three hours a day face double the risk of depression.
Murthy has a history of advocating for mental health: He issued a similar advisory last year categorizing loneliness as a health crisis comparable to smoking up to 15 cigarettes a day.
So far, Congress hasn’t done much to curb children’s social media usage, apart from chastising a few tech CEOs and targeting TikTok as a national security threat. Murthy’s emergency declaration on Monday was a call for concrete action.
“A surgeon general’s warning label,” Murthy argued in a recent op-ed in the New York Times, “would regularly remind parents and adolescents that social media has not been proved safe.”
Would it work? Labels on tobacco did lead to a steady decline in adolescent cigarette smoking over the past several decades (that is, until vapes came along … but that’s another story). Murthy acknowledged, however, that a warning label alone wouldn’t fix the fact that the average teen spends nearly five hours a day scrolling, and he also suggested that schools, family dinners, and kids in middle school or below stay phone-free.
What do you think? Should social media apps be labeled as dangerous for children? Let us know here.
Are bots trying to undermine Donald Trump?
In an exclusive investigation into online disinformation surrounding Donald Trump’s hush-money trial, GZERO asks whether bots are being employed to shape debates about the former president’s guilt or innocence. With the help of Cyabra, a firm that specializes in tracking bots, we looked for disinformation in the online reactions to the trial. Is Trump’s trial the target of a massive online propaganda campaign, and, if so, which side is to blame?
_____________
Adult film actress Stormy Daniels testified on Tuesday against former President Donald Trump, detailing her sexual encounter with Trump in 2006 and her $130,000 hush money payment from Trump's ex-attorney Michael Cohen before the 2016 election. In the process, she shared explicit details and said she had not wanted to have sex with Trump. This led the defense team to call for a mistrial. Their claim? That the embarrassing aspects were “extraordinarily prejudicial.”
Judge Juan Merchan denied the motion – but also agreed that some of the details from Daniels were “better left unsaid.”
The trouble is, plenty is being said, inside the courtroom and in the court of public opinion – aka social media. With so many people learning about the most important trials of the century online, GZERO partnered with Cyabra to investigate how bots are influencing the dialogue surrounding the Trump trials. For a man once accused of winning the White House on the back of Russian meddling, the results may surprise you.
Bots – surprise, surprise – are indeed rampant amid the posts about Trump’s trials online. Cyabra’s AI algorithm analyzed 7,500 posts with hashtags and phrases related to the trials and found that 17% of Trump-related tweets came from fake accounts. The team estimated that these inauthentic tweets reached a whopping 49.1 million people across social media platforms.
Ever gotten into an argument on X? Your opponent might not have been real. Cyabra found that the bots frequently comment and interact with real accounts.
The bots also frequently comment on tweets from Trump's allies in large numbers, leading X’s algorithm to amplify those tweets. Cyabra's analysis revealed that, on average, bots are behind 15% of online conversations about Trump. However, in certain instances, particularly concerning specific posts, bot activity surged to over 32%.
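For illustration only, a per-post calculation like the one described above might look like the sketch below. Cyabra has not published its code, so the function, threshold, and sample data here are assumptions, and the bot flags are presumed to come from an upstream detection model.

```python
def bot_share(comments: list[dict]) -> float:
    """Fraction of comments on a post that come from accounts flagged as inauthentic."""
    if not comments:
        return 0.0
    return sum(c["is_bot"] for c in comments) / len(comments)

# Invented sample data: each comment carries a flag from an upstream bot detector.
posts = {
    "ally_post_1": [
        {"user": "acct1", "is_bot": True},
        {"user": "acct2", "is_bot": False},
        {"user": "acct3", "is_bot": True},
        {"user": "acct4", "is_bot": False},
        {"user": "acct5", "is_bot": False},
    ],
}

SURGE_THRESHOLD = 0.32  # flag posts where bot activity spikes well above the ~15% average

for post_id, comments in posts.items():
    share = bot_share(comments)
    note = " <- unusually high bot activity" if share > SURGE_THRESHOLD else ""
    print(f"{post_id}: {share:.0%} of comments from inauthentic accounts{note}")
```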
But what narrative do they want to spread? Well, it depends on who’s behind the bot. If you lean left, you might assume most of the bots were orchestrated by MAGA hat owners – if you lean right, you’ll be happy to learn that’s not the case.
Rather than a bot army fighting in defense of Trump, Cyabra found that 73% of the posts were negative about the former president, offering quotes like “I don’t think Trump knows how to tell the truth” and “not true to his wife, not true to the church, not true to the country, just a despicable traitor.”
Meanwhile, only 4% were positive. On the positive posts, Cyabra saw a pattern of bots framing the legal proceedings as biased and painting Trump as a political martyr. The tweets often came in the form of comments on Trump’s allies’ posts in support of the former president. For example, in a tweet from Marjorie Taylor Greene calling the trials “outrageous” and “election interference,” 32% of the comments were made by inauthentic profiles.
Many of the tweets and profiles analyzed were also indistinguishable from posts made by real people – a problem many experts fear is only going to worsen. As machine learning and artificial intelligence advance, so too will the fake accounts and attempts to shape political narratives.
Moreover, while most of the bots came from the United States, they were by no means all from there. The locations of the rest do not exactly read like a list of the usual suspects, with only three in China and zero in Russia (see map below).
[Map of bot account locations. Source: Cyabra]
This is just one set of data based on one trial, so there are limitations to drawing broader conclusions. But we do know, of course, that conservatives have long been accused of jumping on the bot-propaganda train to boost their political fortunes. In fact, Cyabra noted last year that pro-Trump bots were even trying to sow division amongst Republicans and hurt Trump opponents like Nikki Haley.
Still, Cyabra’s research, both then and now, shows that supporters of both the left and the right are involved in the bot game – and that, in this case, much of the bot-generated content was negative about Trump, which contradicts assumptions that his supporters largely operate bots. It’s also a stark reminder to ensure you’re dealing with humans in your next online debate.
In the meantime, check out Cyabra’s findings in full by clicking the button below.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and definitely of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we've seen in our social media feeds has been an evolution. It began with information shared by our friends. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or Instagram or TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention. And AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they had their AI model do the work instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on the page of one of these AI influencers, like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.
Now platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content, but for this to work, they need a way of signaling it effectively and reliably to us as users. And they just haven't done that. But here's the thing: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
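As a rough sketch of what signaling provenance “effectively and reliably” could mean in practice (this is not TikTok's or Meta's actual API; the Post type, flag, and label text are assumptions), a feed could carry a provenance flag with every item and render a visible label whenever that flag is set or missing:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    author: str
    body: str
    ai_generated: Optional[bool] = None  # None means provenance unknown

def disclosure_label(post: Post) -> str:
    """Return the label a feed might attach to a post based on its provenance flag."""
    if post.ai_generated is True:
        return "[AI-generated content]"
    if post.ai_generated is None:
        return "[Origin unverified]"
    return ""  # verified human-made content needs no label

# Invented example feed.
feed = [
    Post("ai_influencer", "New drop this Friday!", ai_generated=True),
    Post("a_real_friend", "Dinner photos from last night", ai_generated=False),
    Post("unknown_account", "You won't believe this clip", ai_generated=None),
]

for post in feed:
    print(f"{post.author}: {post.body} {disclosure_label(post)}".rstrip())
```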
Can the government dictate what’s on Facebook?
The Supreme Court heard arguments on Monday from groups representing major social media platforms, which argue that new laws in Florida and Texas restricting their ability to deplatform users are unconstitutional. It’s a big test for how free speech is interpreted when it comes to private technology companies that have immense reach as platforms for information and debate.
Supporters of the states’ laws originally framed them as measures meant to stop the platforms from unfairly singling out conservatives for censorship – for example, when X (then Twitter) booted President Donald Trump over his tweets during the January 6 Capitol riot.
What do the states’ laws say?
The Florida law prevents social media platforms from banning any candidates for public office, while the Texas one bans removing any content because of a user’s viewpoint. As the 5th Circuit Court of Appeals put it, Florida “prohibits all censorship of some speakers,” while Texas “prohibits some censorship of all speakers.”
Social media platforms say the First Amendment protects them either way, and that they aren’t required to transmit everyone’s messages the way a telephone company, which is treated as a public utility, is. Supporters of the laws say the platforms are essentially a town square now, and the government has an interest in keeping discourse totally open – in other words, that the platforms are more like a phone company than a newspaper.
What does the court think?
The justices seemed broadly skeptical of the Florida and Texas laws during oral arguments. As Chief Justice John Roberts pointed out, the First Amendment doesn’t empower the state to force private companies to platform every viewpoint.
The justices look likely to send the case back down to a lower court for further litigation, which would keep the status quo for now, but if they choose to rule, we could be waiting until June.
TikTok videos go silent amid deafening calls for safety guardrails
It's time for TikTokers to enter their miming era. Countless videos suddenly went silent as music from top stars like Drake and Taylor Swift disappeared from the popular app on Thursday. The culprit? Universal Music Group – the world’s largest record company – could not secure a new licensing deal with the powerful information-sharing video platform.
In an open letter, UMG blamed TikTok for “trying to build a music-based business, without paying fair value for the music.” UMG claimed TikTok “responded first with indifference, and then with intimidation” after being pressed not only on artist royalties, but also on restrictions around AI-generated content and on protections for user safety.
It’s been a rough week for TikTok CEO Shou Zi Chew. He joined CEOs from Meta, X, and Discord for a grilling on Capitol Hill this week over the dangers of abuse and exploitation that children face on their platforms. Sen. Lindsey Graham went so far as to say these companies have “blood on their hands.” The hearing followed last year’s public health advisory from the Surgeon General, which argued that social media presents “a risk of harm” to youth mental health and called for “urgent action” from these companies.
The big takeaway: It appears social media companies are quite agile when under pressure and can change the user experience for billions of people at the drop of a hat, especially when profit margins are involved. Imagine what these companies could do if they put that energy into the health of their users instead.