Why don’t we want more “accuracy” at the ballpark – or in the courtroom?
It’s baseball season again, and that means it’s time to embrace the chronic self-harm of being a Mets fan (already off to a stellar 1-6 start) and, this year especially, to ponder the ways in which technology risks making some things worse by making other things better.
That’s because this season was originally supposed to be the one where Major League Baseball began introducing robot umpires to call balls and strikes. The idea was to use new technology to make an old game more perfect, less arbitrary, more objective.
But after a few seasons of trials in the minor leagues, the robots’ march to the Majors slowed. It turns out players and managers weren’t as thrilled about putting HAL 9000 behind the plate as MLB thought.
The challenges of defining an objective and consistent strike zone, and the misgivings about removing human judgment altogether, have pushed back the robots’ debut for at least another season – if not more.
To be up front, I think that’s a good thing. Maybe I’m just yelling at clouds here, but to me the subtle arbitrariness of a strike zone – unlike, say, the objective reality of a safe/out call – is an intrinsic part of the short story that is a baseball game.
But all of this got me thinking about another more consequential area where people’s appetite for technological “accuracy” is milder than you’d think: the courtroom.
Of all the institutions in a democracy, judges and courts have perhaps the highest duty to be – and to be seen as – impartial. And yet a growing number of Americans no longer see the bench that way. Overall, fewer than 50% of Americans say they trust the judicial branch of the federal government, the lowest mark on record, and only half of Americans say criminal suspects are treated “fairly” – down from roughly two-thirds at the turn of the century.
Each side has its grievances. Many Republicans believe the Biden administration has co-opted the courts in a banana republic-style bid to sideline Trump with frivolous legal charges. Democrats, meanwhile, see the Supreme Court as hopelessly illegitimate and biased because Trump gave it an overtly conservative majority that, as anticipated, rolled back Roe v. Wade.
Can technology help? In one small corner of the justice system at least, it seems so. For several years now, AI programs have been used to assist judges in specific areas – such as determining bail – where machine learning can use vast amounts of past data to make predictions about future behavior.
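To make the mechanics concrete, here is a minimal sketch of how such a risk-prediction tool could work. Everything in it is hypothetical: the features, the synthetic data, and the cutoff are illustrative assumptions, not a description of any system actually deployed in a courtroom.

```python
# Minimal sketch of a pretrial risk-prediction model.
# Everything here is illustrative: the features, the synthetic data,
# and the cutoff are hypothetical, not any deployed system's design.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical historical records: [age, prior_arrests, failed_appearances]
X = rng.integers(low=[18, 0, 0], high=[70, 10, 5], size=(1000, 3))
# 1 = the defendant later fled or re-offended, 0 = appeared as required
# (labels are synthetic here; a real tool would use court records)
y = (0.3 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 1, 1000) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The model outputs a probability, not a verdict: someone still has
# to choose a release threshold and own the consequences.
flight_risk = model.predict_proba(X_test)[:, 1]
print("share flagged high-risk at a 0.5 cutoff:", (flight_risk > 0.5).mean())
```

The point of the sketch is the division of labor: the model only estimates a probability from past cases, while deciding what level of risk justifies detention remains a human, and political, choice.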
America already jails more people than any country on earth, and fully a quarter of those in prison are merely awaiting trial, often for months at a time. But studies show that AI can improve things.
In one 2017 study, an AI program was 25% more accurate than human judges at predicting whether suspects released on bail would flee or commit new crimes. Using AI would also, the study found, have safely reduced the pretrial prison population by some 40%. Several states have since found that algorithms can help shrink pretrial jail populations without an increase in crime.
That’s all good. But there’s one problem: People in general still don’t seem to want robots in the courtroom. A YouGov study from last year showed that barely 1 in 5 Americans thought a robot would “be a better judge,” while 56% preferred a “human who can use their emotion and instinct.” A broad survey of judges and other court workers found that two thirds were skeptical about using AI in the courtroom, citing concerns about accuracy and emotional intelligence.
There are, of course, problems with AI in criminal justice. One is the risk of bias. After all, if AI is what it eats, then training AI models on decades of policing and court data shaped by systemic racial or socio-economic biases risks teaching the robots to amplify those biases further. The ACLU has highlighted this problem in algorithms used for policing and pretrial detention.
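To see how that happens, consider a toy example with entirely synthetic numbers (no real study behind them): two groups offend at the same underlying rate, but one has historically been policed twice as heavily, so its offenses show up in the arrest records more often. A model trained on those records dutifully learns the enforcement bias.

```python
# Toy illustration of "AI is what it eats": two groups with the SAME
# underlying offense rate, but group 1 is policed twice as heavily,
# so its offenses show up in the arrest records more often.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                  # group 0 or group 1
offended = rng.random(n) < 0.10                # identical true rate: 10%
catch_rate = np.where(group == 1, 0.8, 0.4)    # unequal enforcement
arrested = offended & (rng.random(n) < catch_rate)

# Train on arrests, the biased proxy for actual offending.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
risk = model.predict_proba([[0], [1]])[:, 1]
print(f"predicted risk, group 0: {risk[0]:.3f}")   # roughly 0.04
print(f"predicted risk, group 1: {risk[1]:.3f}")   # roughly 0.08
# The model "learns" that group 1 is about twice as risky, even
# though the true offense rate is identical for both groups.
```

The model isn’t malfunctioning; it is faithfully reproducing the skew in its training data, which is precisely the problem.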
Another issue is transparency. These algorithms are a black box – private sector trade secrets guarded as closely by tech companies as KFC protects its spice recipe or Coke guards its formula. If a computer decides to jail you, you’d probably never know why – was it an AI hallucination that shipped you off to Rikers Island?
But a big issue is more basic: People just aren’t comfortable with a box of ones and zeros making decisions that depend on assessments of our character, our emotions, or our humanity. There’s a kind of alienation in that, even if the accuracy is greater and the societal benefits can be modeled.
The problem is a delicate one. On the one hand, if we’re forgoing real improvements – fewer people in jail and safer streets at the same time – we are needlessly harming large numbers of people over a mistaken belief in the ability of people to judge other people fairly.
But in a deeply polarized and increasingly mistrustful society, introducing technologies that aim to make our institutions more accurate may also, paradoxically, cause people to trust them less.
Let me know what you think about robots at the ballpark or in the courtroom here – if you include your name and city, we may run your response in a future edition of the GZERO Daily newsletter.
How are emerging technologies helping to shape democracy?
How do you know that what you are seeing, hearing, and reading is real?
It’s not an abstract question: Artificial intelligence technology allows anyone with an internet connection and a half-decent laptop to fabricate entirely fictitious video, audio, and text and spread it around the world in the blink of an eye.
The media may be ephemeral, but the threat to governments, journalists, corporations, and you yourself is here to stay. That’s what Julien Pain, journalist and host at Franceinfo, tried to get at during the GZERO Global Stage discussion he moderated live from the 2023 Paris Peace Forum.
In response to a poll showing that 77% of the GZERO audience felt democracies are weakening, Eléonore Caroit, vice president of the French Parliament’s Foreign Affairs Committee, pointed out that the more alarming part is that many people around the globe are frightened enough to trade away democratic liberties for the purported stability of unfree governments, a dynamic authoritarian regimes exploit using AI.
“Democracy is getting weaker, but what does that provoke in you?” she asked. “Do you feel protected in an undemocratic regime? Because that is what worries me, not just that democracy is getting weaker but that fewer people seem to care about it.”
Ian Bremmer, president and founder of the Eurasia Group and GZERO Media, said a lot of that fear stems from an inability to know what to trust or even what is real as fabricated media pervades the internet. The very openness that democratic societies hold as the keystone of their civic structures exacerbates the problem.
“Authoritarian states can tell their citizens what to believe. People know what to believe, the space is made very clear, there are penalties for not believing those things,” Bremmer explained. “In democracies, you increasingly don’t know what to believe. What you believe has become tribalized and makes you insecure.”
Rappler CEO Maria Ressa, who is risking a century-long prison sentence to fight state suppression of the free press in the Philippines, called information chaos in democracies the “core” of the threat.
“Technology has taken over as the gatekeeper to the public sphere,” she said. “They have abdicated responsibility when lies spread six times faster than the truth” on social media platforms.
Microsoft vice chair and president Brad Smith offered a poignant example from Canada, in which a pro-Ukraine activist was targeted by Russia with AI-generated audio of a completely fabricated statement. The attackers spliced it into a real TV broadcast and spread the clip across social media, discrediting years of the activist’s work within minutes.
The good news, Smith said, is that AI can also be used to help fight disinformation campaigns.
“AI is an extraordinarily powerful tool to identify patterns within data,” he said. “For example, after the fire in Lahaina, we detected the Chinese using an influence network of more than a hundred influencers — all saying the same thing at the same time in more than 30 different languages” to spread a conspiracy theory that the US government deliberately started the blaze.
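The pattern Smith describes, many accounts pushing one message at one moment, is the kind of signal even a simple detector can surface. Here is a minimal sketch; the post format, the time window, and the account threshold are all illustrative assumptions rather than how any real detection system works.

```python
# Minimal sketch of coordinated-posting detection: flag cases where
# many distinct accounts post near-identical text in a short window.
# The post format and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"account": "a1", "time": datetime(2023, 8, 9, 12, 0), "text": "The fire was set deliberately"},
    {"account": "a2", "time": datetime(2023, 8, 9, 12, 1), "text": "the fire was set deliberately"},
    {"account": "a3", "time": datetime(2023, 8, 9, 12, 2), "text": "The fire was set deliberately "},
    {"account": "b1", "time": datetime(2023, 8, 9, 15, 0), "text": "Thoughts with everyone affected"},
]

WINDOW = timedelta(minutes=10)  # how tightly clustered in time
MIN_ACCOUNTS = 3                # how many accounts must echo the message

# Bucket posts by normalized text, then check whether enough distinct
# accounts posted it within one time window.
by_text = defaultdict(list)
for p in posts:
    by_text[p["text"].lower().strip()].append(p)

for text, cluster in by_text.items():
    cluster.sort(key=lambda p: p["time"])
    accounts = {p["account"] for p in cluster}
    if len(accounts) >= MIN_ACCOUNTS and cluster[-1]["time"] - cluster[0]["time"] <= WINDOW:
        print(f"possible coordination ({len(accounts)} accounts): {text!r}")
```

A production system would match semantically similar text across languages (for example, with multilingual embeddings) rather than exact strings, but the core signal is the same: one message, many accounts, one moment.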
All the panelists agreed on one crucial next step: aligning all the stakeholders, many of whom have competing interests and little mutual trust, to create basic rules of the road for AI, including how to punish its misuse. That, in turn, would help ordinary people rebuild trust and feel safer.
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.