Oh BTW, OpenAI got hacked and didn’t tell us
A hacker breached an OpenAI employee forum in 2023 and gained access to internal secrets, according to a New York Times report published Thursday. The company, which makes ChatGPT, told employees but never disclosed the breach publicly. Employees voiced concerns that OpenAI wasn’t taking enough precautions to safeguard sensitive data: if this hacker, a private individual, could breach its systems, then so could foreign adversaries like China.
Artificial intelligence companies have treasure troves of data — some more sensitive than others. They collect training data (the inputs on which models learn) and user data (how individuals interact with applications), but also have trade secrets that they want to keep away from hackers, rival companies, and foreign governments seeking their own competitive advantage.
The US is trying hard to limit access to this valuable data, and to the chip technology that powers model training, to friendly countries, and it has enacted export controls against China to that end. If lax security at private companies means Beijing can simply pilfer the data it needs, Washington will need to modify its approach.
Your face is all over the internet
On the subway, you see someone out of the corner of your eye. Do you recognize them? A former classmate? A coworker from three jobs ago? Maybe a short-lived fling? That question nags in your head: Who are they?
AI has an answer: You covertly snap a photo when they’re not looking and upload it to a facial recognition service that searches millions of webpages for that same unique face. Ping! That face pops up in the background of a photo at Walt Disney World, and there they are at a protest, and there they are on someone’s old Flickr page. Oh, but one result links to a wedding album. They were in the bridal party. The website is still active. A face. A name. Identity unlocked. You finally figured out who they were; the mystery is solved.
That’s perhaps the most harmless, best-case scenario — and even that’s more than a little bit creepy. But that reality is already here.
Facial recognition services like PimEyes and Clearview AI do just this, using machine learning to sift through enormous troves of faces with startling accuracy. They’re essentially reverse search engines that make your face all that a stranger — or the government — needs to gather your personal information.
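To make that concrete, here is a minimal sketch of how a reverse face search can work under the hood: an offline crawler turns every face it finds into an "embedding" vector, and a query photo is matched against that index by vector similarity. The embedding model is the hard machine-learning part; the vectors below are random stand-ins, and the URLs and function names are illustrative assumptions, not anything PimEyes or Clearview AI has published.

```python
import numpy as np

# Phase 1 (offline): crawl the web, detect faces, and run each face
# through a trained neural network that outputs a fixed-length
# "embedding" vector. Photos of the same person land close together
# in this vector space. Here we fake those embeddings with noise.
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)  # stand-in embedding for one person
person_b = rng.normal(size=128)  # stand-in embedding for a stranger

index = [  # (source_url, face_embedding) pairs built by the crawler
    ("https://example.com/wedding-album", person_a + rng.normal(scale=0.1, size=128)),
    ("https://example.com/press-conference", person_a + rng.normal(scale=0.1, size=128)),
    ("https://example.com/someone-else", person_b),
]

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Phase 2 (online): embed the uploaded photo the same way, then return
# every indexed page whose face vector is close enough to the query.
def search(query_embedding, threshold=0.8):
    return [url for url, emb in index
            if cosine_similarity(query_embedding, emb) >= threshold]

query = person_a + rng.normal(scale=0.1, size=128)  # a new photo of person A
print(search(query))  # matches both person-A pages, not the stranger's
```

The search step itself is ordinary nearest-neighbor matching; what makes these services powerful is the scale of the crawl and the quality of the embedding model.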
I uploaded my face to PimEyes to test it out. The company brags about its creepiness: “For $29.99 a month, PimEyes offers a potentially dangerous superpower from the world of science fiction,” reads a New York Times quote featured prominently on its homepage.
For $300 you get “deep searches” and unlimited access to the software. GZERO ain’t buying it, but a highly motivated individual could pay the full price to find someone, to stalk them, to uncover their identity and whereabouts, and to connect them to a time and place.
Most of the results were pictures I had uploaded: profile pictures for various websites, mainly, as well as photos from my own wedding on our photographer’s website. But there were also a slew of pictures with me in the background of a press conference. In late 2018, I covered CNN reporter Jim Acosta’s court battle to get his White House press pass back. PimEyes surfaced multiple photos of me in the background of Acosta’s interview. The $30 version of PimEyes didn’t shock me, but it was jarring to see my previously unlabeled face from a press conference pop up in less than a minute.
Meanwhile, Clearview AI doesn’t sell directly to the public, instead opting for the lucrative business of selling to law enforcement, government, and public defender offices, according to its website. It’s being used in war right now: Time Magazine wrote that Clearview AI is Ukraine’s “secret weapon” in its conflict with Russia, with Kyiv using the technology to identify Russian soldiers and search for hostages taken across the border.
New York Times reporter Kashmir Hill has written about both companies. She told The Verge last year that she has viewed Clearview AI searches of herself, conducted by the company’s co-founder, and that the results were far more extensive than PimEyes’, surfacing 160 photos of her, “from professional headshots that I knew about to photos I didn’t realize were online.”
In 2011, then-Google executive chairman Eric Schmidt said that facial recognition was the only technology his company had built and then decided to withhold for ethical reasons. “I’m very concerned personally about the union of mobile tracking and face recognition,” he said, noting that dictators could weaponize it against their own people.
There are positive uses: Prosecutors could use facial recognition to destroy an alibi, or police could use it to find a missing person and their kidnapper. Journalists can find out who was on the scene of key events and track down leads, or quickly put names to faces in the field. But it’s easy to see Schmidt’s fears come to life with an expansive surveillance state that’s always watching.
While there are currently no federal facial recognition laws on the books in the US, Illinois, Texas, and Washington have biometric privacy laws that may limit the ways people’s faces can be used online.
Democratic senators asked the Justice Department earlier this year to examine whether police departments are using facial recognition in ways that curtail civil rights. And the Federal Trade Commission even banned Rite Aid from using facial recognition for five years after the company’s systems repeatedly and falsely identified women and people of color as shoplifters.
Xiaomeng Lu, director of Eurasia Group’s geo-technology practice, said there are clear benefits to facial recognition technology, such as face-scanning at airports to verify the identities of passengers. But “misuse of such tools can violate [individual] privacy,” she said, pointing to regulations such as the European Union’s data privacy law, which classifies facial recognition data as sensitive. Ground rules in the US would help address the risks of the technology, Lu added.
The rise of facial recognition technology is quite possibly a step too far in the artificial intelligence boom, one that will make citizens, advocates, and some regulators shudder at its potential for abuse. It also augurs the end of anonymity: stepping out into the physical world could create yet another entry in a vast database that seemingly anyone can access for a small sum.
New AI toys spark privacy concerns for kids
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks at a new phenomenon in the AI industry: interactive toys powered by AI. Their interactivity, however, comes with a host of privacy concerns. And according to Owen, the concerns don’t end there.
So, it's that time of year where I start thinking, admittedly far too late, about my holiday shopping. And because I have a ten-year-old child, this means that I am seeing a lot of ads for new kids’ toys. Kids have had interactive toys for decades. Remember Tickle Me Elmo?
But now these interactive toys are being powered by AI. For example, for $1,500, you can buy your kid a Moxie robot. “My name is Moxie. I am a new robot. What is your name?” Moxie is sort of like a robotic best friend. When your kid talks to it, Moxie records those conversations and then uses technology powered by OpenAI to analyze those interactions and react back.
Embodied, the company that makes Moxie, says that this helps kids regulate their emotions, provides them with companionship, and boosts their self-esteem. All of which sounds great, but toys like this should also give us pause. Let me explain. A toy like this comes with a whole host of privacy concerns. Moxie records video and audio of your child and then analyzes that data to create facial expression and user image data.
Now, they say they don’t store the audio and video recordings, but they do keep the metadata about your child’s facial expressions and how they’re interacting with the toy. Embodied says it’s ultimately parents’ responsibility to ensure that their child isn’t giving out personal data. But I don’t know, that seems unlikely for a toy that’s designed to be your child’s digital best friend.
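For the technically curious, here is a minimal sketch of that “keep the derived metadata, discard the raw recording” pattern. The classify_expression function is a hypothetical stand-in for whatever analysis pipeline the real product runs; this illustrates the general design, not Embodied’s actual code.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical stand-in for the real analysis model: maps a raw
# video frame to a coarse label like "happy" or "frustrated".
def classify_expression(frame: bytes) -> str:
    return "happy"  # placeholder result

@dataclass
class InteractionEvent:
    timestamp: datetime
    expression: str  # derived label only; no pixels, no audio

def process_frame(frame: bytes, log: list[InteractionEvent]) -> None:
    """Derive metadata from a frame, then let the frame itself go."""
    log.append(InteractionEvent(datetime.now(timezone.utc),
                                classify_expression(frame)))
    # `frame` goes out of scope here; only the derived record of what
    # the child felt, and when, survives, and that is what gets kept.

log: list[InteractionEvent] = []
process_frame(b"\x00" * 1024, log)  # fake frame bytes
print(log)
```

The privacy trade-off is that even without any stored video, the surviving log is a longitudinal record of a child’s emotional states.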
These types of privacy concerns, of course, aren't new. Home assistants like Amazon Alexa and other smart appliances also record and mine your data. And big tech companies aren't likely to move away from this kind of practice, as data collection is essential to their market power. It's pretty clear we're extending this collection practice into the lives of our children.
While privacy concerns with toys like these are well-established, there's another issue that I think requires some thought. How will toys like these affect childhood development? There's a chance these toys could become a powerful tool in helping kids learn and grow. Embodied claims that 71% of the kids that use Moxie saw improved social skills. But this also represents a pretty radical new frontier in childhood development.
What happens when kids are being socialized with robots instead of with other kids? It’s often said that AI is going to transform our society, but this may not be a binary event. Sometimes the effects of AI will creep into our lives slowly. Kids’ toys, slowly but surely becoming AI agents, may be one way this happens.
I'm Taylor Owen and thanks for watching.
Europe's challenge to Facebook; Amazon home drones
Watch as Nicholas Thompson, editor-in-chief of WIRED, explains what's going on in technology news:
Would Facebook actually leave Europe? What's the deal?
The deal is that Europe has told Facebook it can no longer transfer data back and forth between the United States and Europe, because it’s not secure from US intelligence agencies. Facebook has said, “If we can’t transfer data back and forth, we can’t operate in Europe.” My instinct: this will get resolved. There’s too much at stake for both sides, and there are all kinds of possible compromises.
An Amazon home drone. Why would I need that and are you concerned about privacy?
Amazon has just announced a new drone with a camera that flies room to room in your apartment or home, looking for disturbances. Why would you need it? If you’re really worried about a burglar, or about a raccoon. Why should you be scared about privacy? Because it will be filming all your stuff and maybe linking it to your Amazon account. My concern about it? Look, it’s cool technology, but I’d much rather get a dog.
Technological Revolution & Surveillance in the COVID-19 Era
Are we in the middle of a technological revolution?
Yes? I feel like a technological revolution should feel more empowering and exciting. It should feel like something good as opposed to something catastrophic. But if you define it as a moment when there's a lot of technological change that will last for years or decades, yes. Think about the way that health, education, working from home are going to change. There are lots of inventions right now because of coronavirus that will stick with us.
With the need for increased surveillance, will microchipping become a thing?
Microchipping is where you put a little microchip inside your body: you can use it to scan yourself in, you can embed data in it, you can use near-field communication. But no, it’s not going to become a thing, because you can do all that with your phone. Put the microchip in your phone. Carry the phone in your pocket or put it in your watch. Putting it in your skin is unnecessary and kind of gross.
Marietje Schaake on Digital Data Rights
Marietje Schaake, former member of EU Parliament and international policy director of the Cyber Policy Center at Stanford University, discusses the regulation and oversight required to ensure that offline rights are protected in cyberspace as well, including the avoidance of microtargeting based on race, gender, or health status. In an interview with Ian Bremmer for GZERO World, she argues that fair competition, non-discrimination, and adherence to human rights laws are uneven and lacking in the online world.
Who is responsible for protecting personal and sensitive data? Who is liable? Do already powerful tech platforms have too much power?
Surveillance vs privacy during the COVID-19 pandemic
In an interview with Ian Bremmer for GZERO World, Marietje Schaake, former member of EU Parliament and international policy director of the Cyber Policy Center at Stanford University, discusses the tradeoff between security and freedom when it comes to data surveillance. In a wide-ranging conversation about data and big tech, taped just days before cities entered lockdown in the United States, Schaake addresses early steps taken in Singapore and China to curb the spread of COVID-19 using tracking tools.
The complete discussion is part of the latest episode of GZERO World which airs on US public television. Check local listings.