Alleged AI crime rocks Maryland high school
Dazhon Darien, a former athletic director at Pikesville High School in Baltimore County, Maryland, was arrested on April 25 and charged with a litany of crimes related to using AI to frame the school's principal. Darien allegedly created a fake AI voice of Principal Eric Eiswert, used it to generate racist and antisemitic statements, and posted the audio on social media in January. Eiswert was temporarily removed from the school after the audio emerged.
The police allege that Darien used the school’s internet to search for AI tools and sent emails about the recording. The audio was then sent to and posted by a popular Baltimore-area Instagram account on Jan. 17. It’s unclear which tool was used to make the recording, but digital forensics experts said it was clearly fake.
At least 10 states have some form of deepfake laws, though some are focused on pornography. Still, AI-specific charges are rare in the US. Darien was charged with disrupting school activities, theft, retaliation against a witness, and stalking.
Deepfake audio has become a major problem in global elections, but this story demonstrates it can also easily weaponize person-to-person disputes.
OpenAI is risk-testing Voice Engine, but the risks are clear
About a year ago, I was part of a small meeting where I was asked to read a paragraph of text that seemed fairly random to me. But before I knew it, I heard my own voice, very convincingly, saying things through the speakers of the conference room that I had never said and would never say.
It was a real goosebump moment, because I realized that generative AI for voice was already very convincing. That was a prototype of Voice Engine, the new OpenAI product that, as the New York Times reports, the company is choosing to release only to a limited set of users while it continues testing for risky uses.
I don't think this testing with a limited set of users is needed to understand the risks. We've already heard of fraudulent robocalls impersonating President Biden. We've heard of criminals trying to deceive parents with voice messages that sound like their children, claiming to be in trouble and asking the parent to send money, which then, of course, benefits the criminal group, not their children.
So the risks of voice impersonation are clear. Of course, companies will also point to opportunities, such as helping people who have lost their voice through illness or disability, which I think is an important use to explore. But we cannot be naive about the risks. In response to the political robocalls, the Federal Communications Commission at least drew a line and said that AI cannot be used for these calls. So there are some restrictions. But all in all, we need more independent assessment of these new technologies and a level playing field for all companies, not just those that choose to pace the release of their new models but also those that want to race ahead. Because sooner or later, one company or another will, and we will all be confronted with widely accessible, voice-generating artificial intelligence.
So it is a tricky moment, when the race to bring these technologies to market and their rapid development also incur a lot of risk and harm as an ongoing dynamic in the AI space. I hope that as discussions about regulation and guardrails take place around the world, the full spectrum of use cases, both those we know and those we can anticipate, will be on the table, with the aim of keeping people free from crime and keeping our democracy safe, while making sure that people with disabilities and others in minority communities who stand to gain can benefit from this technology as well.