OpenAI is risk-testing Voice Engine, but the risks are clear
About a year ago, I was part of a small meeting where I was asked to read a paragraph of text that seemed random to me. But before I knew it, I heard my own voice through the speakers of the conference room, very convincingly saying things that I had never said and would never say.
And it was a real goosebump moment, because I realized that generative AI used for voice was already very convincing. What I heard was a prototype of Voice Engine, the new product that, as the New York Times reports, OpenAI is choosing to release only to a limited set of users while it continues to test the risky uses.
But I don't think testing with a limited set of users is needed to understand the risks. We've already heard of fraudulent robocalls impersonating President Biden. We've heard of criminals trying to deceive parents with voice messages that sound like their children in trouble, asking the parent to send money, which then, of course, benefits the criminal group, not their children.
So the risks of voice impersonation are clear. Of course, companies will also point to the opportunity to help people who may have lost their voice through illness or disability, which I think is important to explore. But we cannot be naive about the risks. In response to the political robocalls, the Federal Communications Commission at least drew a line and said that AI cannot be used for these. So there are some restrictions. But all in all, we need to see more independent assessment of these new technologies, and a level playing field for all companies: not just those who choose to pace the release of their new models, but also those who want to race ahead. Because sooner or later, one company or another will, and we will all potentially be confronted with widely accessible, voice-generating artificial intelligence.
So it is a tricky moment, as the race to bring these technologies to market and their rapid development remain an ongoing dynamic in the AI space, one that also incurs a lot of risk and harm. I hope that in the discussions around regulation and guardrails happening around the world, the full spectrum of use cases, those we know and those we can anticipate, will be on the table, with the aim of keeping people free from crime and our democracies safe, while making sure that people in minority and disabled communities can benefit from this technology as well.