That robot sounds just like you

OpenAI tackled text with ChatGPT and images with DALL-E, then announced Sora, its text-to-video platform. But perhaps the most pernicious technology is what might come next: text-to-voice. Not just audio, but specific voices.

According to the New York Times, a group of OpenAI clients is testing a new tool called Voice Engine, which can mimic a person's voice from just a 15-second recording. From there, it can translate that voice into any language.

The report outlined a series of potential abuses: spreading disinformation, allowing criminals to impersonate people online or over phone calls, or even breaking voice-based authenticators used by banks.

In a blog post on its own site, OpenAI seems all too aware of the potential for misuse. Its usage policies mandate that anyone using Voice Engine obtain consent before impersonating someone else and disclose that the voices are AI-generated, and OpenAI says it’s watermarking all audio so third parties can detect it and trace it back to the original maker.

But the company is also using this opportunity to warn everyone else that this technology is coming, including urging financial institutions to phase out voice-based authentication.

AI voices have already wreaked havoc in American politics. In January, thousands of New Hampshire residents received a robocall from a voice pretending to be President Joe Biden, urging them not to vote in the Democratic primary election. It was generated using simple AI tools and paid for by an ally of Biden's primary challenger Dean Phillips, who has since dropped out of the race.

In response, the Federal Communications Commission clarified that AI-generated robocalls are illegal, and New Hampshire’s legislature passed a law on March 28 that requires disclosures for any political ads using AI.

So, what makes this so much more dangerous than any other AI-generated media? The imitations are convincing. The Voice Engine demonstrations shared with the public so far sound indistinguishable from the human-uttered originals, even in foreign languages. And even the Biden robocall, which its maker admitted cost only $150 to produce with tech from the company ElevenLabs, was a convincing enough imitation to fool listeners.

But the real danger lies in the absence of other indicators that the audio is fake. With every other AI-generated media, there are clues for the discerning viewer or reader. AI text can feel clumsily written, hyper-organized, and chronically unsure of itself, often refusing to give real recommendations. AI images often have a cartoonish or sci-fi sheen, depending on their maker, and are notorious for getting human features wrong: extra teeth, extra fingers, and ears without lobes. AI video, still relatively primitive, is infinitely glitchy.

It’s conceivable that each of these applications for generative AI improves to a point where they’re indistinguishable from the real thing, but for now, AI voices are the only iteration that feels like it could become utterly undetectable without proper safeguards. And even if OpenAI, often the first to market, is responsible, that doesn’t mean all actors will be.

The announcement of Voice Engine, which doesn't have a set release date, feels less like a product launch and more like a warning shot.
