Source: AFP
OpenAI on Friday unveiled a voice-cloning tool it plans to keep strictly under control until security measures are put in place to prevent audio forgeries meant to trick listeners.
A model called “Voice Engine” can essentially copy someone’s speech based on a 15-second audio sample, according to an OpenAI blog post sharing the results of a small-scale test of the tool.
“We recognize that producing speech that resembles human voices carries serious risks, which are especially important in an election year,” the San Francisco-based company said.
“We’re working with US and international partners from across government, media, entertainment, education, civil society and beyond to ensure we’re incorporating their feedback as we build.”
Disinformation researchers fear rampant abuse of AI-powered apps in a pivotal election year thanks to proliferating voice-cloning tools that are cheap, easy to use and hard to detect.
Recognizing these problems, OpenAI said it was “taking a cautious and informed approach to a wider release due to the potential for synthetic voice misuse.”
The cautious unveiling came just months after a political consultant working for a long-shot Democratic rival to President Joe Biden admitted he was behind a robocall impersonating the US leader.
The AI-generated call, commissioned by an operative for Minnesota Rep. Dean Phillips, featured what sounded like Biden's voice urging people not to vote in New Hampshire's January primary.
The incident has alarmed experts who fear a deluge of artificial intelligence-driven deep-fake disinformation in the 2024 White House race, as well as other key elections around the world this year.
OpenAI said partners testing the Voice Engine have agreed to rules, including requiring express and informed consent from any person whose voice is copied using the tool.
It must also be made clear to the public when the voices they hear are generated by artificial intelligence, the company added.
“We have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it is being used,” OpenAI said.