OpenAI says it is concerned that a realistic voice feature for its AI could lead people to bond with the bot at the expense of human interactions.
The San Francisco-based company cited literature indicating that conversing with AI as one might with a human can lead to misplaced trust, and that GPT-4o's high voice quality may exacerbate that effect.
“Anthropomorphization involves attributing human-like behaviors and characteristics to non-human entities, such as AI models,” OpenAI said Thursday in a report on the safety work it is doing on a ChatGPT-4o version of its AI.
“This risk may be increased by GPT-4o’s audio capabilities, which facilitate human-like interactions with the model.”
OpenAI said it observed testers speaking to the AI in ways that hinted at shared bonds, such as lamenting aloud that it was their last day together.
The company said these cases appear benign but need to be studied to see how they might play out over longer periods of time.
Socializing with AI could also make users less adept at, or less inclined toward, relationships with other humans, OpenAI speculates.
“Extended interaction with the model may affect social norms,” the report said.
“For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anomalous in human interactions.”
AI’s ability to remember details during conversation and to handle tasks for the user could also make people overly dependent on the technology, according to OpenAI.
“Recent concerns shared by OpenAI about potential reliance on ChatGPT’s voice functionality point to what many have already begun to ask: Is it time to stop and think about how this technology affects human interaction and relationships?” said Alon Yamin, co-founder and CEO of AI plagiarism detection platform Copyleaks.
He said AI should never replace real human interaction.
OpenAI said it will further test how voice capabilities in its AI can make people emotionally attached.
Teams testing ChatGPT-4o’s voice capabilities also managed to get it to repeat false information and generate conspiracy theories, raising concerns that the model could be made to spread misinformation convincingly.
OpenAI was forced to apologize to actress Scarlett Johansson in June for using a voice strikingly similar to hers in its latest chatbot, putting the spotlight on voice-cloning technology.
Although OpenAI denied that the voice it used was Johansson’s, its case wasn’t helped by CEO Sam Altman flagging the new model’s debut with a one-word message on social media: “Her.”
Johansson voiced an AI character in the film “Her,” which Altman has previously said is his favorite film about technology.
The 2013 film stars Joaquin Phoenix as a man who falls in love with an artificial intelligence assistant named Samantha.
Source: AFP