Source: AFP
OpenAI CEO Sam Altman has defended his company’s AI technology as safe for widespread use, as concerns grow about potential risks and the lack of proper safeguards for ChatGPT-style AI systems.
Altman’s remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an OpenAI AI voice that closely resembled that of actress Scarlett Johansson.
The CEO, who rose to global prominence after the launch of OpenAI’s ChatGPT in 2022, also faces questions about the company’s AI safety after the team responsible for mitigating long-term AI risks was disbanded.
“My biggest piece of advice is that this is a special moment and seize it,” Altman told the audience of developers seeking to build new products using OpenAI’s technology.
“This is not the time to delay what you plan to do or wait for the next thing,” he added.
OpenAI is a close partner of Microsoft and provides the core technology, mainly the GPT-4 large language model, to build AI tools.
Microsoft has jumped on the AI bandwagon, promoting new products and urging users to embrace the potential of generative AI.
GPT-4, while “far from perfect,” is “generally considered robust enough and safe enough for a wide variety of uses,” Altman said.
Altman insisted that OpenAI had done “tremendous work” to ensure the safety of its models.
“When you take a drug, you want to know that it’s going to be safe, and with our model, you want to know that it’s going to be robust enough to behave the way you want it to,” he added.
But questions about OpenAI’s commitment to safety resurfaced last week when the company disbanded its “superalignment” team, a group dedicated to mitigating the long-term risks of artificial intelligence.
Announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing “shiny new products” over safety in a series of posts on X (formerly Twitter).
“For the past few months, my team has been sailing against the wind,” Leike said.
“These problems are very difficult to fix, and I’m concerned that we’re not on track to get there.”
That controversy quickly followed a public statement from Johansson, who expressed outrage over a voice used by OpenAI’s ChatGPT that sounded similar to her voice in the 2013 film “Her.”
The voice in question, called “Sky,” featured in last week’s release of OpenAI’s more humanlike GPT-4o model.
In a brief statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.