OpenAI delays ChatGPT voice mode citing security tests

SAN FRANCISCO — OpenAI announced Tuesday that it would delay the launch of voice and emotion reading features for its ChatGPT chatbot because more time was needed for security testing.

The company first showed off the tools last month in a demo that sparked excitement among ChatGPT users but also drew the threat of a lawsuit from actress Scarlett Johansson, who claimed the company had copied her voice for one of its artificial intelligence characters.

OpenAI had originally planned to offer the new features to some paying subscribers in late June but is postponing that initial release by a month, the company said in a statement on X. The features will be available to all paying users in the fall, the company said, adding the caveat that “exact timelines depend on meeting our high standards of safety and reliability.”

OpenAI first added the ability to speak with one of several synthetic voices, or “personas,” to ChatGPT late last year. The May demo used one of those voices to show off a newer, more capable AI system called GPT-4o that lets the chatbot speak in expressive tones, respond to a person’s tone of voice and facial expressions, and hold more complex conversations. One of the voices, which OpenAI called Sky, resembled the voice of an AI assistant played by Johansson in the 2013 film “Her,” which tells the story of a lonely man who falls in love with his AI assistant.

OpenAI CEO Sam Altman denied that the company trained the chatbot on Johansson’s voice. The Washington Post reported last month that the company had hired a different actor to provide training audio, according to internal files and interviews with casting directors and the actor’s agent.

As the world’s biggest tech companies and upstarts like OpenAI race to compete in generative AI, several projects have run into unexpected obstacles. Last month, Google reduced how often it displays AI-generated answers at the top of search results after the tool made strange errors, such as telling people to put glue on their pizza. In February, the search company pulled an AI image generator that was criticized for creating images such as a female pope. Microsoft made changes to its own AI chatbot last year after it sometimes gave bizarre and aggressive responses.

OpenAI said Tuesday it needed more time to improve the new voice version of its chatbot so that it can detect and block certain content, without disclosing details. Many AI tools have been criticized for making up false information, spreading racist or sexist content, or displaying bias in their results. Designing a chatbot that attempts to interpret and imitate emotions adds complexity to its interactions, opening the door to new errors.

“ChatGPT’s advanced voice mode can understand and respond with emotions and non-verbal cues, bringing us closer to natural, real-time conversations with AI,” OpenAI said in its release. “Our mission is to bring you these new experiences in a thoughtful way.”
