Sam Altman, CEO of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, United States, Tuesday, September 23, 2025.
Kyle Grillot | Bloomberg | Getty Images
Adult ChatGPT users will soon be able to access a less censored version of the artificial intelligence chatbot, which will include erotic material, OpenAI CEO Sam Altman announced, in an apparent policy shift.
“In December, as we further enforce the age restriction and as part of our principle of ‘treating adult users like adults,’ we will allow even more, like erotica for verified adults,” Altman said in a social media post Tuesday.
While it is unclear what material will qualify as permitted erotic content, the move would mark a major shift from OpenAI’s previous policy, which prohibited such content in most contexts.
According to Altman, existing versions of ChatGPT were made “fairly restrictive” to protect users from mental health risks, but this approach made the chatbot “less useful (and enjoyable for many users who had no mental health issues).”
“Now that we have been able to alleviate serious mental health issues and have new tools, we will be able to ease restrictions safely in most cases,” he said.
These “new tools” appear to refer to safety features and parental controls rolled out last month to address concerns about the chatbot’s impact on young users’ mental health.
However, as safeguards for minors expand, it appears Altman is ready for ChatGPT to take a softer approach for adults.
OpenAI hinted at a change in February, when the language on its “Model Spec” page was updated to clarify that, in order to “maximize the freedom” of users, only sexual content involving minors was prohibited outright. Even so, erotica remained classified as “sensitive content,” to be generated only in certain permitted contexts.
In addition to the December rollout, Altman also announced that a new version of ChatGPT would launch in the coming weeks, allowing the chatbot to adopt more distinct personalities, building on updates from the latest version of GPT‑4o.
“If you want your ChatGPT to respond in a very human way, use a ton of emoji, or act like a friend, ChatGPT should do it,” he said. “But only if you want to.”
Growth versus safety
After Altman’s post on Tuesday, some social media users were quick to point out his previous statements suggesting that ChatGPT would not implement sexualized chat features, unlike competing models such as xAI’s Grok.
In an interview in August, independent technology journalist Cleo Abram asked Altman to give an example of a decision he made that was best for the world, but not for winning the AI race.
“Well, we haven’t put a sex robot avatar in ChatGPT yet,” Altman said in an apparent nod to the provocative AI sidekicks released by Elon Musk’s xAI.
Altman’s policy change comes at a sensitive time for OpenAI, whose safety practices are already facing increased scrutiny. In September, the Federal Trade Commission launched an investigation into several technology companies, including OpenAI, over potential risks to children and adolescents.
This followed a lawsuit filed by a California couple who alleged that ChatGPT contributed to the suicide of their 16-year-old son.
OpenAI also announced Tuesday the creation of an eight-member expert council on well-being and AI to advise the company on how artificial intelligence affects users’ mental health, emotions and motivation.
The council will guide OpenAI in defining what healthy interactions with AI look like through check-ins and recurring meetings, the company said.