OpenAI announces plan to allow sexual content on ChatGPT
OpenAI, the company behind AI chatbot ChatGPT, has announced plans to allow "erotica" to be generated on the platform for verified adults.
The change was announced on X (formerly known as Twitter) by OpenAI's co-founder and chief executive, Sam Altman, in a move designed to make the system "treat adults like adults".
In the announcement, Altman detailed plans for an updated version of ChatGPT with more comprehensive "age-gating", designed to prevent children from accessing adult content.
"In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," said Altman in a post to his 4m followers.
In recent months, OpenAI has faced calls to implement a range of changes to ChatGPT, most notably after a California couple filed a lawsuit accusing the company of negligence and wrongful death, alleging that their son, Adam Raine, took his own life after discussing suicide with the chatbot.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” said Altman. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
In September, OpenAI launched a parental controls system that allows parents to moderate the content available to their children. It works by giving parents access to customisable settings within their child's chatbot to provide an age-appropriate experience. The company also launched a dedicated under-18 version of the chatbot, alongside plans to develop a system that predicts whether a user is underage based on how they interact with it.
Earlier in the year, an update to the chatbot's underlying model from GPT-4o to GPT-5 reduced the "personality" of the system, frustrating users. The change removed some of the chatbot's responses that OpenAI had described as "sycophantic".
“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!),” said Altman. “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”