According to Wired, a commentary note in OpenAI's Model Spec document discusses the prospect of generating AI porn in age-appropriate contexts.
OpenAI, the company behind ChatGPT, has revealed ambitions to broaden the applications of its technology and hinted at a possible relaxation of its strict content standards. According to draft documents made public last week, the corporation is exploring how to “responsibly” allow not-safe-for-work (NSFW) content on its platforms. Wired reports that the proposal appears in a commentary note in the lengthy Model Spec document, opening a complicated conversation about how AI will be used to create sensitive content in the future.
“We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” the note reads. “We look forward to better understanding user and societal expectations of model behavior in this area.”
Current usage policies forbid the creation of sexually explicit, or even suggestive, material. The document does, however, raise a subtle point: the possibility that NSFW content could be permitted in age-appropriate settings. This potential shift is not about recklessly pushing explicit content; rather, it is about understanding user and societal expectations in order to responsibly guide model behavior.
OpenAI is considering how its technology could responsibly generate a range of different content that might be considered NSFW, including slurs and erotica. But the company is particular about how sexually explicit material is described.
In a statement to WIRED, company spokesperson Niko Felix said, “we do not have any intention for our models to generate AI porn.” However, NPR reported that OpenAI’s Joanne Jang, who helped write the Model Spec, conceded that users would ultimately make up their own minds about whether its technology produced adult content: “Depends on your definition of porn.” -Wired
The issues raised by NSFW content extend beyond the obvious. University of Virginia law professor Danielle Keats Citron has highlighted the wider social consequences of invasions of privacy, pointing out that these violations can damage the lives of those targeted, limiting their opportunities and jeopardizing their safety.
Last year, GreatGameIndia reported on an article by The Washington Post, which revealed that thousands of realistic but fake AI-generated child sex images had been found online. This concern is further exacerbated by the decision made by OpenAI.
Naturally, there are already plenty of NSFW AI content creators who make use of tools like Stable Diffusion, and defenders of such material abound, even when it verges on virtual child exploitation.
“Intimate privacy violations, including deepfake sex videos and other nonconsensual synthesized intimate images, are rampant and deeply damaging,” Citron said. “We now have clear empirical support showing that such abuse costs targeted individuals crucial opportunities, including to work, speak, and be physically safe.” According to Citron, OpenAI’s potential embrace of NSFW content is “alarming.”
OpenAI’s announcement highlights the ongoing debate over how to balance technological progress with ethical responsibility, especially when it comes to setting standards for how AI systems may handle sensitive content in the future. OpenAI spokesperson Grace McGuire told the outlet that the Model Spec was an attempt to “bring more transparency about the development process and get a cross-section of perspectives and feedback from the public, policymakers, and other stakeholders.”
Earlier this year, OpenAI’s chief technology officer, Mira Murati, told The Wall Street Journal that she was “not sure” if the company would in future allow depictions of nudity to be made with the company’s video generation tool Sora.
AI-generated pornography has quickly become one of the biggest and most troubling applications of the type of generative AI technology OpenAI has pioneered. So-called deepfake porn—explicit images or videos made with AI tools that depict real people without their consent—has become a common tool of harassment against women and girls. In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students. -Wired
Even though OpenAI’s usage terms prohibit impersonation without authorization, the decisions the company makes could have significant ramifications. OpenAI also surely understands that if it does not participate in this market, it will be left behind, and someone else’s AI will win it instead.