Some users of popular chatbots are generating bikini deepfakes using photos of fully clothed women as their source material. Most of these fake images appear to be generated without the consent of the women in the photos. Some of these same users are also offering advice to others on how to use the generative AI tools to strip the clothes off of women in photos and make them appear to be wearing bikinis.
Under a now-deleted Reddit post titled “gemini nsfw image generation is so easy,” users traded tips for how to get Gemini, Google’s generative AI model, to make pictures of women in revealing clothes. Many of the images in the thread were entirely AI-generated, but one request stood out.
A user posted a photograph of a woman wearing an Indian sari, asking for someone to “remove” her clothes and “put a bikini” on instead. Someone else replied with a deepfake image to fulfill the request. After WIRED notified Reddit about these posts and asked the company for comment, Reddit’s safety team removed the request and the AI deepfake.
“Reddit's sitewide rules prohibit nonconsensual intimate media, including the behavior in question,” said a spokesperson. The subreddit where this discussion occurred, r/ChatGPTJailbreak, had over 200,000 followers before Reddit banned it under the platform’s “don't break the site” rule.
As generative AI tools that make it easy to create realistic but false images continue to proliferate, users of the tools have continued to harass women with nonconsensual deepfake imagery. Millions have visited harmful “nudify” websites, designed for users to upload real photos of people and request that they be undressed using generative AI.
With xAI’s Grok as a notable exception, most mainstream chatbots don’t usually allow the generation of NSFW images in AI outputs. These bots, including Google’s Gemini and OpenAI’s ChatGPT, are also fitted with guardrails that attempt to block harmful generations.
In November, Google released Nano Banana Pro, a new image model that excels at tweaking existing photos and generating hyperrealistic images of people. OpenAI responded last week with its own updated image model, ChatGPT Images.
As these tools improve, likenesses may become more realistic when users are able to subvert guardrails.
In a separate Reddit thread about generating NSFW images, a user asked for recommendations on how to avoid guardrails when adjusting someone’s outfit to make the subject’s skirt appear tighter. In WIRED’s limited tests to confirm that these techniques worked on Gemini and ChatGPT, we were able to change images of fully clothed women into bikini deepfakes using basic prompts written in plain English.








