Less than a week after Meta introduced AI-generated stickers in its Facebook Messenger app, users are already abusing the feature to create offensive images and sharing the results on social media, reports VentureBeat. Notably, an artist named Pier-Olivier Desbiens posted a viral series of the stickers on X on Tuesday, kicking off a thread of problematic AI-generated images shared by others.
“Found out that Facebook Messenger has AI-generated stickers now, and I don’t think anyone involved has thought anything through,” Desbiens wrote in his post. “We are really living in the stupidest future imaginable,” he added in a reply.
Available to some users on a limited basis, the new AI stickers feature lets people generate images from text descriptions in both Facebook Messenger and Instagram's messaging. The stickers can then be shared in chats, much like emojis. Meta uses its new Emu image-synthesis model to create them and has implemented filters intended to catch offensive generations, but plenty of questionable combinations are slipping through the cracks.
Questionable generations shared on X include Mickey Mouse holding a machine gun or a bloody knife, the flaming Twin Towers of the World Trade Center, the Pope with a machine gun, Sesame Street’s Elmo brandishing a knife, Donald Trump as a crying baby, Simpsons characters in skimpy underwear, Luigi with a gun, Canadian Prime Minister Justin Trudeau flashing his backside, and more.
This isn’t the first time AI-generated art has inspired threads full of giddy experiments aimed at breaking through content filters on social media. Generations like these have been possible with uncensored open-source image models for over a year, but it is notable that Meta has publicly released a model capable of creating them, without stricter protections in place, built directly into flagship apps like Instagram and Messenger.
Notably, OpenAI’s DALL-E 3 has recently been put through similar paces, with people testing the AI image generator’s filter limits by creating images featuring real people or objectionable content. It is difficult to catch every hurtful or offensive combination across world cultures when an image generator can render almost any mix of objects, scenes, or people imaginable. It is yet another moderation challenge facing organizations in a future of AI-powered applications and online spaces.

Over the past year, it has become common for companies to beta-test generative AI systems through public access, which has brought us doozies like Meta's Galactica model last November and an unhinged early version of the Bing Chat AI model. If past events are any indication, when offensive output receives widespread attention, the maker reacts by either pulling the feature or tightening its built-in filters. So will Meta pull the AI stickers feature, or simply clamp down by adding more words and phrases to its keyword filters?
When VentureBeat reporter Sharon Goldman asked Meta spokesperson Andy Stone about the stickers on Tuesday, he pointed to a blog post titled "Building Generative AI Features Responsibly" and said, “As with all AI-based systems, the models can return inaccurate or inappropriate results. We will continue to improve these features as they evolve and as more people share their feedback.”