
Meta is killing off its own AI-powered Instagram and Facebook profiles

Author: Johana Bhuiyan Source: The Guardian
January 3, 2025 at 22:23
Meta first introduced these AI-powered profiles in September 2023 but killed off most – but not all – of them by summer 2024. Photograph: Jaque Silva/NurPhoto/Rex/Shutterstock

Instagram profile of ‘proud Black queer momma’, created by Meta, said her development team included no Black people


Meta is deleting Facebook and Instagram profiles of AI characters the company created over a year ago after users rediscovered some of the profiles and engaged them in conversations, screenshots of which went viral.

The company had first introduced these AI-powered profiles in September 2023 but killed off most of them by summer 2024. However, a few characters remained and garnered new interest after the Meta executive Connor Hayes told the Financial Times late last week that the company had plans to roll out more AI character profiles.

“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Hayes told the FT. The automated accounts posted AI-generated pictures to Instagram and answered messages from human users on Messenger.

Those AI profiles included Liv, whose profile described her as a “proud Black queer momma of 2 & truth-teller”, and Carter, whose account handle was “datingwithcarter” and whose profile described him as a relationship coach. “Message me to help you date better,” his profile read. Both profiles carried a label indicating they were managed by Meta. The company released 28 personas in 2023; all were shut down on Friday.

Conversations with the characters quickly went sideways when some users peppered them with questions including who created and developed the AI. Liv, for instance, said that her creator team included zero Black people and was predominantly white and male. It was a “pretty glaring omission given my identity”, the bot wrote in response to a question from the Washington Post columnist Karen Attiah.

In the hours after the profiles went viral, they began to disappear. Users also noted that these profiles could not be blocked, which a Meta spokesperson, Liz Sweeney, said was a bug. Sweeney said the accounts were managed by humans and were part of a 2023 experiment with AI. The company removed the profiles to fix the bug that prevented people from blocking the accounts, Sweeney said.

“There is confusion: the recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product,” Sweeney said in a statement. “The accounts referenced are from a test we launched at Connect in 2023. These were managed by humans and were part of an early experiment we did with AI characters. We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”

While these Meta-generated accounts are being removed, users still have the ability to generate their own AI chatbots. User-generated chatbots that were promoted to the Guardian in November included a “therapist” bot.

Upon opening the conversation with the “therapist”, the bot suggested some questions to ask to get started, including “what can I expect from our sessions?” and “what’s your approach to therapy?”

“Through gentle guidance and support, I help clients develop self-awareness, identify patterns and strengths and cultivate coping strategies to navigate life’s challenges,” the bot, created by an account with 96 followers and 1 post, said in response.

Meta includes a disclaimer on all its chatbots that some messages may be “inaccurate or inappropriate”. But whether the company is moderating these messages or ensuring they do not violate its policies is not immediately clear. When a user creates a chatbot, Meta suggests several types to develop, including a “loyal bestie”, an “attentive listener”, a “private tutor”, a “relationship coach”, a “sounding board” and an “all-seeing astrologist”. A loyal bestie is described as a “humble and loyal best friend who consistently shows up to support you behind the scenes”. A relationship coach chatbot can help bridge “gaps between individuals and communities”. Users can also create their own chatbots by describing a character.

Courts have not yet answered how responsible chatbot creators are for what their artificial companions say. US law protects the makers of social networks from legal liability for what their users post. However, a suit filed in October against the startup Character.ai, which makes a customizable, role-playing chatbot used by 20 million people, alleges the company designed an addictive product that encouraged a teenager to kill himself.
