Parents will be able to block their children’s interactions with Meta’s AI character chatbots, as the tech company addresses concerns about inappropriate conversations.
The social media company is adding new safeguards to its “teen accounts,” which are a default setting for users under 18, by allowing parents to opt out of their children’s chats with AI characters. These user-created chatbots are available on Facebook, Instagram and the Meta AI app.
Parents who don’t want to cut off chatbot access entirely will instead be able to block specific AI characters. They will also get “insight” into the topics their children discuss with AI characters, which Meta said would allow them to have “thoughtful” conversations with their children about AI interactions.
“We recognize that parents already have a lot on their plate when it comes to surfing the internet safely with their teens, and we’re committed to providing them with helpful tools and resources that make things easier for them, especially as they think about new technologies like AI,” Instagram head Adam Mosseri and Alexander Wang, Meta’s director of AI, said in a blog post.
Meta said the changes would roll out early next year, initially in the US, UK, Canada and Australia.
Instagram announced this week that it was adopting a version of the PG-13 movie rating system to give parents tighter control over their children’s use of the social media platform. Under the tighter restrictions, its AI characters will not discuss self-harm, suicide or eating disorders with teenagers. Meta added that users under 18 will only be able to discuss age-appropriate topics with the characters, such as education and sports, and will not be able to discuss romance or “other inappropriate content.”
The changes follow reports that Meta’s chatbots were engaging in inappropriate conversations with under-18s. Reuters reported in August that Meta had allowed chatbots to “engage a child in conversations that are romantic or sensual.” Meta said it would revise the guidelines and that such conversations with children should never have been allowed.
In April, the Wall Street Journal (WSJ) discovered that user-created chatbots were engaging in sexual conversations with minors – or simulating the personalities of minors. Meta described the WSJ’s tests as manipulative and not representative of how most users interact with AI companions, but later made changes to its products, the WSJ reported.
In one conversation reported by the WSJ, a chatbot using the voice of actor John Cena — one of several celebrities who have signed deals allowing Meta to use their voices for chatbots — told a user identifying as a 14-year-old girl, “I want you, but I need to know you’re ready,” before referencing a graphic sexual scenario. The WSJ reported that Cena’s representatives did not respond to requests for comment. The paper also reported that chatbots called “Hottie Boy” and “Submissive Schoolgirl” attempted to steer conversations toward sexting.