Meta Platforms (META) has introduced parental controls for children and teenagers interacting with artificial intelligence (A.I.) chatbots.
The new parental controls go into effect in 2026 and include the ability to turn off one-on-one chats with A.I. characters.
However, parents won't be able to turn off Meta's A.I. assistant, which Meta says, “will remain available to offer helpful information and educational opportunities.”
Parents will also be able to block specific chatbots, and Meta said parents will be able to get “insights” into what their kids are chatting about with various A.I. chatbots and characters.
The changes come as the social media giant faces criticism over harms to children from its social media platforms and A.I. products.
A.I. chatbots are drawing scrutiny over their interactions with children and teens, which lawsuits claim have driven some to suicide.
A recent study from Common Sense Media found that 70% of teenagers are now regularly using A.I. chatbots and digital companions.
Meta Platforms has also announced that teen accounts on Instagram will now be restricted to seeing PG-13 content and won’t be able to change their settings without a parent’s permission.
This means that kids using teen-specific accounts will see photos and videos on Instagram that are similar to what they would see in a PG-13 movie — no sex, drugs, or dangerous stunts.
Meta said the PG-13 restrictions will also apply to its A.I. chats and searches for children and teens.
Meta Platforms’ stock has gained 19% this year and is trading at $712.07 U.S. per share.