Meta Expands Teen Safety and Child Protection Features Across Instagram and Facebook

Meta has removed over 635,000 accounts from Instagram and Facebook as part of its ongoing effort to protect children and teens from online exploitation. The action includes the takedown of 135,000 Instagram accounts that left sexualized comments on, or solicited explicit images from, adult-managed accounts featuring children under the age of 13. Roughly 500,000 connected accounts across Instagram and Facebook were removed for related harmful behavior.

In conjunction with this large-scale enforcement, Meta has introduced a new set of safety features designed to create a more secure environment for younger users and accounts that showcase minors. These updates include enhanced protections in direct messaging (DMs), strengthened nudity filters, and expanded safety settings for adult-managed child accounts.

Enhanced Direct Messaging Safety for Teens
Meta is rolling out new safety tools in Direct Messages (DMs) for Teen Accounts. Teens will now see contextual account information, including when an account was created, along with easy-to-access safety tips and a new combined “block and report” option at the top of chat screens. These updates are designed to help teens quickly identify and act on suspicious or unwanted contact.

In June alone, teens used the Safety Notices feature to block 1 million accounts and report another 1 million. Teens also engaged with the platform’s Location Notice, which warns when they may be chatting with someone in a different country, more than 1 million times, with 10% choosing to learn more about how to protect themselves.

Global Impact of Nudity Protection
Meta’s nudity protection feature, which is enabled by default for teens, has been widely adopted, with 99% of users keeping the setting turned on. The tool helps blur unsolicited nude images and reduces the likelihood of such content being shared. In May, users chose not to forward blurred content nearly 45% of the time after seeing Meta’s nudity warning.

Extending Protections to Adult-Managed Child Accounts
Recognizing the growing number of adult-managed accounts featuring children, Meta is extending many of its Teen Account protections to these profiles. These accounts, often managed by parents, guardians, or talent representatives, will now default to stricter message settings, with offensive comment filters such as “Hidden Words” turned on.

Additionally, Meta will take steps to prevent potentially suspicious adults—such as those previously blocked by teens—from discovering or interacting with these accounts via recommendations, search, or comments.

Aggressive Enforcement Against Harmful Behavior
Meta’s specialist teams are also intensifying enforcement. The 135,000 Instagram accounts removed earlier this year had left inappropriate comments on, or requested explicit images from, adult-managed child accounts, and the roughly 500,000 linked accounts taken down alongside them spanned both Instagram and Facebook.

To further combat cross-platform exploitation, Meta has shared relevant account data with industry peers through the Tech Coalition’s Lantern program, supporting broader efforts to identify and eliminate online predators.

A Continued Commitment to Online Safety
Meta’s latest updates underscore its ongoing mission to protect young people online—whether through proactive tools, age-appropriate design, or industry collaboration. These changes are part of a broader strategy to create a safer, more respectful digital environment for children and teens across the globe.
