Meta will roll out a new system on Instagram to notify parents when teenagers repeatedly search for suicide or self-harm content. The alerts will trigger after multiple searches within a short period. The feature will be integrated into Instagram's Teen Account supervision tools. The company says the change strengthens protections for young users online.
Instagram already blocks harmful search terms and redirects teens to external support services. The new system adds direct notifications to parents to give families more oversight. Teen Accounts in the UK, US, Australia, and Canada will start receiving alerts next week. Meta plans to expand the system worldwide over the coming months.
Molly Rose Foundation Expresses Concern
The Molly Rose Foundation has criticized the alert system. Chief executive Andy Burrows says the measure could have unintended consequences. He argues that automatic notifications may create panic rather than provide help.
The foundation was established by the family of Molly Russell, who died by suicide in 2017 at age 14 after viewing self-harm and suicide content online, including on Instagram. Burrows says parents want to know when their child is struggling. However, he warns that abrupt alerts could leave families distressed and unprepared for sensitive conversations.
Meta says it will attach expert resources to each alert. The company says these tools will guide parents through difficult discussions. Ian Russell, who chairs the foundation, questions whether that guidance will be effective. He says a parent receiving such a message at work could feel overwhelmed. Written guidance alone may not prevent panic in the moment.
Experts Call for Stronger Protections
Charities argue that the alert system highlights deeper issues on the platform. Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomes the alerts but says stronger preventive measures are needed. He says young people continue to encounter dangerous content online.
Flynn explains that parents contact his organization daily, worried about their children’s exposure. Families want platforms to prevent harmful content from appearing at all, not just alert them afterward.
Leanda Barrington-Leach, executive director of 5Rights Foundation, urges Meta to redesign its systems with child safety as the default. Burrows also cites research from his foundation showing Instagram continues to recommend harmful content about depression, self-harm, and suicide to vulnerable teens.
He stresses that companies must address systemic risks instead of shifting responsibility to parents. Meta disputes the foundation’s findings published last September, saying the report misrepresents its safety and parental support efforts.
Pressure Mounts on Social Media Platforms
Instagram designed the Teen Account alerts to detect sudden changes in search behavior. Meta says the system builds on existing safety tools. The platform already hides self-harm and suicide content and blocks related searches.
Parents will receive notifications via email, text, WhatsApp, or directly within the app. Meta chooses the method based on the contact details families provide. The company acknowledges the system may occasionally trigger alerts unnecessarily, but says it would rather err on the side of caution when protecting young users.
Sameer Hinduja, co-director of the Cyberbullying Research Center, says the alerts will naturally alarm parents. He stresses that practical guidance must follow each notification, because companies cannot leave families alone with their fear. Hinduja believes Meta understands this responsibility.
Instagram also plans to extend the alerts to interactions with its AI chatbot, noting that many teens increasingly turn to artificial intelligence tools for support. Governments worldwide continue pressuring social media companies to improve child safety.
Australia has banned social media for children under 16. Spain, France, and the UK are considering similar rules. Regulators closely monitor how technology companies interact with young users. Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri recently appeared in a US court defending the company against allegations it targeted underage users.
