Children in Britain, some as young as primary school age, are encountering violent online content, including material promoting self-harm, according to research released on Friday. The findings underline the pressure on governments worldwide and on tech companies such as Meta (owner of Facebook, Instagram, and WhatsApp), Google (YouTube), Snap Inc. (Snapchat), and ByteDance (TikTok) to implement robust safeguarding measures, particularly for children.
Last October, Britain enacted the Online Safety Act, which imposes stricter obligations on social media platforms, requiring them to prevent children from accessing harmful and age-inappropriate content through measures such as age limits and age verification. However, Ofcom, the regulator responsible for enforcement, cannot impose penalties until it finalizes codes of practice setting out how these measures are to be implemented.
Messaging services, led by WhatsApp, have opposed certain provisions in the law, warning that they could compromise end-to-end encryption. The research, commissioned by Ofcom and conducted between May and November, surveyed 247 children aged 8 to 17. It found that children encountered violent content chiefly through social media, video-sharing platforms, and messaging apps.
In its statement, Ofcom highlighted findings that children often felt they had little control over the content recommended to them and had only a limited understanding of recommender systems, commonly referred to as “the algorithm.” Ofcom’s Online Safety Group Director, Gill Whitehead, urged tech firms to act now to meet their child-protection obligations under the new online safety laws. Ofcom plans to consult on measures to ensure children have a safer, age-appropriate experience online.