WhatsApp fails to block child abuse imagery, group warns

The Internet Watch Foundation (IWF) has issued a stark warning that WhatsApp is failing to prevent the spread of child sexual abuse imagery on its platform. The group is calling on Meta, WhatsApp’s parent company, to take stronger measures to protect children.

The call follows the high-profile case of disgraced BBC broadcaster Huw Edwards, in which illegal imagery was shared over the platform. The IWF says Meta is “choosing not to” implement mechanisms that could prevent such material from spreading.

WhatsApp, however, insists its current safety features are robust. A spokesperson emphasized that users can report abusive material, which is then flagged to the US-based National Center for Missing & Exploited Children, and pointed out that the app’s end-to-end encryption protects user privacy.

Dan Sexton, the IWF’s chief technology officer, criticized Meta’s approach, asking: “What is stopping those images being shared again? Right now, there is nothing.”

The National Crime Agency’s Rick Jones echoed these concerns, saying that while the technology to detect such images exists, many platforms are designed in ways that prevent its use. Companies, he argued, cannot protect their customers if they “simply cannot see illegal behavior.”

Safeguarding Minister Jess Phillips insisted that social media platforms must act, stressing that UK law is clear on the illegality of child sexual abuse imagery and urging companies to implement robust detection measures.

The debate over encryption remains unresolved: some defend it as essential to protecting user privacy, while others call for technology that can scan for harmful content.