Meta expands teen safety features to Facebook and Messenger

Meta is expanding its “Teen Accounts” safety system to Facebook and Messenger, following its earlier launch on Instagram last year.

Teen Accounts automatically apply stricter privacy and safety settings to users under 18, limiting their exposure to harmful content. Teens aged 13 to 15 will need a parent's permission to livestream or to turn off protections that blur images suspected of containing nudity in direct messages.

Meta claims the system has improved online safety for teens. Since its launch, more than 54 million teens have been moved into Teen Accounts, with 97% of 13- to 15-year-olds keeping the default protections in place.

However, critics argue the company has not provided evidence that Teen Accounts are genuinely effective. Andy Burrows of the Molly Rose Foundation says Meta has not explained which harmful content is being blocked, and that parents still do not know whether harmful content is being recommended to their children.

Matthew Sowemimo of the NSPCC believes that safety settings are not enough. He says dangerous content should be prevented from appearing in the first place, rather than being hidden afterwards.

Social media consultant Drew Benvie called the move a positive step but warned that teens might still find ways around restrictions.

Meta also plans to use artificial intelligence this year to detect users who may be lying about their age.

Teen Accounts are now rolling out in the UK, US, Australia and Canada. Under the UK’s Online Safety Act, social media companies are legally required to protect children from harmful or illegal content.

Experts say tech companies, not parents, must be responsible for ensuring children’s safety online.