AI videos expose children on TikTok

Thousands of AI-generated videos depicting sexualised minors are spreading widely on TikTok, a new report warns.

The investigation highlights serious failures in the platform's safety systems, despite strict rules banning harmful content involving children.

Millions reached worldwide

The Spanish fact-checking group Maldita identified more than 20 TikTok accounts sharing AI-generated videos of young girls.

These videos show minors in bikinis or school uniforms, often in suggestive poses designed to attract sexual attention.

Together, the accounts published over 5,200 videos, gaining nearly six million likes and more than 550,000 followers.

Links to criminal networks

The report found comment sections directing viewers to Telegram groups selling child sexual abuse material.

Maldita reported 12 of these Telegram groups to Spanish police for further investigation.

Profit through subscriptions

Some TikTok accounts also made money by selling AI-generated images and videos through TikTok’s subscription service.

TikTok keeps around half of the income from these subscriptions, according to its creator agreements.

Missing AI labels

TikTok requires creators to label AI-generated content, but most of the videos analysed carried no watermark or AI label.

Only some displayed the “TikTok AI Alive” label that TikTok applies to videos made with its own image-to-video tool.

Pressure on governments

The findings come as Australia, Denmark, and the European Union debate stricter rules for young users.

These measures aim to reduce online harm and protect children from exploitation.

Platforms respond

Telegram and TikTok say they are fully committed to fighting child sexual abuse material.

Telegram says it removed more than 909,000 groups and channels linked to child sexual abuse material in 2025.

TikTok says most harmful content is removed automatically and reports having taken down millions of videos and accounts this year.