Watchdog warns over deepfake threat

A leading US watchdog is urging OpenAI to withdraw its new video tool amid rising fears about deepfake harms.

Public Citizen says the Sora 2 app lets users create videos that look real and spread fast across social platforms.

The group argues the technology risks drowning the public in fabricated scenes that could distort politics and damage trust.

It warns that millions could be misled by clips that appear authentic but are crafted from simple typed prompts.

Campaigners say the app can produce anything from playful celebrity parodies to realistic scenes that target ordinary people.

They note that unsettling fake home-camera clips are gaining traction online and make viewers question everyday events.

Public Citizen claims the tool also fuels non-consensual uses of people’s likenesses, hitting vulnerable groups the hardest.

Its letter to OpenAI and Congress accuses the firm of releasing Sora 2 before vital safety testing was finished.

The group says this pattern mirrors earlier disputes over AI images of well-known cultural figures and global icons.

It fears that political narratives could be shaped by convincing fake videos which, seen first, anchor false impressions in voters' minds.

OpenAI has tightened rules on depicting famous people after pressure from unions, estates, and entertainment groups worldwide.

But critics say the company acts only when powerful voices protest, leaving ordinary users exposed to damaging misuse.

Researchers warn that even banned material can slip through, including troubling fetish content that targets women online.

Public Citizen argues the company should slow development, improve safeguards, and stop releasing products before they are thoroughly tested.