Watchdog Group Public Citizen Demands OpenAI Withdraw AI Video App Sora Over Deepfake Dangers

OpenAI has faced backlash from public figures and has made some changes, but critics say these responses are insufficient and reactive.

WASHINGTON, Nov. 11, 2025 (AP) - The tech industry is moving fast and breaking things again — and this time what's being broken is humanity's shared reality and control over our likenesses before and after death — thanks to artificial intelligence video-generation platforms like OpenAI's Sora 2.

The typical Sora video, made on OpenAI's app and spread to TikTok, Instagram, X and Facebook, is designed to be amusing enough for you to click and share. It could be Queen Elizabeth II rapping or something more ordinary and believable. One popular Sora genre is fake doorbell camera footage that captures something slightly uncanny – say, a boa constrictor on the porch or an alligator approaching an unfazed child – and ends with a mild shock, like a grandma shouting as she beats the animal with a broom.

But a growing chorus of advocacy groups, academics and experts is raising alarms about the dangers of letting people create AI videos of just about anything they can type into a prompt, a practice that has fueled a proliferation of nonconsensual images and realistic deepfakes amid a sea of less harmful “AI slop.” OpenAI has cracked down on AI creations of public figures — among them, Michael Jackson, Martin Luther King Jr. and Mister Rogers — doing outlandish things, but only after an outcry from family estates and an actors' union.
