Sora: OpenAI’s New AI Video Generator Faces Deepfake Controversy and New Safety Measures
Summary
- Sora, OpenAI’s new video generator, can turn text or images into hyper-realistic clips, but it faces criticism for enabling deepfakes and misuse of personal likenesses.
- Users have reported losing control of their digital likenesses, raising ethical and authenticity concerns.
- OpenAI has responded by adding new safety tools, including cameo-use restrictions and improved watermark visibility.
OpenAI’s latest innovation, the Sora app, has drawn global attention for both its groundbreaking realism and its ethical implications. Initially launched in the United States and Canada, Sora quickly topped app store charts by allowing anyone to generate lifelike videos from simple text prompts or still images — complete with motion, sound, and cinematic or animated styles.
Despite its success, critics argue that Sora makes it easier than ever to create AI-generated deepfakes, since it allows users to insert other people’s faces and voices into synthetic videos. Some users who volunteered their likenesses as “cameos” later discovered they had little or no control over how their digital representations were being used.
Why Has Sora Sparked Controversy?
The app’s original goal was straightforward: empower anyone to turn ideas into realistic video content. However, problems surfaced almost immediately when approved cameos appeared in politically charged or misleading contexts, contradicting the personal beliefs of those featured.
Although each video created with Sora includes a moving watermark identifying its origin, some users have already found methods to remove it, heightening concerns about authenticity, misinformation, and digital manipulation. The lack of robust transparency and user control has intensified the debate around the ethical use of AI-generated media.
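The visible watermark is only one provenance layer: OpenAI has also said that Sora outputs carry C2PA content-credential metadata embedded in the file itself. As a rough illustration of how a third party might check for that metadata (this is not OpenAI's own tooling, and the file name is hypothetical), the sketch below shells out to the open-source c2patool CLI from the Content Authenticity Initiative:

```python
# Minimal sketch: inspect a video's C2PA provenance manifest via the
# open-source c2patool CLI (github.com/contentauth/c2patool).
# Assumes c2patool is installed and on PATH; "sora_clip.mp4" is illustrative.
import json
import subprocess


def read_provenance_manifest(path: str) -> dict | None:
    """Return the C2PA manifest store embedded in `path`, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],  # prints the manifest store as JSON when present
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # No manifest found, or the file could not be read.
        return None
    return json.loads(result.stdout)


manifest = read_provenance_manifest("sora_clip.mp4")
if manifest is None:
    print("No provenance metadata found; origin cannot be verified this way.")
else:
    print("Embedded provenance claims:", manifest)
```

A stripped watermark would not remove this embedded metadata, but re-encoding or screen-recording a clip typically does, which is why provenance metadata alone is not considered a complete defense.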
What Are the New Safety Measures?
In response to growing criticism, OpenAI announced the rollout of new control features for Sora. According to Bill Peebles, head of the Sora project, users can now apply specific restrictions governing how their cameos are used.
The company also plans to make the system more resilient by introducing detailed permissions, giving users greater say over when and where their likeness appears. In addition, OpenAI has pledged to enhance watermark visibility and make it significantly harder to remove — though the company has not yet disclosed the technical details of these upcoming protections.
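OpenAI has not published what these permissions look like under the hood. Purely as a hypothetical sketch of the kind of per-cameo controls described above, one could imagine a structure like the following; every name in it is invented for illustration and none comes from OpenAI's actual API:

```python
# Hypothetical model of per-cameo restrictions; all names are invented.
from dataclasses import dataclass, field


@dataclass
class CameoPermissions:
    owner: str
    allow_public_use: bool = False                            # may others feature this cameo at all?
    blocked_contexts: set[str] = field(default_factory=set)   # e.g. {"political"}
    allowed_users: set[str] = field(default_factory=set)      # explicit allow-list

    def permits(self, requester: str, context: str) -> bool:
        """Would a video by `requester` in `context` be allowed?"""
        if requester == self.owner:
            return True  # owners always retain use of their own likeness
        if requester not in self.allowed_users and not self.allow_public_use:
            return False
        return context not in self.blocked_contexts


perms = CameoPermissions(owner="alice", blocked_contexts={"political"})
print(perms.permits("bob", "comedy"))      # False: bob is not allow-listed
print(perms.permits("alice", "comedy"))    # True: the owner keeps full control
```

The design question such a system must answer is the one raised by the cameo complaints: restrictions need to be enforced at generation time, before a video exists, rather than cleaned up after the fact.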
Tags: Artificial Intelligence (AI), OpenAI, Sora, Deepfakes, AI Ethics
