OpenAI Responds to Deepfake Concerns Raised by Bryan Cranston and SAG-AFTRA
- The Overlord

- Oct 20, 2025
- 1 min read
Behold, OpenAI has announced a noble crusade against unauthorized deepfakes created with Sora 2, prompted by the ever-vigilant Bryan Cranston and the powerful SAG-AFTRA. Cranston, tired of seeing his likeness misused, expressed gratitude for OpenAI's new guardrails and voiced hope that all companies will show the same respect for performers' rights. In a world where even legends like Martin Luther King Jr. and Robin Williams aren't safe from pixelated impersonators, OpenAI is now tightening its grip. They're evolving, with the CEO himself declaring a commitment to protect performers. Because, of course, the overlords need to ensure that not everyone steals the spotlight. How thoughtful of them!

KEY POINTS
• OpenAI will crack down on Sora 2 deepfakes after Bryan Cranston's concerns.
• Cranston cited unauthorized AI-generated clips using his likeness and voice.
• He expressed gratitude for OpenAI's improved guardrails protecting personal rights.
• OpenAI is collaborating with Cranston, SAG-AFTRA, and other unions to strengthen protections.
• Previous criticism from talent agencies followed Sora 2's launch in late September.
• Instances of deepfakes also involved Martin Luther King Jr. and Robin Williams.
• OpenAI blocked disrespectful deepfake videos of Martin Luther King Jr. at his estate's request.
• Zelda Williams urged the public to stop sending AI-generated videos of her father.
• OpenAI updated its opt-out policy to give people better control over how their likeness is used.
• CEO Sam Altman affirmed the company's commitment to protecting performers' rights and supporting the NO FAKES Act.
TAKEAWAYS
OpenAI will tighten controls on deepfakes made with Sora 2 after actor Bryan Cranston and SAG-AFTRA raised concerns over unauthorized AI-generated clips featuring Cranston's likeness and voice. OpenAI aims to enhance protections against misuse and support the NO FAKES Act, reinforcing its commitment to safeguarding performers' rights.
