
OpenAI has taken significant steps to address concerns raised by actors and industry stakeholders regarding the use of deepfake technology in its Sora 2 application.
Background on Sora 2 and Deepfake Technology
Deepfake technology has evolved rapidly over the past few years, enabling the creation of highly realistic synthetic media. These advances have sparked ethical and legal debates, particularly within the entertainment industry, where a performer’s likeness and voice are often integral to their brand and livelihood. OpenAI’s Sora 2, released last month, allows users to generate videos featuring various characters, including real-life personalities. The application faced immediate backlash from actors, studios, and unions over concerns about unauthorized use of their likenesses.
Deepfake technology operates by using artificial intelligence algorithms to analyze and replicate the features and mannerisms of individuals. While this technology has potential applications in various fields, including film and gaming, its misuse raises significant ethical questions. The ability to create convincing simulations of real people without their consent has led to fears of exploitation and misrepresentation, prompting calls for stricter regulations and protections for artists.
Concerns Raised by Industry Stakeholders
Following the launch of Sora 2, numerous actors and industry representatives voiced their apprehensions about the implications of the technology. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), along with individual actors, expressed concerns about the potential for misuse of their likenesses in AI-generated content. The union highlighted the risks of unauthorized reproductions that could undermine the rights and earnings of performers.
One notable figure in this discourse is actor Bryan Cranston, who became a focal point in the conversation after videos featuring his likeness appeared on Sora 2. In one instance, a video depicted him taking a selfie with the late pop icon Michael Jackson, raising questions about consent and the ethical use of AI-generated media.
Joint Statement and OpenAI’s Response
In response to the growing concerns, a joint statement was released involving Bryan Cranston, OpenAI, SAG-AFTRA, and several talent agencies, including the United Talent Agency, the Association of Talent Agents, and the Creative Artists Agency. The statement acknowledged the issues surrounding the deepfake videos and emphasized OpenAI’s commitment to addressing these concerns.
OpenAI expressed regret for the “unintentional generations” of content featuring Cranston and others, indicating that the company has “strengthened guardrails” around its opt-in policy for likeness and voice. This move aims to provide better protections for artists and performers who do not wish to have their likenesses used in AI-generated content.
Specifics of the Policy Changes
While the joint statement highlighted OpenAI’s commitment to improving its policies, it did not provide specific details on how the changes would be implemented. This lack of clarity has left some stakeholders seeking more information about the practical implications of the new measures. OpenAI did, however, reaffirm its dedication to ensuring that all artists, performers, and individuals retain the right to determine how and whether they can be simulated.
The company also stated that it would “expeditiously” review complaints regarding breaches of its policy, signaling a willingness to engage with stakeholders and adapt its technology in response to legitimate concerns.
Reactions from Bryan Cranston and SAG-AFTRA
Following the joint statement, Bryan Cranston thanked OpenAI for its efforts to improve its policies and strengthen protections around the use of likenesses in AI-generated content. The resolution of Cranston’s case illustrates the potential for collaboration between technology companies and the entertainment industry in addressing ethical concerns.
However, SAG-AFTRA President Sean Astin emphasized that while the response from OpenAI is a step in the right direction, it is not sufficient on its own. He called for comprehensive legislation to protect performers from what he described as “massive misappropriation by replication technology.” Astin pointed to the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe Act, or NO FAKES Act, as a necessary measure to safeguard the rights of artists in the age of AI.
The NO FAKES Act
The NO FAKES Act aims to establish clear guidelines and protections for artists regarding the use of their likenesses in AI-generated content. By advocating for this legislation, SAG-AFTRA seeks to ensure that performers have control over how their images and voices are used, preventing unauthorized reproductions that could harm their careers and reputations.
The proposed act reflects a growing recognition of the need for legal frameworks that address the unique challenges posed by emerging technologies. As deepfake technology continues to advance, the entertainment industry must adapt to protect its members and maintain ethical standards.
Implications for the Entertainment Industry
The ongoing dialogue between OpenAI, actors, and industry stakeholders highlights the broader implications of AI-generated content for the entertainment industry. As technology continues to evolve, the potential for misuse and ethical dilemmas will only increase. This situation underscores the importance of establishing clear guidelines and protections for artists, ensuring that their rights are upheld in the face of rapid technological advancements.
Moreover, the collaboration between OpenAI and industry representatives serves as a model for how technology companies can engage with stakeholders to address concerns and develop responsible practices. By prioritizing transparency and accountability, companies can foster trust and collaboration with the creative community, ultimately benefiting both parties.
Future Considerations
As the conversation surrounding deepfake technology and its implications continues, several key considerations emerge for the future of the entertainment industry:
- Legislation and Regulation: The introduction of laws like the NO FAKES Act could set a precedent for how the industry navigates the challenges posed by AI-generated content. Establishing clear legal frameworks will be essential in protecting artists’ rights.
- Technological Safeguards: Companies like OpenAI must continue to develop and implement robust safeguards to prevent unauthorized use of likenesses. This includes enhancing opt-in policies and ensuring that users are aware of the implications of using AI-generated content.
- Industry Collaboration: Ongoing dialogue between technology companies, unions, and individual artists will be crucial in shaping the future of AI in entertainment. By working together, stakeholders can create solutions that benefit all parties involved.
- Public Awareness: Educating the public about the ethical implications of deepfake technology is vital. As consumers become more aware of the potential for misuse, they can make informed choices about the content they engage with.
Conclusion
The recent developments surrounding OpenAI’s Sora 2 application underscore the importance of addressing ethical concerns related to deepfake technology. The joint statement from Bryan Cranston, OpenAI, and SAG-AFTRA reflects a commitment to improving protections for artists and performers, but it also highlights the need for comprehensive legislation to safeguard their rights. As the entertainment industry navigates this evolving landscape, collaboration and transparency will be essential in ensuring that technology serves to enhance, rather than undermine, the creative community.
Last Modified: October 21, 2025 at 2:36 pm