
OpenAI has responded to concerns from actors and industry stakeholders regarding the use of deepfake technology in its Sora 2 application, particularly following the unauthorized use of actor Bryan Cranston’s likeness.
Background on Deepfake Technology and Sora 2
Deepfake technology, which utilizes artificial intelligence to create hyper-realistic videos, has raised significant ethical and legal questions since its inception. The technology allows for the manipulation of video content to the extent that individuals can appear to say or do things they never actually did. This capability has led to concerns about privacy, consent, and the potential for misuse in various contexts, including misinformation and defamation.
OpenAI’s Sora 2, released last month, is an AI-driven platform that generates videos featuring lifelike representations of real individuals. The application launched with an opt-out policy that placed the burden on rights holders to request that their protected characters and material not appear on the platform. This approach faced backlash from actors, studios, and talent agencies, who argued that it did not adequately protect their rights and interests.
Concerns Raised by Actors and Industry Stakeholders
Since the launch of Sora 2, various stakeholders, including actors, studios, agents, and the actors’ union SAG-AFTRA, have voiced their concerns regarding the implications of deepfake technology. The primary issues revolve around the unauthorized use of an individual’s likeness and voice, which can lead to significant reputational damage and financial loss.
In particular, a Sora 2-generated video used Bryan Cranston’s likeness without his consent, depicting him taking a selfie with the late pop icon Michael Jackson. The incident highlighted the potential for misuse of deepfake technology and prompted Cranston and others to call for stronger protections for performers.
Joint Statement from Bryan Cranston and OpenAI
In response to the outcry, a joint statement was issued by Bryan Cranston, OpenAI, SAG-AFTRA, and several talent agencies, including the United Talent Agency, the Association of Talent Agents, and the Creative Artists Agency. The statement acknowledged the concerns raised and emphasized that OpenAI has “strengthened guardrails” around its opt-in policy for likeness and voice.
The statement also conveyed OpenAI’s regret for the “unintentional generations” that occurred, indicating a recognition of the need for more robust safeguards. However, the company did not provide specific details on how it plans to modify the application or address the concerns raised by the industry.
OpenAI’s Commitment to Protecting Artists
OpenAI has reiterated its commitment to ensuring that all artists, performers, and individuals retain the right to determine how their likenesses and voices can be used. The company stated that it would “expeditiously” review complaints regarding breaches of its policy, signaling a proactive approach to addressing potential violations.
This commitment is crucial in an industry where the unauthorized use of an individual’s likeness can lead to significant ethical and legal ramifications. The implications of deepfake technology extend beyond mere representation; they touch on issues of consent, ownership, and the potential for exploitation.
Reactions from Bryan Cranston and SAG-AFTRA
Following the joint statement, Bryan Cranston expressed gratitude to OpenAI for its willingness to improve its policies and guardrails. He noted that while his case had a positive resolution, the broader issue of deepfake technology remains a concern for many performers.
SAG-AFTRA president Sean Astin echoed this sentiment, emphasizing the need for legislative protections against “massive misappropriation by replication technology.” Astin pointed to the proposed Nurture Originals, Foster Art, and Keep Entertainment Safe Act, or NO FAKES Act, as a necessary step toward safeguarding the rights of performers in the age of AI-generated content.
The NO FAKES Act: A Legislative Response
The NO FAKES Act aims to establish legal frameworks that protect artists from unauthorized use of their likenesses and voices in AI-generated content. The proposed legislation seeks to address the gaps in existing copyright laws, which may not adequately cover the complexities introduced by deepfake technology.
Key provisions of the NO FAKES Act include:
- Opt-In Requirements: Artists would need to opt in before their likenesses or voices could be used in AI-generated content.
- Granular Control: The legislation would provide artists with more granular control over how their likenesses are used, allowing for specific permissions and limitations.
- Legal Recourse: The act would establish legal avenues for artists to seek recourse in cases of unauthorized use, ensuring that they can protect their rights and interests.
By implementing such measures, the NO FAKES Act aims to create a safer environment for artists, enabling them to engage with emerging technologies without fear of exploitation or misrepresentation.
Industry Implications and Future Considerations
The developments surrounding OpenAI’s Sora 2 and the concerns raised by actors and industry stakeholders underscore the need for ongoing dialogue about the ethical implications of deepfake technology. As AI-generated content becomes increasingly prevalent, the entertainment industry must grapple with the challenges posed by this technology.
Key considerations for the future include:
- Ethical Standards: Establishing ethical standards for the use of deepfake technology in entertainment and media will be crucial in maintaining trust between creators and audiences.
- Technological Accountability: Companies developing AI-driven applications must be held accountable for the implications of their technologies, ensuring that they prioritize the rights and interests of individuals.
- Public Awareness: Increasing public awareness about the capabilities and limitations of deepfake technology can help mitigate the risks associated with its misuse.
As the industry navigates these challenges, collaboration between technology companies, artists, and lawmakers will be essential in creating a framework that balances innovation with the protection of individual rights.
Conclusion
The recent developments involving OpenAI, Bryan Cranston, and SAG-AFTRA highlight the pressing need for robust protections in the face of advancing deepfake technology. While OpenAI’s commitment to strengthening its policies is a positive step, the broader conversation about ethical standards and legislative measures remains critical. The proposed NO FAKES Act represents a potential pathway toward safeguarding artists’ rights in an increasingly digital landscape, ensuring that they can navigate the complexities of AI-generated content without compromising their integrity or autonomy.