
A proposed New York law could require social media platforms to implement stringent age verification measures to protect minors from potentially harmful content.
Overview of the SAFE For Kids Act
On Monday, New York Attorney General Letitia James unveiled proposed rules for the Stop Addictive Feeds Exploitation (SAFE) For Kids Act. This initiative aims to ensure that social media platforms confirm users are over 18 before granting them access to algorithm-driven feeds or nighttime notifications. The SAFE For Kids Act was signed into law by New York Governor Kathy Hochul last year as part of a broader effort to safeguard the mental health of children in the digital age.
The legislation is part of a growing trend across the United States, where lawmakers are increasingly focused on online child safety. However, many of these initiatives have encountered legal challenges and raised concerns over user privacy. The implications of these proposed rules extend beyond New York, as they could set a precedent for similar legislation nationwide.
Legal Context and Implications
The legal landscape surrounding online age verification is evolving. Recent Supreme Court rulings have allowed for age-gating on adult sites, which may pave the way for similar requirements on social media platforms. This shift could significantly impact how companies approach user verification and content moderation.
In addition to New York, other states are also considering or have already enacted age verification laws. For instance, California is on the verge of passing legislation that would require device manufacturers and app stores to implement age verification measures. Meanwhile, South Dakota and Wyoming have already mandated that platforms enforce age verification if they host sexual content. These developments indicate a growing consensus among lawmakers about the need for stricter controls on online content aimed at minors.
Proposed Rules Under the SAFE For Kids Act
The proposed rules under the SAFE For Kids Act stipulate that social media platforms must restrict unverified users or minors under 18 to chronological feeds or posts from accounts they follow. Additionally, platforms would be required to disable notifications from midnight to 6 AM, a measure aimed at reducing late-night engagement with potentially harmful content. The Attorney General’s office is currently seeking public comments on how to define a nighttime notification, highlighting the complexities involved in implementing these rules.
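To make the nighttime window concrete, here is a minimal Python sketch of a quiet-hours gate, assuming the window is keyed to the user’s local time in New York (the proposed rules do not specify a timezone) and that age status is already known; the function names are illustrative, not drawn from the law:

    from datetime import datetime, time
    from zoneinfo import ZoneInfo

    QUIET_START = time(0, 0)  # midnight, per the proposed rules
    QUIET_END = time(6, 0)    # 6 AM

    def in_quiet_hours(now: datetime) -> bool:
        """Return True if a notification falls in the midnight-to-6 AM window."""
        local = now.astimezone(ZoneInfo("America/New_York"))
        return QUIET_START <= local.time() < QUIET_END

    def should_deliver(user_is_verified_adult: bool, now: datetime) -> bool:
        """Suppress nighttime notifications for unverified users and minors."""
        return user_is_verified_adult or not in_quiet_hours(now)

Because the window begins exactly at midnight it never wraps around the day boundary, so a single pair of comparisons suffices.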
To verify a user’s age, companies can employ various methods, provided they are effective and safeguard user data. Importantly, platforms must offer at least one alternative to uploading a government-issued ID. For example, a face scan that estimates a user’s age could serve as an acceptable verification method. This flexibility aims to balance the need for security with user privacy concerns.
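A sketch of what that flexibility might look like in code follows, with one ID path and one ID-free path; every name here (check_id_document, estimate_age_from_face) is a hypothetical placeholder for a vendor integration, not any platform’s actual API:

    from enum import Enum

    class VerificationMethod(Enum):
        GOVERNMENT_ID = "government_id"
        FACE_AGE_ESTIMATE = "face_age_estimate"  # the required ID-free alternative

    def check_id_document(evidence: bytes) -> bool:
        """Placeholder for a real ID-document check (vendor API, OCR, etc.)."""
        raise NotImplementedError

    def estimate_age_from_face(evidence: bytes) -> int:
        """Placeholder for a real facial age-estimation model."""
        raise NotImplementedError

    def verify_age(method: VerificationMethod, evidence: bytes) -> bool:
        """Route an age check; the rules require at least one non-ID option."""
        if method is VerificationMethod.GOVERNMENT_ID:
            return check_id_document(evidence)
        if method is VerificationMethod.FACE_AGE_ESTIMATE:
            return estimate_age_from_face(evidence) >= 18
        raise ValueError(f"unsupported method: {method}")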
Parental Involvement in Age Verification
Under the proposed rules, minors seeking access to a platform’s “addictive” algorithmic feeds would need parental permission, which would involve a similar verification process. This requirement underscores the role of parents in monitoring their children’s online activities and ensuring they are not exposed to inappropriate content. The proposed rules mandate that platforms delete any identifying information about the user or parent immediately after verification, further emphasizing the importance of data privacy.
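The delete-after-verification requirement might reduce to a pattern like the following sketch, which persists only the pass/fail outcome; this is a toy illustration, and a real system would also need to purge uploaded files, logs, and any copies held by a third-party verifier:

    from typing import Callable

    def verify_and_discard(evidence: bytes, verifier: Callable[[bytes], bool]) -> bool:
        """Run a verifier, keep only the pass/fail result, and drop the evidence."""
        try:
            return verifier(evidence)
        finally:
            # Drop our reference immediately; durable storage, caches, and logs
            # must be scrubbed separately to satisfy the deletion mandate.
            del evidence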
Defining “Addictive” Feeds
The SAFE For Kids Act applies specifically to companies that feature user-generated content and whose users spend at least 20 percent of their time on the platform’s addictive feeds. The 2023 version of the bill defines an “addictive” feed as one that generates content based on user data or device information, a definition that could encompass major platforms such as Instagram, TikTok, and YouTube, which rely heavily on algorithm-driven content to keep users engaged.
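Read literally, that 20 percent threshold reduces to a simple ratio of time spent; a hypothetical check (the function name and minute-based inputs are assumptions for illustration) could look like:

    def covered_by_safe_act(addictive_minutes: float, total_minutes: float) -> bool:
        """Apply the 20 percent engagement threshold described in the bill."""
        if total_minutes <= 0:
            return False
        return addictive_minutes / total_minutes >= 0.20

    # Example: 15 of 60 minutes on the algorithmic feed is 25 percent, over the bar.
    assert covered_by_safe_act(15, 60)
    assert not covered_by_safe_act(5, 60)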
Companies that fail to comply with the law could face substantial penalties, including fines of up to $5,000 per violation, in addition to other possible remedies. These financial stakes could push platforms to prioritize age verification, but they also raise questions about whether such systems can be implemented effectively.
Public Comment Period and Future Steps
The proposed rules are currently in a public comment phase that will last for 60 days. Following this period, the Office of the Attorney General will have one year to finalize the rules. Once finalized, the law will take effect 180 days later. However, the timeline for implementation may be subject to delays due to potential legal challenges.
Critics of the SAFE For Kids Act have already voiced their concerns. NetChoice, a trade association representing major tech companies, has characterized the SAFE Act as an “assault on free speech.” They argue that the legislation could inadvertently restrict access to content for adults, raising First Amendment issues. Similarly, the Electronic Frontier Foundation has expressed concerns that the law may block adults from accessing content they are legally entitled to view.
Stakeholder Reactions
The reactions to the proposed rules have been mixed, reflecting the complexities of balancing child safety with individual rights. Proponents of the SAFE For Kids Act argue that the legislation is a necessary step to protect minors from the risks associated with social media use, including exposure to harmful content and addictive behaviors. They emphasize the importance of creating a safer online environment for children, particularly given the increasing prevalence of mental health issues among young people.
On the other hand, opponents contend that the proposed age verification measures could lead to unintended consequences. Critics argue that the requirement for age verification could create barriers for legitimate users, particularly adults who may be unfairly affected by the restrictions. Additionally, concerns about data privacy and the potential for misuse of personal information have been raised, highlighting the need for careful consideration of how these measures are implemented.
Broader Implications for Social Media Platforms
The SAFE For Kids Act and similar legislation across the country could have far-reaching implications for social media platforms. As companies grapple with the requirements of age verification, they may need to invest significantly in technology and infrastructure to comply with the new regulations. This could lead to increased operational costs, which may ultimately be passed on to users in the form of subscription fees or other charges.
Moreover, the implementation of age verification measures could alter the dynamics of user engagement on these platforms. By restricting access to algorithm-driven feeds for unverified users, companies may inadvertently reduce overall user engagement, which is often driven by personalized content. This could have a cascading effect on advertising revenue, as advertisers typically rely on user engagement metrics to gauge the effectiveness of their campaigns.
Conclusion
The proposed rules under New York’s SAFE For Kids Act represent a significant step toward enhancing online safety for minors. However, the complexities surrounding age verification, user privacy, and free speech raise important questions about the future of social media regulation. As the public comment period unfolds and stakeholders weigh in, the outcome of this initiative could set a precedent for similar laws across the United States.

