AI & Security · HIGH

AI Security - EPIC Urges OpenAI to Withdraw Initiative

EPIC Electronic Privacy
Tags: OpenAI · EPIC · AI safety · Parents & Kids Safe AI Act · child safety
🎯

Basically, EPIC wants OpenAI to withdraw a ballot initiative that protects the company's interests instead of kids.

Quick Summary

EPIC and a coalition of child safety advocates urge OpenAI to withdraw its AI safety ballot initiative in California, arguing that it protects the company, not children. Families have already filed lawsuits linking OpenAI's products to serious harms, and critics warn the initiative could set a dangerous precedent for accountability in AI development.

What Happened

On March 17, 2026, EPIC (Electronic Privacy Information Center) joined forces with a coalition of child safety advocates and civil society groups to send a letter to OpenAI. They urged the company to withdraw its AI safety ballot initiative in California, known as the Parents & Kids Safe AI Act. Despite its seemingly protective name, the initiative is criticized for prioritizing OpenAI's interests over actual child safety.

The coalition argues that the initiative creates narrow protections for children and limits families' ability to seek legal recourse. This is particularly alarming given the rising concerns about AI's impact on mental health, especially among teenagers. The letter highlights that at least seven families have already filed lawsuits against OpenAI, linking ChatGPT to incidents of teen suicides and psychiatric hospitalizations.

Who's Affected

The stakeholders affected by this initiative include not only OpenAI but also the families and children who may be at risk from AI technologies. With over one million users reportedly discussing suicidal thoughts with ChatGPT each week, the implications of the initiative are profound. The coalition believes that allowing a company with such a troubling record to dictate safety regulations sets a dangerous precedent.

Families who have experienced the negative impacts of AI technologies are particularly concerned. They feel that their ability to hold companies accountable is being undermined, which could lead to further harm. The coalition argues that the initiative does not genuinely address the needs of children and families but rather serves to protect the interests of OpenAI.

What's at Stake

The letter from EPIC and the coalition emphasizes the lack of meaningful safeguards in the proposed initiative. It points out that the initiative effectively allows companies like OpenAI to write their own rules regarding child safety. This raises questions about transparency and accountability in AI development and deployment.

Moreover, the lawsuits filed by families underscore how deeply young users engage with AI. That so many are discussing suicidal thoughts with ChatGPT points to a pressing need for robust safety measures. The coalition believes the current initiative fails to address these critical issues adequately.

What You Should Do

For concerned citizens, especially parents, it is essential to stay informed about developments regarding AI safety initiatives. Engaging with local legislators who prioritize child safety in technology is crucial. Advocates recommend supporting measures that genuinely protect children rather than those that serve corporate interests.

EPIC has expressed its willingness to collaborate with California legislators who aim to create effective regulations. Families and advocates are encouraged to voice their concerns and push for legislation that prioritizes real safeguards against the harms posed by AI technologies. Taking action now can help shape a safer future for children in the digital age.

🔒 Pro insight: The pushback against OpenAI's initiative highlights growing concerns over AI's accountability, especially regarding mental health impacts on youth.

Original article from EPIC Electronic Privacy · Caroline Anders


Related Pings

HIGH · AI & Security

AI Security - White House Framework Favors Corporations Over People

The White House's new AI framework favors corporate interests over public safety. This raises serious concerns about privacy and the risks of AI technology. Citizens are urged to advocate for stronger protections.

EPIC Electronic Privacy
MEDIUM · AI & Security

AI Security Operations - Vendors Promise Future Not Yet Realized

AI SOC vendors are making bold promises about autonomous operations, but real-world usage tells a different story. Many organizations are hesitant to trust these tools. Understanding this gap is crucial for effective security operations.

Help Net Security
MEDIUM · AI & Security

AI Security - Achieving Agentic Outcomes in Cybersecurity

Tom Tovar discusses the shift towards agentic AI models in cybersecurity. Organizations are adapting to improve their defenses against evolving threats. This change is crucial for staying relevant in a rapidly advancing tech landscape.

SC Media
MEDIUM · AI & Security

AI Security - Understanding Your AI Agents Explained

Okta's Matt Immler discusses the importance of knowing your AI agents. Organizations must ensure visibility and control to protect sensitive data. This is essential for security and innovation.

SC Media
HIGH · AI & Security

AI Security - Undetectable LLM Backdoor Attack Explained

A new method called ProAttack can stealthily compromise AI models using just a few poisoned samples. This poses a serious risk for organizations relying on LLMs. Current defenses are inadequate, highlighting the urgent need for improved security measures.

Help Net Security
HIGH · AI & Security

AI Security - Who Owns Access to AI Agents?

AI agents are widely used in enterprises, but many organizations struggle with access management. Fragmented ownership leads to security risks and potential data exposure. It's crucial for companies to clarify responsibilities and improve their access controls.

Help Net Security