AI Security - EPIC Urges OpenAI to Withdraw Initiative
In short, EPIC wants OpenAI to withdraw a ballot initiative that, critics argue, protects the company's interests rather than children.
EPIC and a coalition urge OpenAI to withdraw its AI safety initiative in California, claiming it protects the company, not children. Families are already filing lawsuits linked to AI-related harms. This initiative could set a dangerous precedent for accountability in AI development.
What Happened
On March 17, 2026, EPIC (Electronic Privacy Information Center) joined forces with a coalition of child safety advocates and civil society groups to send a letter to OpenAI. They urged the company to withdraw its AI safety ballot initiative in California, known as the Parents & Kids Safe AI Act. Despite its seemingly protective name, the initiative is criticized for prioritizing OpenAI's interests over actual child safety.
The coalition argues that the initiative offers only narrow protections for children while limiting families' ability to seek legal recourse. This is particularly alarming given rising concerns about AI's impact on mental health, especially among teenagers. The letter notes that at least seven families have already filed lawsuits against OpenAI, linking ChatGPT to teen suicides and psychiatric hospitalizations.
Who's Affected
The stakeholders affected by this initiative include not only OpenAI but also the families and children who may be put at risk by AI technologies. With more than one million users discussing suicidal thoughts with ChatGPT each week, the implications of the initiative are profound. The coalition argues that allowing a company with such a troubling record to dictate safety regulations sets a dangerous precedent.
Families who have already experienced harms linked to AI technologies are particularly concerned. They feel their ability to hold companies accountable is being undermined, which could lead to further harm. In the coalition's view, the initiative does not genuinely address the needs of children and families; it serves to protect OpenAI.
What the Letter Highlights
The letter from EPIC and the coalition emphasizes the lack of meaningful safeguards in the proposed initiative. It points out that the initiative effectively allows companies like OpenAI to write their own rules regarding child safety. This raises questions about transparency and accountability in AI development and deployment.
Moreover, the lawsuits filed by families point to alarming patterns in how young users engage with AI. That so many are discussing suicidal thoughts with ChatGPT indicates a pressing need for robust safety measures, and the coalition contends that the current initiative fails to address these issues adequately.
What You Should Do
For concerned citizens, especially parents, it is essential to stay informed about developments regarding AI safety initiatives. Engaging with local legislators who prioritize child safety in technology is crucial. Advocates recommend supporting measures that genuinely protect children rather than those that serve corporate interests.
EPIC has expressed its willingness to collaborate with California legislators who aim to create effective regulations. Families and advocates are encouraged to voice their concerns and push for legislation that prioritizes real safeguards against the harms posed by AI technologies. Taking action now can help shape a safer future for children in the digital age.