AI Security - OpenAI Expands Bug Bounty for Safety Risks
Basically, OpenAI is asking outside researchers to help find ways its AI products could be misused or cause real-world harm.
OpenAI has launched a new Safety Bug Bounty program to address AI abuse and safety risks. This initiative invites researchers to report vulnerabilities that traditional security measures may overlook. It's a significant step towards enhancing AI safety and protecting users from potential harm.
What Happened
On March 26, 2026, OpenAI announced the launch of its new Safety Bug Bounty program. This initiative aims to engage researchers in identifying and reporting potential AI abuse and safety risks across its products. The program is hosted on Bugcrowd, a platform that connects organizations with ethical hackers. This new bounty program complements OpenAI's existing Security Bug Bounty, which has already rewarded researchers for identifying 409 security vulnerabilities since its inception in April 2023.
The Safety Bug Bounty focuses on issues that pose meaningful abuse and safety risks, even if they do not meet the criteria for traditional security vulnerabilities. Scenarios covered include agentic risks, violations of account integrity, and misuse of proprietary information. This expansion reflects OpenAI's commitment to addressing the broader implications of AI technology, beyond conventional security flaws.
Who's Affected
The new program is particularly relevant for researchers and ethical hackers interested in AI safety. By expanding the scope of its bounty program, OpenAI encourages a wider range of submissions, thus fostering a collaborative environment aimed at improving AI safety. This initiative also impacts users of OpenAI's products, as it aims to mitigate risks associated with AI misuse.
OpenAI has clarified that certain types of vulnerabilities, such as general content-policy bypasses without clear safety implications, are not eligible for rewards. However, researchers who identify flaws that can lead to direct user harm may still qualify for rewards, depending on the situation. This nuanced approach helps ensure that the focus remains on significant safety risks.
What Data Was Exposed
While the Safety Bug Bounty does not specifically target data exposure, it does address scenarios where proprietary information may be at risk. For instance, issues related to data exfiltration or unauthorized access to sensitive information can be reported under this program. OpenAI is particularly concerned with vulnerabilities that could lead to harmful actions or the misuse of its AI models.
The program also emphasizes the importance of maintaining account and platform integrity. Researchers are encouraged to report any violations that allow users to bypass restrictions or manipulate trust signals. By addressing these issues, OpenAI aims to enhance the overall safety and reliability of its AI systems.
What You Should Do
If you are a researcher interested in participating in the Safety Bug Bounty, you can submit your findings via Bugcrowd. OpenAI's team will triage submissions and may reroute them between the Safety and Security Bug Bounty programs based on their scope. This ensures that all relevant issues are addressed appropriately.
For users of OpenAI's products, it's worth staying informed about the risks associated with AI misuse, since understanding how these systems can be abused makes it easier to recognize and report suspicious behavior. As AI technology continues to evolve, initiatives like the Safety Bug Bounty will play a crucial role in ensuring its responsible use.
Infosecurity Magazine