AI Security - Key Lessons from Evo's Design Partner Program

Snyk's Evo design partner program surfaced five crucial lessons about securing AI, drawn directly from its customers. Here is how visibility and risk intelligence are shaping governance in generative AI.
What Happened
In 2025, Snyk launched the Evo design partner program to tackle the challenges of securing generative AI. With a focus on customer needs, the company collaborated with over 5,000 clients to identify key areas for improvement in AI security. This initiative led to the development of Evo, an orchestrator designed to enhance visibility, governance, and risk management in AI applications.
The program highlighted five essential lessons aimed at addressing the complexities of AI sprawl and ensuring robust security measures. Each lesson reflects the real-world experiences of organizations navigating the evolving landscape of AI technologies.
Key Lessons Learned
- Visibility is Crucial: Many organizations underestimate the extent of shadow AI within their systems. Snyk discovered that effective visibility is essential for identifying hidden AI models and services. Their Evo AI-SPM’s Discovery Agent has proven invaluable, enabling teams to uncover thousands of AI assets quickly, leading to a more comprehensive understanding of their AI landscape.
- Tailored Discovery for Custom AI: As companies increasingly adopt custom AI solutions, standard detection methods often fall short. Snyk's design partners emphasized the need for tailored discovery tools that can recognize unique implementations. This insight led to the creation of Custom Discovery, which learns from a customer's codebase to identify specific patterns, enhancing detection accuracy.
- Scalable Governance Policies: The challenge of managing diverse AI models with varying risk profiles prompted the need for scalable governance solutions. Snyk introduced out-of-the-box policies that automatically evaluate AI models against critical security risks. This shift allows organizations to prioritize risks effectively and maintain consistent oversight across their AI assets.
- Risk Intelligence for Informed Decision-Making: Understanding the risks associated with AI models is vital for effective governance. The introduction of the Risk Intelligence Agent has enabled teams to assess vulnerabilities in AI systems systematically. This tool translates raw data into actionable insights, allowing organizations to build informed policies and respond proactively to potential threats.
- Operational Security for AI Systems: As AI technologies evolve, operational security must extend to encompass all components, including agents and model control planes. Snyk's design partners highlighted the need for centralized control mechanisms to manage AI assets effectively. The Policy Agent plays a crucial role in enforcing security measures and ensuring compliance within CI/CD pipelines.
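To make the discovery idea above concrete: a minimal sketch of what codebase-aware AI discovery might look like, assuming a simple regex-based scanner. The pattern names and regexes here are illustrative assumptions, not Snyk's actual detection logic; a tool like Custom Discovery would learn organization-specific patterns rather than hard-code them.

```python
import re
from pathlib import Path

# Illustrative patterns for common AI SDK usage (assumed, not Snyk's schema).
AI_PATTERNS = {
    "openai_client": re.compile(r"\bOpenAI\s*\("),
    "anthropic_client": re.compile(r"\bAnthropic\s*\("),
    "hf_pipeline": re.compile(r"\btransformers\.pipeline\s*\("),
}

def scan_for_ai_usage(root: str) -> list[dict]:
    """Walk a source tree and flag lines matching known AI usage patterns."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in AI_PATTERNS.items():
                if pattern.search(line):
                    findings.append({"file": str(path), "line": lineno, "pattern": name})
    return findings
```

Even a crude scanner like this illustrates why visibility matters: shadow AI usage often hides in ordinary application code that security tooling never inspects.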
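The policy-enforcement lesson can also be sketched as policy-as-code gating a CI/CD pipeline. The policy names, asset fields, and gate logic below are hypothetical examples for illustration, not the Policy Agent's real interface.

```python
# Hypothetical policies: each maps a name to a check over an AI asset record.
POLICIES = [
    {"name": "no-unapproved-models", "check": lambda a: a.get("approved", False)},
    {"name": "no-critical-vulns", "check": lambda a: a.get("critical_vulns", 0) == 0},
]

def evaluate(asset: dict) -> list[str]:
    """Return the names of policies this asset violates."""
    return [p["name"] for p in POLICIES if not p["check"](asset)]

def ci_gate(assets: list[dict]) -> int:
    """Exit-code-style gate: 0 if every asset passes, 1 if any violates a policy."""
    failed = {a["id"]: v for a in assets if (v := evaluate(a))}
    for asset_id, violations in failed.items():
        print(f"BLOCKED {asset_id}: {', '.join(violations)}")
    return 1 if failed else 0
```

The design point is that the same machine-readable policies run identically in CI and in production inventory scans, which is what makes governance consistent across thousands of assets.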
What This Means for the Future
The insights gained from the Evo design partner program underscore the importance of continuous discovery, real-time risk intelligence, and enforceable policies in AI security. As organizations strive to innovate with AI, they must also prioritize robust governance frameworks that adapt to the rapid changes in technology. The journey to secure generative AI is ongoing, and collaboration with customers will be key to navigating this complex landscape effectively.