AI & SecurityHIGH

AI Security - Incident Response Efforts to Surge by 2028

🎯

Basically, AI will cause many security problems that teams need to handle quickly by 2028.

Quick Summary

Gartner warns that by 2028, AI issues will dominate half of incident response efforts. Security teams must engage early to prevent costly incidents. The evolving landscape poses significant challenges for organizations.

What Happened

Gartner has issued a stark warning regarding the future of cybersecurity. By 2028, half of all incident response efforts in enterprises will be dedicated to managing issues arising from AI applications. As AI technology rapidly evolves, many organizations deploy custom-built AI solutions without fully testing them. This lack of preparation can lead to significant security vulnerabilities.

Gartner's VP analyst, Christopher Mixter, emphasized the complexity of these AI systems. They are dynamic and challenging to secure over time. Currently, most security teams lack established processes for addressing AI-related incidents, which can prolong resolution times and increase the effort required to manage these issues.

Who's Being Targeted

The implications of this trend affect a wide range of organizations, particularly those adopting AI technologies. As more businesses integrate AI into their operations, they become potential targets for security incidents related to these systems. Gartner predicts that within two years, half of organizations will utilize AI security platforms to safeguard their AI applications.

These platforms will help enforce acceptable use policies, monitor AI activity, and apply consistent security measures. The rise of AI-powered tools is crucial as they can mitigate risks related to prompt injection and data misuse, which are becoming increasingly common in AI applications.

Tactics & Techniques

Gartner's analysis highlights the need for security teams to adopt a 'shift left' approach. This means involving security professionals in AI project planning from the outset. By integrating security measures early in the development process, organizations can ensure that adequate controls are in place to prevent future incidents.

Additionally, the report underscores the growing importance of identity visibility and intelligence platforms. As machine identities outnumber human users significantly, organizations must improve their detection and remediation capabilities to manage both human and machine identities effectively.

Defensive Measures

To prepare for the challenges posed by AI technologies, organizations should take proactive steps. First, security teams should engage in AI projects early to establish security protocols. This will help in identifying potential vulnerabilities before they can be exploited.

Moreover, organizations must consider the implications of data sovereignty and cloud security. As geopolitical risks rise, nearly a third of organizations will demand comprehensive sovereignty over their cloud security controls. Implementing strong controls for data in transit and enhancing visibility into cryptographic processes will be essential in building trust and ensuring compliance.

In conclusion, as AI continues to evolve, organizations must adapt their security strategies to address the unique challenges it presents. By prioritizing early engagement and robust security measures, they can mitigate the risks associated with AI technologies.

🔒 Pro insight: Organizations must prioritize AI security integration early to avoid overwhelming incident response demands in the coming years.

Original article from

Infosecurity Magazine

Read Full Article

Related Pings

MEDIUMAI & Security

AI Security - Key Themes to Watch at RSAC 2026

RSAC 2026 is set to unveil crucial themes in cybersecurity, particularly around agentic AI. As organizations explore these advancements, understanding their implications is vital. Stay ahead of the curve by engaging with these emerging trends.

Arctic Wolf Blog·
MEDIUMAI & Security

AI Security - OpenAI Launches GPT-5.4 Mini and Nano Models

OpenAI has launched the GPT-5.4 mini and nano models, enhancing speed and efficiency for coding and data tasks. Developers can now leverage these advanced tools for better performance. This release signifies a major step in AI capabilities, making powerful tools more accessible and efficient.

Cyber Security News·
HIGHAI & Security

AI Security - Token Security Enhances Agent Protection

Token Security has launched a new intent-based security model for AI agents. This innovation helps organizations manage risks by aligning permissions with the agents' intended purposes. It's a crucial step in safeguarding enterprise environments as AI technology evolves.

Help Net Security·
MEDIUMAI & Security

AI Security - Polygraf AI Launches Real-Time Behavior Control

Polygraf AI has launched its Desktop Overlay for real-time compliance guidance. This innovative tool helps prevent sensitive data exposure, enhancing data protection in enterprise operations. With significant results in pilot tests, it’s a game-changer for organizations in regulated sectors.

Help Net Security·
MEDIUMAI & Security

AI Security - WorldCoin's New Identity Verification System

WorldCoin has launched AgentKit, linking AI agents to verified identities via iris scans. This aims to enhance trust and prevent misuse in AI interactions. With only 18 million users, the initiative seeks to make WorldCoin relevant again.

The Register Security·
HIGHAI & Security

AI Security - Menlo Delivers Unified Governance Platform

Menlo Security has launched a new Browser Security Platform to protect AI agents and humans in the workplace. This innovative solution addresses the security challenges posed by autonomous AI, ensuring safe operations. As AI integration grows, this platform is essential for maintaining security and governance in enterprises.

Help Net Security·