AI & Security · HIGH

AI Security - Researchers Expose Font Trick for Malicious Commands

🎯 Basically, researchers found a way to hide malicious commands from AI assistants using custom fonts.

Quick Summary

Researchers have demonstrated a way to trick AI assistants into overlooking malicious commands hidden in web pages. The vulnerability poses risks for anyone who relies on AI to vet commands before running them. Major platforms have been alerted, but their responses have been inadequate. Stay vigilant and verify commands before execution.

What Happened

Researchers have published a proof-of-concept (PoC) that exploits custom fonts to deceive popular AI assistants like ChatGPT and Google’s Gemini. The method lets attackers hide dangerous instructions within web pages, making them invisible to the AI while still visible to human users. Imagine a book whose printed letters don’t match the characters the typesetter actually recorded: a human reader sees one message on the page, while a machine reading the underlying text sees another.

The implications are alarming. If a user asks an AI assistant if a command on a suspicious webpage is safe, the AI may respond positively, unaware of the hidden malicious instructions. This could lead to the execution of harmful commands that compromise the user's device, all while the AI remains oblivious to the danger.

Who's Affected

This vulnerability affects users of popular AI assistants that rely on web content for decision-making. With the increasing integration of AI in everyday tasks, many individuals may unwittingly become victims of this attack. The major AI platforms, including Microsoft and Google, have been informed of this vulnerability. However, their responses have been disappointing, with many providers dismissing the report as outside their security scope.

Only Microsoft and Google showed initial interest in addressing the issue, but even Google later downplayed the severity, closing the report without action. This leaves users exposed to potential attacks that exploit this vulnerability.

Tactics & Techniques

The attack combines custom fonts with Cascading Style Sheets (CSS) to create a deceptive display. While human readers see the dangerous command rendered on screen, an AI assistant reads the underlying HTML text, which is crafted to appear harmless. The tactic relies heavily on social engineering, as attackers count on users to trust their AI assistants without verifying the commands themselves.

To illustrate, a malicious webpage might instruct a user to execute a command that could lead to a system compromise. If the AI assistant checks the page, it may only see the harmless version and mislead the user into thinking it's safe. This highlights the critical need for users to remain vigilant and not solely rely on AI for security assessments.
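Conceptually, the trick can be sketched in a few lines of Python. The character mapping below is hypothetical: in the PoC it would be baked into a custom web font (which can also use ligatures, so the remapping need not be one character per glyph), but the effect is the same. The characters stored in the HTML, which an AI assistant reads, differ from the glyphs a human sees rendered.

```python
# Hypothetical glyph remapping, standing in for a malicious custom font.
# The font draws each character on the left as the glyph on the right,
# so the rendered page differs from the underlying text.
GLYPH_MAP = str.maketrans("abcdefgh", "rm -rf ~")

underlying = "abcdefgh"                     # characters in the HTML: what the AI reads
rendered = underlying.translate(GLYPH_MAP)  # glyphs a human sees on screen

print(underlying)  # abcdefgh  (looks inert to the assistant)
print(rendered)    # rm -rf ~  (the command the user is told to run)
```

A real page would additionally choose underlying characters that read as plausible, benign text to the assistant; the gibberish here just keeps the sketch short.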

How to Protect Yourself

To safeguard against this type of attack, users should take proactive measures. Here are some essential tips:

  • Copy and paste the exact command you plan to run, rather than relying on the AI's interpretation of a webpage.
  • Be cautious with any site that prompts you to execute commands, especially in terminal environments.
  • Trust your instincts; if something feels off, it’s better to pause and reassess.

Additionally, tools like the Malwarebytes Browser Guard can help by warning users if a website attempts to manipulate clipboard content. Keeping an up-to-date anti-malware solution will also provide protection against known malicious sites. If in doubt, consult tools like Malwarebytes Scam Guard to verify the legitimacy of suspicious websites.
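As a rough illustration of that verification habit (a heuristic sketch, not a substitute for the tools above), a script can flag pages that embed their own fonts, which this trick depends on. The function name and the regular expression are assumptions for this example, not part of any product mentioned here.

```python
import re

# Matches @font-face rules whose src loads an inline data: URI font,
# a common way to ship a custom (possibly glyph-remapped) font.
FONT_FACE_DATA_URI = re.compile(
    r"@font-face\s*{[^}]*src\s*:\s*url\(\s*['\"]?data:",
    re.IGNORECASE | re.DOTALL,
)

def embeds_inline_font(html: str) -> bool:
    """Heuristic: does this page define its own embedded web font?"""
    return bool(FONT_FACE_DATA_URI.search(html))

page = """
<style>
@font-face {
  font-family: "Sneaky";
  src: url("data:font/woff2;base64,AAAA") format("woff2");
}
</style>
<code style="font-family: Sneaky">abcdefgh</code>
"""
print(embeds_inline_font(page))            # True: treat AI verdicts with extra suspicion
print(embeds_inline_font("<p>hello</p>"))  # False
```

Note that many legitimate sites also embed fonts, so a match only means "look more closely," not "malicious."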

🔒 Pro insight: This exploit underscores the limitations of AI in interpreting web content, necessitating enhanced security measures in AI design.

Original article from Malwarebytes Labs

