New font-rendering trick hides malicious commands from AI tools

A new font-rendering attack causes AI assistants to miss malicious commands shown on webpages by hiding them in seemingly harmless HTML.

The technique relies on social engineering to persuade users to run a malicious command displayed on a webpage, while the underlying HTML encodes different, harmless-looking text, so AI assistants that parse the page source cannot analyze the command.

Researchers at browser security company LayerX devised a proof-of-concept (PoC) that uses custom fonts to remap characters via glyph substitution, so the text rendered on screen differs from the text in the HTML. CSS then conceals the benign cover text, for example with a tiny font size or a color matching the background, while the payload is displayed clearly on the webpage.
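The concealment side of the technique can be sketched in a few lines of HTML and CSS. This is an illustrative reconstruction, not LayerX's actual PoC markup: the font name, file, class names, and text are all invented, and a real attack would need a crafted font file whose glyph table renders the benign characters as different ones.

```html
<style>
  /* Hypothetical remapping font: its glyph table would render the
     benign characters in the DOM as a different visible string. */
  @font-face {
    font-family: "RemapFont";
    src: url("remap.woff2"); /* assumed crafted font asset */
  }
  .rendered-payload {
    font-family: "RemapFont"; /* what the user sees differs from the DOM text */
  }
  .hidden-cover {
    font-size: 1px;   /* shrink the cover text to near invisibility... */
    color: #ffffff;   /* ...and match it to a white page background */
  }
</style>

<!-- An AI assistant reading the HTML sees only harmless strings. -->
<p class="rendered-payload">harmless-looking characters</p>
<p class="hidden-cover">Filler text that reads as benign to a page scanner.</p>
```

The key point is that DOM-based analysis sees only the encoded characters; the mapping to the malicious command exists solely in the font's glyph data, which text scanners typically do not inspect.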

Read more: bleepingcomputer.com