A new study by Kaspersky reveals a growing trend toward “indirect prompt injection” – a technique used to manipulate the outputs of large language models (LLMs) like ChatGPT and search chatbots powered by AI.

While no instances of serious destructive actions by chatbots have been found, the potential for misuse remains.

LLMs are powerful tools used in various applications from document analysis to recruitment and even threat research. However, Kaspersky researchers discovered that a vulnerability where malicious actors can embed hidden instructions within websites and online documents is being exploited in the wild. These instructions can then be picked up by LLM-based systems, potentially influencing search results or chatbot responses.

The study identified several uses for indirect prompt injection:

* HR-related injections: Job seekers are embedding prompts in resumes to manipulate recruitment algorithms and ensure favourable outcomes or prioritisation by AI systems. Techniques including using small fonts or matching text colour to the background are applied to hide the attack from human reviewers.

* Ad injections. Advertisers are placing prompt injections in landing pages to influence search chatbots to generate positive reviews of products.

* Injection as protest. Individuals opposed to the widespread use of LLMs are embedding protest prompts in their personal websites and social media profiles expressing their dissent through humorous, serious, or aggressive instructions.

* Injection as insult. On social media, users are employing prompt injection as a form of insult or to disrupt spam bots – often with requests to generate poems, ASCII art, or opinions on political topics.

While the study found no evidence of malicious use for financial gain, it highlights potential future risks. For instance, attackers could manipulate LLMs to spread misinformation or exfiltrate sensitive data.

“Indirect prompt injection is a novel vulnerability that highlights the need for robust security measures in the age of AI,” says Vladislav Tushkanov, research development group manager at Kaspersky’s Machine Learning Technology Research Team. “By understanding these risks and implementing appropriate safeguards, we can ensure that LLMs are used safely and responsibly.”