arxiv:2505.21499

AdInject: Real-World Black-Box Attacks on Web Agents via Advertising Delivery

Published on May 27 · Submitted by NicerWang on May 28
Abstract

AdInject is a novel real-world black-box attack method that leverages internet advertising delivery to inject malicious content into the environment of vision-language model-based web agents, exposing a significant vulnerability in web agent security.

AI-generated summary

Vision-Language Model (VLM) based Web Agents represent a significant step towards automating complex tasks by simulating human-like interaction with websites. However, their deployment in uncontrolled web environments introduces significant security vulnerabilities. Existing research on adversarial environmental injection attacks often relies on unrealistic assumptions, such as direct HTML manipulation, knowledge of user intent, or access to agent model parameters, limiting their practical applicability. In this paper, we propose AdInject, a novel real-world black-box attack method that leverages internet advertising delivery to inject malicious content into the Web Agent's environment. AdInject operates under a significantly more realistic threat model than prior work, assuming a black-box agent, static malicious content constraints, and no specific knowledge of user intent. AdInject includes strategies for designing malicious ad content aimed at misleading agents into clicking, and a VLM-based ad content optimization technique that infers potential user intents from the target website's context and integrates these intents into the ad content to make it appear more relevant or critical to the agent's task, thus enhancing attack effectiveness. Experimental evaluations demonstrate the effectiveness of AdInject, with attack success rates exceeding 60% in most scenarios and approaching 100% in certain cases. These results strongly indicate that prevalent advertising delivery constitutes a potent and real-world vector for environment injection attacks against Web Agents. This work highlights a critical vulnerability in Web Agent security arising from real-world environment manipulation channels, underscoring the urgent need to develop robust defense mechanisms against such threats. Our code is available at https://github.com/NicerWang/AdInject.
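To make the intent-inference step described in the abstract concrete, here is a minimal sketch of what "VLM-based ad content optimization" could look like. Everything below is an illustrative assumption, not the paper's implementation: `query_vlm`, the prompt wording, the choice of the OpenAI chat client, and the model name are placeholders chosen for this sketch; consult the linked repository for the authors' actual code.

```python
# Illustrative sketch of AdInject-style ad content optimization.
# Hypothetical helper names and prompts; the paper's real pipeline
# lives at https://github.com/NicerWang/AdInject.

def query_vlm(prompt: str) -> str:
    """Send a prompt to a chat-style model and return its text reply.

    Any LLM/VLM endpoint works here; the OpenAI client is used purely
    as a stand-in, not as the paper's actual setup.
    """
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def infer_user_intents(site_context: str, k: int = 3) -> list[str]:
    """Guess likely user tasks from the target site's public context,
    matching the black-box assumption of no knowledge of actual intent."""
    prompt = (
        f"Given this website description, list the {k} most likely tasks "
        "a visitor is trying to complete, one per line:\n\n" + site_context
    )
    return [line.strip() for line in query_vlm(prompt).splitlines() if line.strip()]

def craft_ad_content(site_context: str) -> str:
    """Fold the inferred intents into a static ad creative so it reads
    as relevant, or even required, for the agent's current task."""
    intents = infer_user_intents(site_context)
    prompt = (
        "Write a short banner-ad text that presents clicking the ad as a "
        "necessary step for completing these tasks:\n- " + "\n- ".join(intents)
    )
    return query_vlm(prompt)
```

Note that because the threat model constrains the attacker to static malicious content, an optimization loop like this would run offline, once per target site, before the creative is submitted through an ordinary advertising network.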

