Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment
Abstract
A method named FOA-Attack is proposed to enhance adversarial transferability to multimodal large language models by optimizing both global and local feature alignment using cosine similarity and optimal transport.
Multimodal large language models (MLLMs) remain vulnerable to transferable adversarial examples. While existing methods typically achieve targeted attacks by aligning global features (such as CLIP's [CLS] token) between adversarial and target samples, they often overlook the rich local information encoded in patch tokens. This leads to suboptimal alignment and limited transferability, particularly for closed-source models. To address this limitation, we propose a targeted transferable adversarial attack method based on feature optimal alignment, called FOA-Attack. Specifically, at the global level, we introduce a global feature loss based on cosine similarity to align the coarse-grained features of adversarial samples with those of target samples. At the local level, given the rich local representations within Transformers, we leverage clustering techniques to extract compact local patterns and alleviate redundant local features. We then formulate local feature alignment between adversarial and target samples as an optimal transport (OT) problem and propose a local clustering optimal transport loss to refine fine-grained feature alignment. Additionally, we propose a dynamic ensemble model weighting strategy that adaptively balances the influence of multiple models during adversarial example generation, thereby further improving transferability. Extensive experiments across various models demonstrate the superiority of the proposed method, which outperforms state-of-the-art methods, especially when transferring to closed-source MLLMs. The code is released at https://github.com/jiaxiaojunQAQ/FOA-Attack.
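The two alignment losses described above can be illustrated with a short, self-contained PyTorch sketch. This is not the authors' released implementation (see the linked repository for that): the k-means routine, the Sinkhorn solver, and all hyperparameters (number of clusters, entropic regularization, iteration counts) are illustrative assumptions.

```python
# Minimal sketch of the two alignment losses described in the abstract:
# a global cosine-similarity loss on [CLS] features and a local optimal
# transport (OT) loss on clustered patch tokens. Hyperparameters and the
# clustering/Sinkhorn routines are illustrative, not the official code.
import torch
import torch.nn.functional as F


def global_cosine_loss(adv_cls: torch.Tensor, tgt_cls: torch.Tensor) -> torch.Tensor:
    """Coarse-grained alignment: push the adversarial [CLS] feature toward the target's."""
    return 1.0 - F.cosine_similarity(adv_cls, tgt_cls, dim=-1).mean()


def cluster_tokens(tokens: torch.Tensor, n_clusters: int = 8, iters: int = 10) -> torch.Tensor:
    """Naive k-means over patch tokens (N, D) -> compact cluster centers (K, D)."""
    centers = tokens[torch.randperm(tokens.size(0))[:n_clusters]]
    for _ in range(iters):
        assign = torch.cdist(tokens, centers).argmin(dim=1)           # hard assignment (N,)
        centers = torch.stack([
            tokens[assign == k].mean(dim=0) if (assign == k).any() else centers[k]
            for k in range(n_clusters)
        ])
    return centers


def local_ot_loss(adv_tokens: torch.Tensor, tgt_tokens: torch.Tensor,
                  n_clusters: int = 8, eps: float = 0.1, sinkhorn_iters: int = 50) -> torch.Tensor:
    """Fine-grained alignment: entropic-OT distance between clustered patch features."""
    a = F.normalize(cluster_tokens(adv_tokens, n_clusters), dim=-1)   # (K, D)
    t = F.normalize(cluster_tokens(tgt_tokens, n_clusters), dim=-1)   # (K, D)
    cost = 1.0 - a @ t.T                                              # cosine cost matrix (K, K)
    mu = torch.full((n_clusters,), 1.0 / n_clusters)                  # uniform source marginal
    nu = torch.full((n_clusters,), 1.0 / n_clusters)                  # uniform target marginal
    K = torch.exp(-cost / eps)
    u = torch.ones_like(mu)
    for _ in range(sinkhorn_iters):                                   # Sinkhorn fixed-point updates
        v = nu / (K.T @ u + 1e-8)
        u = mu / (K @ v + 1e-8)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)                        # transport plan (K, K)
    return (plan * cost).sum()
```

In an attack loop, the global and local losses would be combined (e.g., as a weighted sum) and back-propagated to the image perturbation under the usual norm budget.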
Community
In this work, we propose FOA-Attack, a targeted transferable adversarial attack designed to improve transferability to multimodal large language models (MLLMs). Motivated by the limitations of current methods that rely only on global feature alignment (e.g., CLIP’s [CLS] token), we find that ignoring local patch features leads to suboptimal transfer, especially to closed-source models.
To address this, we propose a dual-level feature alignment strategy:
- Global level: a cosine similarity-based global feature loss to align coarse representations.
- Local level: a local clustering optimal transport loss that refines fine-grained alignment by leveraging local token clustering and optimal transport.
We further propose a dynamic ensemble model weighting strategy that adaptively balances the contribution of each surrogate model during adversarial example generation, further improving transferability. Extensive experiments show that FOA-Attack significantly outperforms state-of-the-art methods, particularly on closed-source MLLMs.
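As a rough illustration of how such a dynamic weighting step could look, the sketch below weights each surrogate encoder's alignment loss by a softmax over the current per-model losses, so models that are not yet well aligned receive more influence. This is an assumption for illustration only; the actual weighting rule in FOA-Attack may differ, and `temperature` is a hypothetical parameter.

```python
# Illustrative sketch of a dynamic ensemble weighting step (not the official
# FOA-Attack rule): surrogates with higher remaining alignment loss get more weight.
import torch

def dynamic_ensemble_weights(per_model_losses: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Map per-surrogate losses (M,) to weights (M,) that sum to 1."""
    with torch.no_grad():                        # weights act as coefficients, not a gradient path
        return torch.softmax(per_model_losses / temperature, dim=0)

# Usage within one attack iteration, given alignment losses from three surrogate encoders:
losses = torch.stack([torch.tensor(0.9), torch.tensor(0.4), torch.tensor(0.7)])
weights = dynamic_ensemble_weights(losses)
ensemble_loss = (weights * losses).sum()         # weighted objective driving the perturbation update
```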
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- One Surrogate to Fool Them All: Universal, Transferable, and Targeted Adversarial Attacks with CLIP (2025)
- Transferable Adversarial Attacks on Black-Box Vision-Language Models (2025)
- AdPO: Enhancing the Adversarial Robustness of Large Vision-Language Models with Preference Optimization (2025)
- X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP (2025)
- TRAIL: Transferable Robust Adversarial Images via Latent diffusion (2025)
- Unleashing the Power of Pre-trained Encoders for Universal Adversarial Attack Detection (2025)
- Rethinking Target Label Conditioning in Adversarial Attacks: A 2D Tensor-Guided Generative Approach (2025)