arxiv:2505.21494

Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment

Published on May 27 · Submitted by jiaxiaojunQAQ on May 28

AI-generated summary

A method named FOA-Attack is proposed to enhance adversarial transferability in multimodal large language models by optimizing both global and local feature alignments using cosine similarity and optimal transport.

Abstract

Multimodal large language models (MLLMs) remain vulnerable to transferable adversarial examples. While existing methods typically achieve targeted attacks by aligning global features, such as CLIP's [CLS] token, between adversarial and target samples, they often overlook the rich local information encoded in patch tokens. This leads to suboptimal alignment and limited transferability, particularly for closed-source models. To address this limitation, we propose a targeted transferable adversarial attack method based on feature optimal alignment, called FOA-Attack, to improve adversarial transfer capability. Specifically, at the global level, we introduce a global feature loss based on cosine similarity to align the coarse-grained features of adversarial samples with those of target samples. At the local level, given the rich local representations within Transformers, we leverage clustering techniques to extract compact local patterns to alleviate redundant local features. We then formulate local feature alignment between adversarial and target samples as an optimal transport (OT) problem and propose a local clustering optimal transport loss to refine fine-grained feature alignment. Additionally, we propose a dynamic ensemble model weighting strategy to adaptively balance the influence of multiple models during adversarial example generation, thereby further improving transferability. Extensive experiments across various models demonstrate the superiority of the proposed method, outperforming state-of-the-art methods, especially in transferring to closed-source MLLMs. The code is released at https://github.com/jiaxiaojunQAQ/FOA-Attack.
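
Concretely, attacks of this kind are usually run as an iterative, PGD-style optimization of an image perturbation under an L_inf budget. Below is a minimal sketch of such an outer loop, assuming the surrogate encoder returns a global feature plus patch tokens; the `alignment_loss` argument, step size, budget, and iteration count are illustrative placeholders, not the released FOA-Attack implementation.

```python
import torch

def foa_style_attack(encoder, x_src, x_tgt, alignment_loss,
                     eps=16 / 255, alpha=1 / 255, steps=300):
    """Iteratively perturb x_src so its features align with those of x_tgt.

    `encoder(x)` is assumed to return (global_feature, patch_tokens);
    `alignment_loss` stands in for the combined global/local objective.
    """
    delta = torch.zeros_like(x_src, requires_grad=True)
    with torch.no_grad():
        tgt_global, tgt_patches = encoder(x_tgt)

    for _ in range(steps):
        adv_global, adv_patches = encoder(x_src + delta)
        loss = alignment_loss(adv_global, adv_patches, tgt_global, tgt_patches)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()                # descend the alignment loss
            delta.clamp_(-eps, eps)                           # stay inside the L_inf budget
            delta.copy_((x_src + delta).clamp(0, 1) - x_src)  # keep pixel values valid
        delta.grad = None
    return (x_src + delta).detach()
```

Everything specific to FOA-Attack lives in the alignment loss and the ensemble weighting; the loop itself is a standard transfer-attack scaffold.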

Community

Paper author · Paper submitter

In this work, we propose FOA-Attack, a targeted transferable adversarial attack designed to improve transferability to multimodal large language models (MLLMs). Current methods rely only on global feature alignment (e.g., CLIP's [CLS] token); we find that ignoring local patch features leads to suboptimal transfer, especially to closed-source models.

To address this, we propose a dual-level feature alignment strategy:

  • Global level: a cosine similarity-based global feature loss to align coarse representations.
  • Local level: a local clustering optimal transport loss that refines fine-grained alignment by leveraging local token clustering and optimal transport (see the illustrative sketch after this list).
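
A minimal sketch of how this dual-level loss could be assembled is shown here, assuming the surrogate encoder exposes a global ([CLS]-style) feature and patch-token features. The naive k-means routine, the Sinkhorn solver settings, and the loss weight `lam` are illustrative assumptions, not the settings used in the paper; the released code linked below is authoritative.

```python
import torch
import torch.nn.functional as F

def kmeans_centers(tokens, k=8, iters=10):
    """Naive k-means over patch tokens of shape (N, D); returns (k, D) centers.

    Cluster assignments are hard (non-differentiable), but gradients still
    flow to the tokens through the cluster means.
    """
    idx = torch.randperm(tokens.size(0), device=tokens.device)[:k]
    centers = tokens[idx].clone()
    for _ in range(iters):
        assign = torch.cdist(tokens, centers).argmin(dim=1)   # (N,)
        new_centers = []
        for c in range(k):
            members = tokens[assign == c]
            new_centers.append(members.mean(dim=0) if members.numel() > 0 else centers[c])
        centers = torch.stack(new_centers)
    return centers

def sinkhorn_plan(cost, reg=0.05, n_iters=50):
    """Entropic-regularized OT plan for a cost matrix with uniform marginals."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n, device=cost.device, dtype=cost.dtype)
    b = torch.full((m,), 1.0 / m, device=cost.device, dtype=cost.dtype)
    K = torch.exp(-cost / reg)
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v + 1e-9)
        v = b / (K.t() @ u + 1e-9)
    return u.unsqueeze(1) * K * v.unsqueeze(0)                 # transport plan (n, m)

def dual_level_loss(adv_global, adv_patches, tgt_global, tgt_patches, k=8, lam=1.0):
    # Global level: cosine-similarity loss between coarse (e.g. [CLS]) features.
    global_loss = 1.0 - F.cosine_similarity(adv_global, tgt_global, dim=-1).mean()
    # Local level: cluster patch tokens into compact local patterns, then align
    # the cluster centers via an optimal transport plan over a cosine cost.
    adv_c = kmeans_centers(adv_patches, k)
    tgt_c = kmeans_centers(tgt_patches, k)
    cost = 1.0 - F.normalize(adv_c, dim=-1) @ F.normalize(tgt_c, dim=-1).t()
    plan = sinkhorn_plan(cost)
    local_loss = (plan * cost).sum()
    return global_loss + lam * local_loss
```

Such a function could be plugged in as the `alignment_loss` of an iterative attack loop like the one sketched under the abstract above.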

We further propose a dynamic ensemble model weighting strategy that adaptively balances the influence of multiple surrogate models during adversarial example generation, further improving transferability. Extensive experiments show that FOA-Attack significantly outperforms state-of-the-art methods, particularly when transferring to closed-source MLLMs.
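
The precise weighting rule is not described on this page, so the following is only one plausible, illustrative reading: surrogates whose alignment loss is still high get larger weights, so the attack keeps pushing on the models it currently fools least. The softmax form and `temperature` below are assumptions, not the paper's formulation.

```python
import torch

def dynamic_ensemble_loss(per_model_losses, temperature=1.0):
    """Combine per-surrogate alignment losses with adaptive weights.

    `per_model_losses` is a list of scalar loss tensors, one per surrogate
    encoder. Weights are detached so they rescale the objective but are not
    optimized themselves.
    """
    losses = torch.stack(list(per_model_losses))
    weights = torch.softmax(losses.detach() / temperature, dim=0)
    return (weights * losses).sum()
```

In an ensemble attack loop, `per_model_losses` would be recomputed at every iteration from each surrogate's own dual-level loss before the backward pass.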

Code: https://github.com/jiaxiaojunQAQ/FOA-Attack


Models citing this paper: 0
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 0

Cite arxiv.org/abs/2505.21494 in a model, dataset, or Space README.md, or add this paper to a collection, to link it from this page.