arxiv:2504.12080

DC-SAM: In-Context Segment Anything in Images and Videos via Dual Consistency

Published on Apr 16
· Submitted by zaplm on Apr 28
Abstract

Given a single labeled example, in-context segmentation aims to segment the corresponding objects. This setting, known as one-shot segmentation in few-shot learning, probes a segmentation model's generalization ability and has been applied to various vision tasks, including scene understanding and image/video editing. While recent Segment Anything Models (SAM and SAM2) have achieved state-of-the-art results in interactive segmentation, these approaches are not directly applicable to in-context segmentation. In this work, we propose the Dual Consistency SAM (DC-SAM) method, based on prompt tuning, to adapt SAM and SAM2 for in-context segmentation of both images and videos. Our key insight is to enhance the features of SAM's prompt encoder by providing high-quality visual prompts. When generating a mask prior, we fuse SAM features to better align with the prompt encoder. We then design a cycle-consistent cross-attention between the fused features and the initial visual prompts. Next, we adopt a dual-branch design that uses discriminative positive and negative prompts in the prompt encoder. Furthermore, we design a simple mask-tube training strategy to extend the proposed dual-consistency method to mask tubes. Although DC-SAM is primarily designed for images, it can be seamlessly extended to the video domain with the support of SAM2. Given the absence of in-context segmentation benchmarks in the video domain, we manually curate and construct the first such benchmark from existing video segmentation datasets, named In-Context Video Object Segmentation (IC-VOS), to better assess the in-context capability of the model. Extensive experiments demonstrate that our method achieves 55.5 (+1.4) mIoU on COCO-20i, 73.0 (+1.1) mIoU on PASCAL-5i, and a J&F score of 71.52 on the proposed IC-VOS benchmark. Our source code and benchmark are available at https://github.com/zaplm/DC-SAM.
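As a concrete illustration of the dual-branch prompt design described above, here is a minimal PyTorch-style sketch. It is not the paper's implementation: the module name, tensor shapes, and number of query tokens are assumptions, and the actual fusion with the mask prior is omitted.

```python
# Hypothetical sketch (not the authors' code): a dual-branch prompt generator
# that turns fused support features into positive/negative prompt tokens.
import torch
import torch.nn as nn

class DualBranchPromptGenerator(nn.Module):
    def __init__(self, dim=256, num_queries=8, num_heads=8):
        super().__init__()
        # Learnable query tokens for the positive (foreground) and
        # negative (background) branches.
        self.pos_queries = nn.Parameter(torch.randn(num_queries, dim))
        self.neg_queries = nn.Parameter(torch.randn(num_queries, dim))
        self.pos_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.neg_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, fused_feats, fg_mask):
        # fused_feats: (B, N, dim) fused SAM features of the support image
        # fg_mask:     (B, N) boolean, True where the support mask is foreground
        B = fused_feats.size(0)
        pos_q = self.pos_queries.unsqueeze(0).expand(B, -1, -1)
        neg_q = self.neg_queries.unsqueeze(0).expand(B, -1, -1)
        # The positive branch attends only to foreground features, the negative
        # branch only to background features (key_padding_mask=True means "ignore").
        pos_prompts, _ = self.pos_attn(pos_q, fused_feats, fused_feats,
                                       key_padding_mask=~fg_mask)
        neg_prompts, _ = self.neg_attn(neg_q, fused_feats, fused_feats,
                                       key_padding_mask=fg_mask)
        # The concatenated tokens would stand in for point/box prompts at the
        # SAM mask decoder as sparse prompt embeddings.
        return torch.cat([pos_prompts, neg_prompts], dim=1)
```

In the actual method the generated prompts feed SAM's (or SAM2's) mask decoder in place of user clicks; the exact token counts and fusion details are described in the paper.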

Community


The main contributions of this work are:

  • We propose a novel prompt-consistency method based on SAM, called Dual-Consistency SAM (DC-SAM), tailored for one-shot segmentation tasks. It exploits the positive and negative features of the visual prompts, yielding high-quality prompts for in-context segmentation. Furthermore, this design can be easily extended to video tasks by combining SAM2 with a new mask-tube design.
  • We introduce a novel cycle-consistent cross-attention mechanism that ensures the final generated prompts focus on the key regions requiring prompting. When combined with SAM, this mechanism effectively filters out potentially ambiguous components in the features, further improving the accuracy and specificity of in-context segmentation (see the sketch after this list).
  • We collect a new video in-context segmentation benchmark, IC-VOS (In-Context Video Object Segmentation), featuring manually curated examples sourced from existing video benchmarks. In addition, we benchmark several representative works in IC-VOS.
  • With extensive experiments and ablation studies, the proposed method achieves state-of-the-art performance on various datasets and on our newly proposed in-context segmentation benchmark. DC-SAM achieves 55.5 (+1.4) mIoU on COCO-20i, 73.0 (+1.1) mIoU on PASCAL-5i, and a J&F score of 71.52 on the IC-VOS benchmark.
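The cycle-consistent cross-attention from the second bullet can be sketched as follows. This is a hedged illustration of the general idea only (forward attention from prompts to features, backward attention from features to prompts, and a round-trip consistency score that suppresses ambiguous features), not the authors' exact formulation; the function name, shapes, and gating scheme are assumptions.

```python
# Hypothetical sketch (not the authors' formulation): cycle-consistent
# cross-attention that down-weights support features whose attention does
# not "cycle back" to foreground regions.
import torch

def cycle_consistent_cross_attention(prompts, feats, fg_mask, dim=256):
    # prompts: (B, Q, dim) initial visual prompt tokens
    # feats:   (B, N, dim) fused SAM features of the support image
    # fg_mask: (B, N) boolean, True where the support mask is foreground
    scale = dim ** -0.5
    # Forward: each prompt token attends over all spatial features.
    fwd = torch.softmax(prompts @ feats.transpose(1, 2) * scale, dim=-1)   # (B, Q, N)
    # Backward: each spatial feature attends back over the prompt tokens.
    bwd = torch.softmax(feats @ prompts.transpose(1, 2) * scale, dim=-1)   # (B, N, Q)
    # Round trip: feature -> prompts -> features. A feature is cycle-consistent
    # if its round-trip attention mass lands mostly on foreground locations.
    round_trip = bwd @ fwd                                                 # (B, N, N)
    consistency = (round_trip * fg_mask.unsqueeze(1).float()).sum(-1)      # (B, N)
    # Suppress ambiguous features before they refine the prompts.
    gated_feats = feats * consistency.unsqueeze(-1)
    attn = torch.softmax(prompts @ gated_feats.transpose(1, 2) * scale, dim=-1)
    return prompts + attn @ gated_feats                                    # (B, Q, dim)
```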
