---
datasets:
  - nvidia/describe-anything-dataset
language:
  - en
base_model:
  - Efficient-Large-Model/VILA1.5-3b
pipeline_tag: image-text-to-text
---

# Describe Anything

NVIDIA, UC Berkeley, UCSF

Long Lian, Yifan Ding, Yunhao Ge, Sifei Liu, Hanzi Mao, Boyi Li, Marco Pavone, Ming-Yu Liu, Trevor Darrell, Adam Yala, Yin Cui

[Paper] | [Code] | [Project Page] | [Video] | [HuggingFace Demo] | [Model/Benchmark/Datasets] | [Citation]

## Model Card for DAM-3B

### Description

Describe Anything Model 3B (DAM-3B) takes user-specified regions within images, in the form of points, boxes, scribbles, or masks, and generates detailed localized descriptions of those regions. DAM integrates full-image context with fine-grained local details using a novel focal prompt and a localized vision backbone enhanced with gated cross-attention. The model is for research and development only. This model is ready for non-commercial use.
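
To make the focal prompt concrete, here is a minimal sketch of pairing the full image with a context-expanded crop around the target region, so the model sees both global context and local detail. The helper name and the `context_scale` parameter are illustrative assumptions, not part of the official DAM code.

```python
import numpy as np
from PIL import Image

def build_focal_prompt(image: Image.Image, mask: np.ndarray, context_scale: float = 3.0):
    """Illustrative focal-prompt construction: pair the full image with a
    crop around the masked region. (Hypothetical helper, not the DAM API.)"""
    ys, xs = np.nonzero(mask)  # pixels belonging to the target region
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()

    # Expand the region's bounding box to include surrounding context.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w = (x1 - x0 + 1) * context_scale / 2
    half_h = (y1 - y0 + 1) * context_scale / 2
    left = max(int(cx - half_w), 0)
    top = max(int(cy - half_h), 0)
    right = min(int(cx + half_w), image.width)
    bottom = min(int(cy + half_h), image.height)

    focal_crop = image.crop((left, top, right, bottom))
    focal_mask = mask[top:bottom, left:right]
    # Full image/mask give global context; the crop carries local detail.
    return (image, mask), (focal_crop, focal_mask)
```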

### License

NVIDIA Noncommercial License

### Intended Usage

This model is intended to demonstrate and facilitate understanding and usage of the Describe Anything models. It should primarily be used for research and non-commercial purposes.

### Model Architecture

**Architecture Type:** Transformer
**Network Architecture:** ViT and Llama

This model was developed based on VILA-1.5 and has 3B model parameters.
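
The gated cross-attention mentioned above is commonly implemented as a tanh-gated residual block (the pattern popularized by Flamingo). The sketch below illustrates that general pattern, with region tokens querying full-image tokens for context; it is an assumption of how such a block can look, not the exact DAM implementation.

```python
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Tanh-gated cross-attention residual block (Flamingo-style pattern).
    A sketch of the general technique; the actual DAM block may differ."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate initialized to zero: the block starts as an identity mapping,
        # so the pretrained backbone is undisturbed early in training.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, local_tokens: torch.Tensor, global_tokens: torch.Tensor):
        # Local (region) tokens attend to full-image tokens for context.
        q = self.norm(local_tokens)
        ctx, _ = self.attn(q, global_tokens, global_tokens)
        return local_tokens + torch.tanh(self.gate) * ctx
```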

### Input

**Input Type(s):** Image, Text, Binary Mask
**Input Format(s):** RGB Image, Binary Mask
**Input Parameters:** 2D Image, 2D Binary Mask
**Other Properties Related to Input:** 3 channels for RGB image, 1 channel for binary mask. Resolution is 384x384.
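
Based on the spec above, a plausible preprocessing step might look like the following. The resize and tensor-layout choices are assumptions inferred from the stated resolution and channel counts; the authoritative pipeline is in the official code.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# Hypothetical preprocessing matching the spec above: a 3-channel RGB image
# at 384x384 plus a 1-channel binary mask at the same resolution.
RESOLUTION = 384

to_image_tensor = transforms.Compose([
    transforms.Resize((RESOLUTION, RESOLUTION)),
    transforms.ToTensor(),  # float tensor in [0, 1], shape (3, 384, 384)
])

def preprocess(image: Image.Image, mask: np.ndarray):
    pixel_values = to_image_tensor(image.convert("RGB"))
    mask_tensor = torch.from_numpy(mask.astype(np.float32))[None]  # (1, H, W)
    # Nearest-neighbor resize keeps the mask binary.
    mask_tensor = torch.nn.functional.interpolate(
        mask_tensor[None], size=(RESOLUTION, RESOLUTION), mode="nearest"
    )[0]
    return pixel_values, mask_tensor  # (3, 384, 384), (1, 384, 384)
```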

### Output

**Output Type(s):** Text
**Output Format:** String
**Output Parameters:** 1D Text
**Other Properties Related to Output:** Detailed descriptions of the visual region.

**Supported Hardware Microarchitecture Compatibility:**

- NVIDIA Ampere
- NVIDIA Hopper
- NVIDIA Lovelace

**Preferred/Supported Operating System(s):**

- Linux

### Training Dataset

Describe Anything Training Datasets

### Evaluation Dataset

We evaluate our models on our detailed localized captioning benchmark: DLC-Bench.

### Inference

PyTorch
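
A hypothetical PyTorch usage sketch follows. It assumes the checkpoint can be loaded through Hugging Face `transformers` with `trust_remote_code`; the `describe` call and file names are illustrative only, so consult the [Code] repository for the actual inference API.

```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoModel

# Hypothetical usage sketch; method names below are assumptions, and the
# official inference entry points live in the [Code] repository.
model = AutoModel.from_pretrained(
    "nvidia/DAM-3B", trust_remote_code=True, torch_dtype=torch.float16
).to("cuda").eval()

image = Image.open("example.jpg")
mask = np.load("example_mask.npy")  # binary HxW array marking the region

with torch.no_grad():
    description = model.describe(image=image, mask=mask)  # illustrative call
print(description)
```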

### Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.

### Citation

If you use our work or our implementation in this repo, or find them helpful, please consider citing our paper:

```bibtex
@article{lian2025describe,
  title={Describe Anything: Detailed Localized Image and Video Captioning},
  author={Long Lian and Yifan Ding and Yunhao Ge and Sifei Liu and Hanzi Mao and Boyi Li and Marco Pavone and Ming-Yu Liu and Trevor Darrell and Adam Yala and Yin Cui},
  journal={arXiv preprint arXiv:2504.16072},
  year={2025}
}
```