# Model Card for DAM-3B

## Description
Describe Anything Model 3B (DAM-3B) takes user-specified regions in the form of points, boxes, scribbles, or masks within an image and generates detailed localized descriptions of those regions. DAM integrates full-image context with fine-grained local details using a novel focal prompt and a localized vision backbone enhanced with gated cross-attention. The model is for research and development only and is ready for non-commercial use.
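The focal prompt mentioned above pairs the full image with a zoomed-in view around the user's region. As a minimal sketch only: the helper name, expansion factor, and clipping scheme below are illustrative assumptions, not the exact DAM-3B recipe.

```python
def focal_crop_bounds(x0, y0, x1, y1, width, height, expand=3.0):
    # Expand the region box by `expand` around its center, then clip to the
    # image, yielding the bounds of a zoomed-in "focal" view that still
    # carries some surrounding context for the region.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w = (x1 - x0) * expand / 2
    half_h = (y1 - y0) * expand / 2
    return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
            min(width, int(cx + half_w)), min(height, int(cy + half_h)))

# A 100x100 box centered at (150, 170) in a 384x384 image:
bounds = focal_crop_bounds(100, 120, 200, 220, 384, 384)
print(bounds)  # (0, 20, 300, 320)
```

Both the full image and a crop at these bounds would then be presented to the model, so local detail and global context are available together.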

## License
[NVIDIA Noncommercial License](https://huggingface.co/nvidia/DAM-3B/blob/main/LICENSE)

## Intended Usage
This model is intended to demonstrate and facilitate understanding and usage of the Describe Anything models. It should be used primarily for research and non-commercial purposes.

## Model Architecture
**Architecture Type:** Transformer <br>
**Network Architecture:** ViT and Llama <br>

This model was developed based on [VILA-1.5](https://github.com/NVlabs/VILA). <br>
This model has 3B parameters. <br>
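
The gated cross-attention used to inject localized features can be illustrated with a small numeric sketch. The dimensions, gate initialization, and function names here are illustrative assumptions in the style of tanh-gated cross-attention blocks, not the DAM-3B implementation itself.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(query, keys, values):
    # Single-head dot-product attention: one d-dim query vector attends
    # over a list of d-dim key/value vectors (e.g. local-region features).
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

def gated_cross_attention(x, context, gate=0.0):
    # Residual branch scaled by tanh(gate). With gate = 0 the block is an
    # identity at initialization, so it can be inserted into a pretrained
    # backbone without disturbing it; training then learns how much of the
    # localized context to admit.
    attended = cross_attention(x, context, context)
    g = math.tanh(gate)
    return [xi + g * ai for xi, ai in zip(x, attended)]

x = [1.0, 0.0]                       # a token from the global stream
context = [[0.0, 1.0], [1.0, 1.0]]   # features from the localized branch

print(gated_cross_attention(x, context, gate=0.0))  # identity: [1.0, 0.0]
print(gated_cross_attention(x, context, gate=2.0))  # context mixed in
```

The zero-initialized gate is the key design point: it lets the localized vision branch be bolted onto an already-trained backbone without degrading it at the start of training.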

## Input
**Input Type(s):** Image, Text, Binary Mask <br>
**Input Format(s):** RGB Image, Binary Mask <br>
**Input Parameters:** 2D Image, 2D Binary Mask <br>
**Other Properties Related to Input:** 3 channels for RGB image, 1 channel for binary mask. Resolution is 384x384. <br>
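
A region prompt such as a box can be rasterized into the single-channel binary mask described above. This stdlib-only sketch (the helper name and box convention are illustrative, not part of the DAM-3B API) builds a 384x384 mask alongside a 3-channel RGB placeholder matching the spec.

```python
RESOLUTION = 384  # model input resolution per this card

def box_to_mask(x0, y0, x1, y1, size=RESOLUTION):
    # Rasterize an axis-aligned box into a binary mask: 1 inside, 0 outside.
    # Returns a size x size grid, i.e. the card's single-channel binary mask.
    return [[1 if (x0 <= x < x1 and y0 <= y < y1) else 0
             for x in range(size)]
            for y in range(size)]

# A 3-channel RGB placeholder at the same resolution (3 x H x W layout).
image = [[[0 for _ in range(RESOLUTION)] for _ in range(RESOLUTION)]
         for _ in range(3)]

mask = box_to_mask(100, 120, 200, 220)
print(len(image), len(mask), len(mask[0]))  # 3 384 384
print(sum(sum(row) for row in mask))        # 100 * 100 = 10000 pixels set
```

Point, scribble, or mask prompts reduce to the same representation: a 0/1 grid at the input resolution, paired with the 3-channel image.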

## Output
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** 1D Text <br>
**Other Properties Related to Output:** Detailed descriptions for the visual region. <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Ampere
* NVIDIA Hopper
* NVIDIA Lovelace

**Preferred/Supported Operating System(s):** <br>
* Linux

## Training Dataset
[Describe Anything Training Datasets](https://huggingface.co/datasets/nvidia/describe-anything-dataset)

## Evaluation Dataset
We evaluate our models on our detailed localized captioning benchmark: [DLC-Bench](https://huggingface.co/datasets/nvidia/DLC-Bench).

## Inference
PyTorch

## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).