---
datasets:
- starriver030515/FUSION-Pretrain-10M
- starriver030515/FUSION-Finetune-12M
base_model:
- meta-llama/Llama-3.1-8B-Instruct
- google/siglip-so400m-patch14-384
license: apache-2.0
---

# Model Card for FUSION

This is the checkpoint of FUSION-LLaMA3.1-8B after Stage 1 training.

## Model Details

**Model Description**

<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/encoder.jpg" alt="encoder" width="1000px">

<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/decoder.jpg" alt="decoder" width="1000px">

FUSION is a family of multimodal large language models that adopts a fully integrated vision-language architecture, enabling comprehensive and fine-grained cross-modal understanding. In contrast to prior approaches that primarily perform shallow or late-stage modality fusion during the LLM decoding phase, FUSION achieves deep, dynamic integration across the entire vision-language processing pipeline.

To enable this, FUSION utilizes Text-Guided Unified Vision Encoding, which incorporates textual context directly into the vision encoder. This design allows for pixel-level vision-language alignment and facilitates early-stage cross-modal interaction.
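
As a rough illustration of the idea (not the released implementation; all module names, dimensions, and the placement of the cross-attention below are assumptions), text-guided vision encoding can be sketched as a vision transformer block whose patch tokens additionally attend to the embedded question:

```python
import torch
import torch.nn as nn


class TextGuidedVisionBlock(nn.Module):
    """Toy encoder block: image patch tokens also cross-attend to the embedded
    question, so visual features are text-conditioned from the early layers on.
    Module names and dimensions are illustrative, not FUSION's actual code."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, patches: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # Standard self-attention over the image patches.
        h = self.norm1(patches)
        patches = patches + self.self_attn(h, h, h)[0]
        # Text-guided step: patch tokens (queries) attend to question tokens (keys/values).
        patches = patches + self.cross_attn(self.norm2(patches), text, text)[0]
        # Feed-forward refinement.
        return patches + self.mlp(self.norm3(patches))


block = TextGuidedVisionBlock()
patches = torch.randn(1, 196, 64)      # e.g. 14x14 patch tokens
question = torch.randn(1, 12, 64)      # embedded question tokens
print(block(patches, question).shape)  # torch.Size([1, 196, 64])
```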

During decoding, FUSION employs a Context-Aware Recursive Alignment Decoding strategy. This component dynamically aggregates and refines visual features based on the evolving textual context at each decoding step, allowing the model to capture question-level semantics with high precision.
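
A minimal sketch of the recursive idea, again with illustrative names and dimensions rather than FUSION's actual code: at each decoding step the current language-model hidden state re-queries the visual tokens, and the readout is folded into a running visual summary.

```python
import torch
import torch.nn as nn


class RecursiveVisualAligner(nn.Module):
    """Toy sketch: at every decoding step the current text context re-queries the
    visual tokens, and the readout is folded into a running visual summary.
    Names, dimensions, and the GRU-style update are assumptions for illustration."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.update = nn.GRUCell(dim, dim)  # recursively refines the summary

    def forward(self, text_state, vision_tokens, prev_summary):
        # Query the visual tokens with the current decoding context.
        q = self.query_proj(text_state).unsqueeze(1)             # (B, 1, D)
        readout, _ = self.attn(q, vision_tokens, vision_tokens)  # (B, 1, D)
        # Blend the fresh readout with the summary carried over from earlier steps.
        return self.update(readout.squeeze(1), prev_summary)     # (B, D)


aligner = RecursiveVisualAligner()
vision_tokens = torch.randn(2, 196, 64)  # encoded image tokens
summary = torch.zeros(2, 64)             # visual summary carried across steps
for step in range(3):                    # stand-in for autoregressive decoding
    text_state = torch.randn(2, 64)      # LLM hidden state at this step
    summary = aligner(text_state, vision_tokens, summary)
print(summary.shape)                     # torch.Size([2, 64])
```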

To further enhance alignment and reduce the semantic gap between modalities, FUSION integrates a Dual-Supervised Semantic Mapping Loss, which provides simultaneous supervision in both the visual and textual embedding spaces. This dual-path guidance strengthens the consistency and semantic coherence of the fused representations.
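
Conceptually, such a dual-supervised objective maps features in both directions and penalizes the mapping error in each target space; the sketch below uses simple linear heads and an MSE objective purely as stand-ins (the exact loss formulation is defined in the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualSupervisedMappingLoss(nn.Module):
    """Toy sketch of dual supervision: features are mapped vision->text and
    text->vision, and each mapping is penalized in its target embedding space.
    The linear heads and MSE objective are stand-ins, not the paper's exact loss."""

    def __init__(self, vis_dim: int = 64, txt_dim: int = 64):
        super().__init__()
        self.v2t = nn.Linear(vis_dim, txt_dim)  # visual -> textual space
        self.t2v = nn.Linear(txt_dim, vis_dim)  # textual -> visual space

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        loss_v2t = F.mse_loss(self.v2t(vis), txt)  # supervised in the text space
        loss_t2v = F.mse_loss(self.t2v(txt), vis)  # supervised in the visual space
        return loss_v2t + loss_t2v


criterion = DualSupervisedMappingLoss()
vis = torch.randn(8, 64)    # pooled visual embeddings for a batch
txt = torch.randn(8, 64)    # pooled textual embeddings for the same samples
print(criterion(vis, txt))  # scalar combining both directions
```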

**Base Model**

**LLM**: [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)

**Vision Encoder**: [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
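
If you only need these two published base components (rather than the fused FUSION checkpoint itself, whose loading code lives in the GitHub repository linked below), they can be fetched with Hugging Face Transformers. Note that the Llama 3.1 repository is gated and requires accepting the license on the Hub first.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SiglipImageProcessor,
    SiglipVisionModel,
)

# Base LLM (gated repository: request access on the Hub before downloading).
llm = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Vision tower: only the vision half of the SigLIP checkpoint is loaded here.
vision_tower = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384")
image_processor = SiglipImageProcessor.from_pretrained("google/siglip-so400m-patch14-384")
```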

## Training Details

**Training Strategies**

FUSION is trained with a three-stage framework that ensures comprehensive alignment and integration between the visual and linguistic modalities.

- **Stage 1: Foundational Semantic Alignment**: We pretrain the vision encoder on extensive image-caption datasets to establish precise semantic alignment between visual and textual representations.
- **Stage 1.5: Contextual Multimodal Fusion**: In contrast to Stage 1, this intermediate stage incorporates various types of QA data along with image-caption pairs, enhancing the model's adaptability in aligning vision and language representations across a broad spectrum of scenarios.
- **Stage 2: Visual Instruction Tuning**: At this stage, we expose the model to a variety of visual tasks, enabling it to answer downstream vision-related questions effectively.

**Training Data**

- [10M FUSION Alignment Data](https://huggingface.co/datasets/starriver030515/FUSION-Pretrain-10M) for Stage 1
- [12M FUSION Curated Instruction Tuning Data](https://huggingface.co/datasets/starriver030515/FUSION-Finetune-12M) for Stage 1.5 and Stage 2
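
For reference, both datasets can be pulled locally with `huggingface_hub`; the snippet below only downloads the raw files, so see each dataset card for the expected directory layout and loading instructions.

```python
from huggingface_hub import snapshot_download

# Stage 1 alignment data (image-caption pairs).
pretrain_dir = snapshot_download(
    repo_id="starriver030515/FUSION-Pretrain-10M", repo_type="dataset"
)

# Stage 1.5 / Stage 2 instruction-tuning data.
finetune_dir = snapshot_download(
    repo_id="starriver030515/FUSION-Finetune-12M", repo_type="dataset"
)

print(pretrain_dir, finetune_dir)
```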

## Performance

<img src="https://raw.githubusercontent.com/starriver030515/FUSION/main/images/performance.jpg" alt="performance" width="1000px">

**Where to send questions or comments about the model:**

https://github.com/starriver030515/FUSION/issues

## Paper or resources for more information

- Code: https://github.com/starriver030515/FUSION
- Paper: coming soon