---
license: apache-2.0
---

<div align="left" style="line-height: 1;">
  <a href="https://bagel-ai.org/" target="_blank" style="margin: 2px;">
    <img alt="Homepage" src="https://img.shields.io/badge/BAGEL-Homepage-a468fe?color=a468fe&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/ByteDance-Seed/BAGEL/blob/main/BAGEL-Technical-Report.pdf" target="_blank" style="margin: 2px;">
    <img alt="Technical Report" src="https://img.shields.io/badge/(upcoming)-Technical%20Report-brightgreen?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://github.com/bytedance-seed/BAGEL" target="_blank" style="margin: 2px;">
    <img alt="Github" src="https://img.shields.io/badge/Github-Repo-536af5?color=536af5&logo=github" style="display: inline-block; vertical-align: middle;"/>
  </a>
</div>

# 🥯 BAGEL • Unified Model for Multimodal Understanding and Generation

> We present **BAGEL**, an open-source multimodal foundation model with 7B active parameters (14B total), trained on large-scale interleaved multimodal data. BAGEL outperforms current top-tier open-source VLMs such as Qwen2.5-VL and InternVL-2.5 on standard multimodal understanding leaderboards, and delivers text-to-image quality competitive with strong specialist generators such as SD3.

Moreover, BAGEL demonstrates qualitative results in classical image-editing scenarios that are superior to those of the leading open-source models. More importantly, it extends to free-form visual manipulation, multiview synthesis, and world navigation, capabilities that constitute "world-modeling" tasks beyond the scope of previous image-editing models.

Below is a showcase of BAGEL's qualitative performance.
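
To try BAGEL yourself, the snippet below is a minimal sketch of fetching the checkpoint from the Hugging Face Hub with `huggingface_hub`; the repo id and target directory are assumptions, and the actual inference code lives in the GitHub repository linked above rather than a stock `transformers` pipeline.

```python
# Minimal sketch: download the BAGEL checkpoint with huggingface_hub.
# The repo id below is an assumption; substitute this model card's actual
# repo id, then run the inference scripts from the GitHub repo against it.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ByteDance-Seed/BAGEL",  # assumed repo id
    local_dir="./BAGEL-weights",     # target directory for the weights
)
print(f"Checkpoint downloaded to: {local_dir}")
```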

## 📊 Benchmarks

### 1. Visual Understanding

| Model (≈7B class) | MMBench-C ↑ | MMMU ↑   | MM-Vet ↑ | MathVista ↑ |
| ----------------- | ----------: | -------: | -------: | ----------: |
| Janus-Pro-7B      | 79.2        | 41.0     | 50.0     | –           |
| Qwen2.5-VL        | 83.5        | **58.6** | 67.1     | –           |
| **BAGEL (ours)**  | **85.0**    | 55.3     | **67.2** | **73.1**    |

### 2. Text-to-Image Generation · GenEval

| Model        | Overall ↑ |
| ------------ | --------: |
| FLUX-1-dev   | 0.82      |
| SD3-Medium   | 0.74      |
| Janus-Pro-7B | 0.80      |
| **BAGEL**    | **0.88**  |

### 3. Image Editing

| Benchmark                | Step1X-Edit | Gemini-2-exp. | **BAGEL** | **BAGEL + CoT** |
| ------------------------ | ----------: | ------------: | --------: | --------------: |
| **GEdit-Bench-EN** (↑)   | 7.09        | –             | **7.36**  | –               |
| **IntelligentBench** (↑) | 14.9        | 57.6          | 44.0      | **55.3**        |

## License

BAGEL is licensed under the Apache 2.0 license. It is finetuned from [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct), and uses the [FLUX.1-schnell VAE model](https://huggingface.co/black-forest-labs/FLUX.1-schnell) and the [siglip-so400m-14-980-flash-attn2-navit](https://huggingface.co/HuggingFaceM4/siglip-so400m-14-980-flash-attn2-navit) model, all under Apache 2.0.

## ✍️ Citation

```bibtex
@article{deng2025bagel,
  title   = {Emerging Properties in Unified Multimodal Pretraining},
  author  = {Deng, Chaorui and Zhu, Deyao and Li, Kunchang and Gou, Chenhui and Li, Feng and Wang, Zeyu and Zhong, Shu and Yu, Weihao and Nie, Xiaonan and Song, Ziang and Shi, Guang and Fan, Haoqi},
  journal = {arXiv preprint arXiv:TODO},
  year    = {2025}
}
```