AdaptLLM committed
Commit 91ba2b8 · verified · 1 Parent(s): e55bc7d

Update README.md

Files changed (1):
  1. README.md +0 -9
README.md CHANGED
@@ -17,15 +17,6 @@ This repo contains the **biomedicine MLLM developed from Llama-3.2-11B-Vision-I
 
  The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains/edit/main/README.md)
 
- We investigate domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation.
- **(1) Data Synthesis**: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. **Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs.**
- **(2) Training Pipeline**: While two-stage training--first on image-caption pairs, then on visual instruction tasks--is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity in domain-specific post-training.
- **(3) Task Evaluation**: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B) and then evaluating their performance on various domain-specific tasks.
-
- <p align='center'>
- <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/-Jp7pAsCR2Tj4WwfwsbCo.png" width="600">
- </p>
-
  ## 1. To Chat with AdaMLLM
 
  Our model architecture aligns with the base model, Llama-3.2-Vision-Instruct. We provide a usage example below, and you may refer to the official [Llama-3.2-Vision-Instruct Repository](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) for more advanced usage instructions.
 
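The usage example the README refers to falls outside the hunk shown above. As an illustration only, a minimal sketch of such a chat call with the Hugging Face transformers classes used for Llama-3.2-Vision-style models might look as follows; the repository id, image path, and prompt below are placeholders (assumptions), not taken from this commit.

```python
# Minimal sketch of chatting with a Llama-3.2-Vision-style MLLM via transformers.
# NOTE: the model id, image path, and prompt are assumptions for illustration only;
# substitute the actual AdaMLLM checkpoint id from the project page.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "AdaptLLM/biomed-Llama-3.2-11B-Vision-Instruct"  # hypothetical repo id

# Load the checkpoint and its processor.
model = MllamaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("path/to/your/image.jpg")  # placeholder image path

# Build a chat-style prompt containing one image and one question.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe the key findings in this image."},
    ]}
]
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to(model.device)

# Generate and decode the model's reply.
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

Because the architecture aligns with the base model, the same `MllamaForConditionalGeneration` and `AutoProcessor` classes used for Llama-3.2-11B-Vision-Instruct should apply unchanged; only the checkpoint id differs.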