This is the official PathGen-LLaVA repo for PathGen-1.6M: 1.6 Million Pathology Image-text Pairs Generation through Multi-agent Collaboration

**Dataset**


Abstract

Vision Language Models (VLMs) like CLIP have attracted substantial attention in pathology, serving as backbones for applications such as zero-shot image classification and Whole Slide Image (WSI) analysis. Additionally, they can function as vision encoders when combined with large language models (LLMs) to support broader capabilities. Current efforts to train pathology VLMs rely on pathology image-text pairs from platforms like PubMed, YouTube, and Twitter, which provide limited, unscalable data with generally suboptimal image quality. In this work, we leverage large-scale WSI datasets like TCGA to extract numerous high-quality image patches. We then train a large multimodal model to generate captions for these images, creating PathGen-1.6M, a dataset containing 1.6 million high-quality image-caption pairs. Our approach involves multiple agent models collaborating to extract representative WSI patches, generating and refining captions to obtain high-quality image-text pairs. Extensive experiments show that integrating these generated pairs with existing datasets to train a pathology-specific CLIP model, PathGen-CLIP, significantly enhances its ability to analyze pathological images, with substantial improvements across nine pathology-related zero-shot image classification tasks and three whole-slide image tasks. Furthermore, we construct 200K instruction-tuning samples based on PathGen-1.6M and integrate PathGen-CLIP with the Vicuna LLM to create more powerful multimodal models through instruction tuning. Overall, we provide a scalable pathway for high-quality data generation in pathology, paving the way for next-generation general pathology models.

Usage of Trained PathGen-LLaVA

The trained PathGen-LLaVA can be downloaded via PathGen-LLaVA. Because we use our PathGen-CLIP-L as PathGen-LLaVA's vision encoder, you need to replace the vision encoder path in the config with the path to PathGen-CLIP-L-hf (which can be downloaded at this link), as sketched below.
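
For example, a minimal sketch of the config edit, assuming the LLaVA-style config key `mm_vision_tower`; the local paths are hypothetical placeholders:

```python
import json
import os

# Hypothetical local paths -- adjust to wherever you downloaded the checkpoints.
MODEL_DIR = "./PathGen-LLaVA"
VISION_ENCODER_DIR = "./PathGen-CLIP-L-hf"

config_path = os.path.join(MODEL_DIR, "config.json")
with open(config_path) as f:
    config = json.load(f)

# LLaVA-style configs reference the vision encoder via "mm_vision_tower";
# point it at the local PathGen-CLIP-L-hf directory.
config["mm_vision_tower"] = VISION_ENCODER_DIR

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```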

This model is based on 🌋 LLaVA: Large Language and Vision Assistant, so the model architecture and training scripts are heavily borrowed from https://github.com/haotian-liu/LLaVA.

You can use the LLaVA framework as-is to run inference with this model.
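
As an illustration, here is a minimal inference sketch using LLaVA's `eval_model` helper; the local model directory, prompt, and image path are hypothetical placeholders:

```python
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Hypothetical paths and prompt -- replace with your own.
model_path = "./PathGen-LLaVA"
prompt = "Describe the morphology shown in this pathology image patch."
image_file = "./example_patch.png"

# Pack the arguments the same way the LLaVA CLI does.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": prompt,
    "conv_mode": None,
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```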

Model size: 13.4B parameters (Safetensors, BF16)