---
license: cc-by-nc-nd-4.0
tags:
  - pathology
  - pytorch
  - self-supervised
extra_gated_prompt: >-
  You agree to not use the model to conduct experiments that cause harm to human
  subjects.
extra_gated_fields:
  Country: country
  Full name: text
  Current affiliation: text
  Type of Affiliation:
    type: select
    options:
      - Academia
      - Industry
      - label: Other
        value: other
  Current and official institutional email: text
  I want to use this model for: text
  I agree to use this model for non-commercial, academic purposes only: checkbox
library_name: timm
---

# mSTAR: A Multimodal Knowledge-enhanced Whole-slide Pathology Foundation Model

**Note:** As our paper is currently under peer review, the model weights are not publicly available at this time. For specific inquiries or special circumstances, please contact Yingxue Xu at [email protected].

**Abstract:** Remarkable strides have been made in computational pathology (CPath) with task-agnostic foundation models (FMs) that advance the performance of a wide array of downstream clinical tasks. Despite this promising performance, several challenges remain. First, prior works have resorted to either vision-only or vision-caption data, disregarding invaluable pathology reports and gene expression profiles, which respectively offer distinct knowledge for versatile clinical applications. Second, current progress in pathology FMs predominantly concentrates on the patch level, where the restricted context of patch-level pretraining fails to capture whole-slide patterns. Here we curated the largest multimodal dataset of H&E diagnostic whole-slide images and their associated pathology reports and RNA-Seq data, resulting in 26,169 slide-level modality pairs from 10,275 patients across 32 cancer types. To leverage these data for CPath, we propose a novel whole-slide pretraining paradigm, Multimodal Self-TAught PRetraining (mSTAR), which injects multimodal knowledge at the whole-slide context into the pathology FM. The proposed paradigm revolutionizes the pretraining workflow for CPath, enabling the pathology FM to acquire whole-slide context. To our knowledge, this is the first attempt to incorporate multimodal knowledge at the slide level to enhance pathology FMs, expanding the modelling context from unimodal to multimodal knowledge and from the patch level to the slide level. To systematically evaluate the capabilities of mSTAR, we conduct extensive experiments covering slide-level unimodal and multimodal applications across 7 diverse types of tasks with 43 subtasks, the largest spectrum of downstream tasks to date. The average performance across these slide-level applications consistently demonstrates significant enhancements for mSTAR compared to SOTA FMs.

## How to use

```python
import timm
from torchvision import transforms

# Load the mSTAR patch encoder directly from the Hugging Face Hub
# (the repository is gated, so authenticate with `huggingface_hub` first).
model = timm.create_model(
    "hf-hub:Wangyh/mSTAR",
    pretrained=True,
    init_values=1e-5,
    dynamic_img_size=True,
)

# Standard ImageNet preprocessing for H&E patches.
transform = transforms.Compose(
    [
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ]
)
```

Alternatively, build the ViT-L/16 backbone in `timm` and load the downloaded checkpoint (`pytorch_model.bin`) manually:

```python
import timm
import torch
from torchvision import transforms

# Build the ViT-L/16 backbone with the same configuration used for pretraining.
model = timm.create_model(
    "vit_large_patch16_224",
    img_size=224,
    patch_size=16,
    init_values=1e-5,
    num_classes=0,
    dynamic_img_size=True,
)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cpu"), strict=True)

# Same preprocessing as above.
transform = transforms.Compose(
    [
        transforms.Resize(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ]
)
```
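For whole-slide pipelines, patches are typically embedded in batches and the resulting features aggregated by a downstream slide-level model. A rough sketch under stated assumptions (the `PatchDataset` class, file names, and batch size below are illustrative, not part of this repository, and square patches are assumed so the transform yields 224x224 tensors):

```python
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class PatchDataset(Dataset):
    """Hypothetical dataset over pre-extracted WSI patch files."""
    def __init__(self, paths, transform):
        self.paths = paths
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img)

patch_paths = ["patch_0000.png", "patch_0001.png"]  # placeholder file names
loader = DataLoader(PatchDataset(patch_paths, transform), batch_size=2, num_workers=0)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

embeddings = []
with torch.inference_mode():
    for batch in loader:
        embeddings.append(model(batch.to(device)).cpu())
embeddings = torch.cat(embeddings)  # (num_patches, 1024) for the ViT-L backbone
```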