Authors

  • Hengyu Shi
  • Boynn

Fine-tuned CLIP-ViT-bigG-14 Model

This model is a fine-tuned version of laion/CLIP-ViT-bigG-14-laion2B-39B-b160k.

Usage

from transformers import CLIPTextModelWithProjection

base_model = CLIPTextModelWithProjection.from_pretrained("kaonai/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft")
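
A minimal sketch of encoding a prompt with the loaded text encoder, assuming the repository ships tokenizer files alongside the checkpoint (the prompt string is just an example):

from transformers import CLIPTokenizer, CLIPTextModelWithProjection
import torch

# Assumes tokenizer files are included in the fine-tuned repository.
tokenizer = CLIPTokenizer.from_pretrained("kaonai/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft")
model = CLIPTextModelWithProjection.from_pretrained("kaonai/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft")

# Tokenize an example prompt and run it through the text encoder.
inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Projected text embeddings, one vector per input prompt.
text_embeds = outputs.text_embeds
print(text_embeds.shape)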

Model Details

  • Format: Safetensors
  • Model size: 695M params
  • Tensor type: F32