A ViT-L CLIP model initialized from FARE2 (https://huggingface.co/chs20/fare2-clip). The text encoder is fine-tuned with the FARE loss using the Charmer attack with $\rho=50$ and $k=1$.
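A minimal usage sketch, assuming the checkpoint is published in an open_clip-compatible Hub layout like the base chs20/fare2-clip model; the image path and prompt texts are placeholders.

```python
import torch
import open_clip
from PIL import Image

# Assumption: this repo can be loaded through open_clip's hf-hub interface,
# as is typical for FARE-style CLIP checkpoints.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:RCLIP/CLIP-ViT-L-rho50-k1-FARE2"
)
tokenizer = open_clip.get_tokenizer("hf-hub:RCLIP/CLIP-ViT-L-rho50-k1-FARE2")
model.eval()

# Placeholder inputs for illustration.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Zero-shot classification probabilities over the prompts.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```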

Model size: 428M params · Tensor type: F32 · Format: Safetensors

Model tree for RCLIP/CLIP-ViT-L-rho50-k1-FARE2

Base model: chs20/fare2-clip → fine-tuned: this model
