
Merged Model

Base model: AI-MO/Kimina-Autoformalizer-7B
Adapter: DeepSeek-Prover-V1.5_411/results/grpo_finetune/final_model/
Created with merge_lora_quantized.py
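
merge_lora_quantized.py is not reproduced in this card, so the following is only a minimal sketch of the merge step under standard assumptions: the adapter is a PEFT LoRA checkpoint, the merge uses transformers + peft, and the output directory name is hypothetical. Paths mirror the lines above.

```python
# Minimal LoRA-merge sketch (assumed workflow, not the exact merge_lora_quantized.py).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "AI-MO/Kimina-Autoformalizer-7B"
adapter_path = "DeepSeek-Prover-V1.5_411/results/grpo_finetune/final_model/"
output_dir = "Kimina-Autoformalizer-7B-RL"  # hypothetical output directory

# Load the base model in FP16, matching the published tensor type.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the GRPO-finetuned LoRA adapter and fold its weights into the base model.
model = PeftModel.from_pretrained(base, adapter_path)
merged = model.merge_and_unload()

# Save the merged weights as safetensors.
merged.save_pretrained(output_dir, safe_serialization=True)
tokenizer.save_pretrained(output_dir)
```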

Model size: 7.62B params
Tensor type: FP16 (Safetensors)
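
For reference, a minimal inference sketch assuming the merged FP16 weights are loaded directly from the repo named in the model tree below. The prompt format is illustrative only and not taken from this card.

```python
# Minimal inference sketch for the merged model; prompt is a hypothetical example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Jianyuan1/Kimina-Autoformalizer-7B-RL"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Formalize in Lean 4: For any natural number n, n + 0 = n."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```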

Model tree for Jianyuan1/Kimina-Autoformalizer-7B-RL

Quantizations: 1 model