---
base_model: HuggingFaceTB/SmolVLM-256M-Instruct
library_name: transformers
model_name: SmolVLM-256M-Instruct-SFT-CLEVR
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolVLM-256M-Instruct-SFT-CLEVR
This model is a version of [HuggingFaceTB/SmolVLM-256M-Instruct](https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct) fine-tuned on CLEVR visual question answering data.
It has been trained using [TRL](https://github.com/huggingface/trl).
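## Quick start
The sketch below shows one way to run inference with this checkpoint. SmolVLM checkpoints load through `AutoProcessor` and `AutoModelForVision2Seq` from `transformers`; the model path and image file here are placeholders, so replace them with the actual repository id (or local directory) and a CLEVR-style scene image.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "SmolVLM-256M-Instruct-SFT-CLEVR"  # placeholder: local dir or Hub repo id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat-formatted prompt containing one image placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "How many objects are in the scene?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)

image = Image.open("clevr_scene.png")  # placeholder: any rendered CLEVR scene
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```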
## Training procedure
This model was trained with supervised fine-tuning (SFT).
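The exact training script and dataset are not included in this card. As a rough illustration, the sketch below follows the shape of TRL's vision-language SFT example for the versions listed under "Framework versions"; the dataset id, column names (`messages`, `images`), and configuration are placeholders rather than the actual training setup.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForVision2Seq, AutoProcessor
from trl import SFTConfig, SFTTrainer

base_id = "HuggingFaceTB/SmolVLM-256M-Instruct"
processor = AutoProcessor.from_pretrained(base_id)
model = AutoModelForVision2Seq.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Placeholder dataset id; each row is assumed to hold a chat-style
# "messages" list and an "images" list.
dataset = load_dataset("user/clevr-conversations", split="train")

def collate_fn(examples):
    # Render each conversation to text and batch it together with its images.
    texts = [processor.apply_chat_template(ex["messages"], tokenize=False) for ex in examples]
    images = [ex["images"] for ex in examples]  # one list of images per example
    batch = processor(text=texts, images=images, return_tensors="pt", padding=True)
    # Causal-LM labels: ignore padding and image tokens in the loss.
    labels = batch["input_ids"].clone()
    labels[labels == processor.tokenizer.pad_token_id] = -100
    image_token_id = processor.tokenizer.convert_tokens_to_ids("<image>")
    labels[labels == image_token_id] = -100
    batch["labels"] = labels
    return batch

training_args = SFTConfig(
    output_dir="SmolVLM-256M-Instruct-SFT-CLEVR",
    remove_unused_columns=False,                    # keep the image column
    dataset_kwargs={"skip_prepare_dataset": True},  # the collator handles tokenization
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=collate_fn,
    processing_class=processor.tokenizer,
)
trainer.train()
```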
### Framework versions
- TRL: 0.12.0
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Datasets: 3.0.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```