Brief Description

Llama 2 7B base model fine-tuned on 1000 random samples from the Alpaca GPT-4 instruction dataset using QLoRA with 4-bit quantization.

This is a demo of how an LLM can be fine-tuned in a low-resource environment such as Google Colab.
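A minimal sketch of the QLoRA setup such a run would use, based on the `transformers`, `peft`, `bitsandbytes`, and `datasets` libraries. The hyperparameters (rank, alpha, target modules) and the dataset identifier `vicgalle/alpaca-gpt4` are illustrative assumptions, not the exact values used in the notebook:

```python
# Hedged sketch: QLoRA fine-tuning setup for Llama 2 7B on Alpaca GPT-4 samples.
# Assumes transformers, peft, bitsandbytes, and datasets are installed and that
# you have access to meta-llama/Llama-2-7b-hf on the Hugging Face Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from datasets import load_dataset

model_id = "meta-llama/Llama-2-7b-hf"

# 4-bit NF4 quantization keeps the 7B base model within a Colab GPU's memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable weights; the 4-bit base stays frozen.
# r, lora_alpha, and target_modules below are assumed values for illustration.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# 1000 random samples from an Alpaca GPT-4 dataset (dataset name is an assumption).
dataset = (
    load_dataset("vicgalle/alpaca-gpt4", split="train")
    .shuffle(seed=42)
    .select(range(1000))
)
```

With this configuration only the LoRA adapter weights are updated during training, which is what makes fine-tuning a 7B-parameter model feasible on a single consumer-grade GPU.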

You can find more details about the experiment in the Colab notebook used to fine-tune the model here.
