Model Card for llm-course-hw3-tinyllama-qlora

This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on the cardiffnlp/tweet_eval dataset. It classifies the sentiment of a tweet into one of three classes: positive, neutral, or negative.

It was fine-tuned with 4-bit QLoRA to make training more memory- and time-efficient. Low-rank adapters were applied to the Q, K, V, O, and up-projection layers.
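The adapter setup described above could be reproduced with a `peft` configuration along these lines. This is a sketch, not the exact training script; the `target_modules` names assume TinyLlama's Llama-style layer naming:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters on the attention and up-projection layers
lora_config = LoraConfig(
    r=24,
    lora_alpha=48,
    lora_dropout=0.07,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```

Both objects would then be passed to `from_pretrained(..., quantization_config=bnb_config)` and `get_peft_model(model, lora_config)` respectively.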

Training procedure

This model was trained with batch_size=32, rank=24, alpha=48, lora_dropout=0.07, learning_rate=5e-4, and a cosine learning-rate scheduler on cardiffnlp/tweet_eval for a quarter of an epoch.
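In `transformers` terms, the reported hyperparameters correspond to training arguments roughly like the following. This is a hypothetical reconstruction; the original training script is not published:

```python
from transformers import TrainingArguments

# Hypothetical arguments matching the reported hyperparameters
training_args = TrainingArguments(
    output_dir="llm-course-hw3-tinyllama-qlora",
    per_device_train_batch_size=32,
    learning_rate=5e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=0.25,  # "a quarter of an epoch"
)
```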

The model achieved an F1 score of 0.50 on the test set.
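For reference, a macro-averaged F1 over the three classes can be computed as below. This is a minimal pure-Python sketch; the card does not state which averaging was used:

```python
LABELS = ["negative", "neutral", "positive"]

def macro_f1(y_true, y_pred):
    """Macro-averaged F1 over the three sentiment classes."""
    scores = []
    for label in LABELS:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(LABELS)
```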

Comparison

Before:

"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin" -> "negative"

Correct: "positive"

After:

"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin" -> "positive"

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# REPO_NAME, dataset, and eval() (the evaluation helper) are assumed
# to be defined earlier in the notebook.
model = AutoModelForCausalLM.from_pretrained(f"{REPO_NAME}-tinyllama-qlora", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(f"{REPO_NAME}-tinyllama-qlora")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # left padding for decoder-only generation

QLoRA_saved_model_accuracy = eval(model, dataset["test"], tokenizer)
print(f"Accuracy after TinyLlama QLoRA training: {QLoRA_saved_model_accuracy}")
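The evaluation helper itself is not shown in the card. One way to map the model's free-form generation back to one of the three labels is a small parser like the following `parse_label` helper, which is a hypothetical illustration, not part of the original code:

```python
LABELS = ("negative", "neutral", "positive")

def parse_label(generated: str, default: str = "neutral") -> str:
    """Return the first sentiment label mentioned in the generated text.

    Falls back to `default` when no known label appears.
    """
    text = generated.lower()
    # Pick the label whose first mention occurs earliest in the output.
    hits = [(text.find(label), label) for label in LABELS if label in text]
    return min(hits)[1] if hits else default
```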

Framework versions

  • Transformers: 4.47.0
  • Pytorch: 2.5.1+cu121
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0