Model Card for llm-course-hw3-lora

Model Details

In this work, we fine-tuned the base model OuteAI/Lite-Oute-1-300M-Instruct on the cardiffnlp/tweet_eval tweet-sentiment dataset to classify tweets into one of three classes: positive, neutral, or negative.

Model Description

We used a system prompt to instruct the model:

SYSTEM PROMPT:

You are a tweet sentiment classifier. For each tweet input, analyze its sentiment and output exactly one word: "negative", "neutral", or "positive". Do not include any extra text.
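
For illustration, a tweet can be wrapped with this system prompt via the tokenizer's chat template; the exact template is defined by the base model, and the tweet below is just one of the test examples shown later in this card:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")

SYSTEM_PROMPT = (
    "You are a tweet sentiment classifier. For each tweet input, analyze its "
    'sentiment and output exactly one word: "negative", "neutral", or "positive". '
    "Do not include any extra text."
)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Ben Smith / Smith (concussion) remains out of the lineup Thursday, Curtis #NHL #SJ"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```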

The base model, however, is not trained to return only the sentiment label. We therefore implemented a custom LoRA linear layer to achieve parameter-efficient fine-tuning (PEFT) of this model, replacing the k_proj and v_proj projections of the attention blocks with LoRA-wrapped versions; a sketch follows below.
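
A minimal sketch of such a wrapper, assuming a standard PyTorch nn.Linear base layer and a LLaMA-style module layout (the actual training code may differ):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter weights are trained
        self.scaling = alpha / rank
        # B starts at zero so the wrapped layer is initially identical to the base layer.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T

# Wrap the attention projections; the model.model.layers[i].self_attn path
# assumes a LLaMA-style architecture and may differ for this base model.
for layer in model.model.layers:
    layer.self_attn.k_proj = LoRALinear(layer.self_attn.k_proj, rank=8, alpha=16)
    layer.self_attn.v_proj = LoRALinear(layer.self_attn.v_proj, rank=8, alpha=16)
```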

Training Details

The model was trained for one epoch with batch_size=16, rank=8, alpha=16, and learning_rate=5e-6.
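
A sketch of how these hyperparameters could be passed to a Hugging Face Trainer; train_ds and data_collator are placeholders for the preprocessed tweet_eval split and collator, and the actual training loop may differ:

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="lora-tweet-sentiment",
    per_device_train_batch_size=16,
    learning_rate=5e-6,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = Trainer(
    model=model,              # base model with LoRALinear-wrapped k_proj / v_proj
    args=args,
    train_dataset=train_ds,   # placeholder: tokenized tweet_eval train split
    data_collator=data_collator,
)
trainer.train()
```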

The fine-tuned model achieved a macro F1-score of 0.51 on the test set, compared with 0.06 for the base model.
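
For reference, the macro F1-score over the three classes can be computed with scikit-learn; y_true and y_pred below are hypothetical lists of gold labels and parsed model outputs for the test split:

```python
from sklearn.metrics import f1_score

macro_f1 = f1_score(
    y_true, y_pred,
    labels=["negative", "neutral", "positive"],
    average="macro",  # unweighted mean of per-class F1 scores
)
print(f"macro F1: {macro_f1:.2f}")
```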

Comparison

==========
User Prompt: "Ben Smith / Smith (concussion) remains out of the lineup Thursday, Curtis #NHL #SJ"
Label: neutral
Before: The tweet "Ben Smith / Smith (concussion) remains out of the
After: neutral neutral ralph neutral ral

==========
User Prompt: @user Alciato: Bee will invest 150 million in January, another 200 in the Summer and plans to bring Messi by 2017"
Label: positive
Before: The tweet "Alciato: Bee will invest 150
After: neutral ralitive ralitive ralitive

Summary

LoRA fine-tuning lets the model learn task-specific updates in a low-rank subspace while the base weights stay frozen, thereby adapting the model to new tasks at a small fraction of the full training cost.
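
Concretely, for a frozen weight matrix $W \in \mathbb{R}^{d \times k}$, LoRA learns the update

$$W' = W + \frac{\alpha}{r} BA, \qquad B \in \mathbb{R}^{d \times r},\quad A \in \mathbb{R}^{r \times k},\quad r \ll \min(d, k),$$

so only $r(d + k)$ parameters are trained per adapted layer instead of $dk$.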
