Llama-3.2-3B-Instruct-Football_Match_History

Model Details

Model Description

The model is based on Llama 3.2-3B-Instruct and fine-tuned on historical football match data from five major European leagues (England, Spain, Germany, Italy, and France) spanning the 1993–94 to 2022–23 seasons. It provides team-specific match results for a given season upon request.

  • Developed by: mrp1mple
  • Finetuned from model [optional]: meta-llama/Llama-3.2-3B-Instruct

Uses

Direct Use

Obtaining historical match results for a particular football team in a specified season. Summarizing a team’s performance based on the historical data available in the dataset.

Downstream Use [optional]

As a knowledge component in football analytics dashboards. Potentially extended for predictive modeling or advanced statistics about team performances.

Out-of-Scope Use

Predicting real-time match outcomes or betting odds, as the data is purely historical and the model was not trained for predictive tasks on future games. General text generation that is unrelated to football or outside the dataset’s scope.

Bias, Risks, and Limitations

The model’s knowledge is limited to matches within the top 5 European leagues and up to the 2022–23 season. It may not accurately handle data or events outside this range. Any biases present in the original dataset (e.g., incomplete or missing matches for certain seasons) may affect the model’s responses. The model does not receive real-time updates and may therefore provide outdated information if asked about recent or ongoing seasons.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations, including potential expansions of the dataset, additional fine-tuning for broader football coverage, or domain-specific disclaimers.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model_name = "mrp1mple/Llama-3.2-3B-Instruct-Football_Match_History"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = """Below is an instruction that describes a task, paired with an input. Write a response that appropriately completes the request.

Instruction: Provide match results for a specific team in a specific season.

Input: Manchester United, 2012-2013

Response: """

# Tokenize the prompt, generate a completion, and decode it back to text
inputs = tokenizer(prompt, return_tensors="pt")
output_tokens = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_tokens[0], skip_special_tokens=True))

Training Details

Training Data

Sourced from a Kaggle dataset containing more than 110,000 matches. Includes match results from the top five European leagues (England, Spain, Germany, Italy, and France) between the 1993–94 and 2022–23 seasons.

Training Procedure

Preprocessing [optional]

A JSONL file was created pairing (instruction, input, output) records for fine-tuning. Each record contains:

  • Instruction: "Provide match results for a specific team in a specific season."
  • Input: a team name and a season (e.g., Manchester United, 2012-2013).
  • Output: a concatenated string of all matches played by that team in that season.
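As a rough illustration, the sketch below shows how such JSONL records might be built from raw match rows. The column names (home_team, away_team, score), the build_record helper, and the placeholder rows are hypothetical; the actual Kaggle schema is not documented in this card.

import json

def build_record(team, season, matches):
    # Hypothetical helper: turn one team-season's matches into an
    # (instruction, input, output) record as described above.
    results = "; ".join(
        f"{m['home_team']} {m['score']} {m['away_team']}" for m in matches
    )
    return {
        "instruction": "Provide match results for a specific team in a specific season.",
        "input": f"{team}, {season}",
        "output": results,
    }

# Example usage with made-up placeholder rows (not real match data):
matches = [{"home_team": "Team A", "away_team": "Team B", "score": "2-1"}]
with open("football_matches.jsonl", "w") as f:
    f.write(json.dumps(build_record("Team A", "1993-1994", matches)) + "\n")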

Training Hyperparameters

  • Training regime: LoRA-based fine-tuning (low-rank adaptation) in 4-bit precision; 1 epoch, learning_rate=1e-4.
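For illustration only, a minimal sketch of how 4-bit LoRA fine-tuning with these hyperparameters might be set up using the transformers, bitsandbytes, and peft libraries is shown below. The LoRA rank, alpha, target modules, and batch size are assumptions; only the 1 epoch and learning_rate=1e-4 values come from this card.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-3.2-3B-Instruct"

# Load the base model in 4-bit precision (QLoRA-style setup).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# LoRA adapter config; r, lora_alpha, and target_modules are assumed values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyperparameters stated in this card: 1 epoch, learning_rate=1e-4.
training_args = TrainingArguments(
    output_dir="llama32-football-lora",
    num_train_epochs=1,
    learning_rate=1e-4,
    per_device_train_batch_size=2,  # assumed; not documented in this card
    fp16=True,
)
# A Trainer (or trl's SFTTrainer) would then consume the JSONL dataset with these arguments.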