Model Card for ogulcanakca/llama3-8b-epdk-domain-adapter-v1

Model Description

This repository contains a PEFT LoRA adapter fine-tuned on the meta-llama/Meta-Llama-3-8B-Instruct base model for Domain Adaptation. The adapter was trained on a Turkish dataset comprising various documents related to the Turkish Energy Market Regulatory Authority (EPDK), such as legislation, laws, communiqués, and board decisions.

The goal is to make the base Llama 3 8B Instruct model more familiar with the language, terminology, style, and concepts specific to the Turkish energy market regulation domain. When used with the base model, this adapter potentially offers improved understanding of texts from this domain and can generate more consistent outputs related to it.

Note: This repository does not contain a full model, only the LoRA weights that need to be applied to the base model. This adapter was trained using 4-bit quantization (QLoRA) via the PEFT library.

  • Developed by: ogulcanakca
  • Model type: Transformer-based Causal Language Model (Llama 3) + PEFT LoRA adapter
  • Language(s) (NLP): Turkish (tr)
  • License: Llama 3 Community License (https://llama.meta.com/llama3/license/)
  • Finetuned from model: meta-llama/Meta-Llama-3-8B-Instruct

Uses

Direct Use

This LoRA adapter can be used by loading it onto the meta-llama/Meta-Llama-3-8B-Instruct base model. The combined model is expected to understand, summarize, and answer prompts related to EPDK legislation more consistently than the base model alone. A complete code example is provided in the "How to Get Started with the Model" section below.

Note: As this adapter was trained for a very short trial period (200 steps), the domain adaptation effect is limited. More extensive training or evaluation is recommended before using it for critical tasks.

Downstream Use

This adapter could serve as a better starting point for further fine-tuning on more specific tasks related to EPDK regulations (e.g., a specialized Q&A model, text classification, summarization).
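
For example, the adapter can be reloaded in trainable mode and fine-tuning can be resumed. A minimal sketch, assuming the same 4-bit QLoRA setup as in the usage example below; the dataset and trainer wiring are intentionally left out:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# is_trainable=True keeps the LoRA weights unfrozen so training can continue.
# (For QLoRA training, peft.prepare_model_for_kbit_training is typically
# applied to the base model first.)
model = PeftModel.from_pretrained(
    base_model,
    "ogulcanakca/llama3-8b-epdk-domain-adapter-v1",
    is_trainable=True,
)
model.print_trainable_parameters()  # only the LoRA parameters are trainable

# From here, pass `model` to transformers.Trainer or trl.SFTTrainer together
# with a task-specific dataset (e.g. EPDK question-answer pairs).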

Out-of-Scope Use

  • Not suitable for use as a general-purpose chatbot (the base Instruct model is better for that).
  • Not expected to show significant improvement over the base model on topics outside Turkish energy market regulations.
  • May generate incorrect, incomplete, or outdated information. Should absolutely not be used for legal or financial advice.
  • Should not be used to generate harmful, unethical, discriminatory, or biased content.

Bias, Risks, and Limitations

  • This adapter inherits potential biases and risks present in the base meta-llama/Meta-Llama-3-8B-Instruct model.
  • The training data (EPDK documents) covers a specific time range and may contain outdated information. Information generated by the model may not be current.
  • The model may generate incorrect information ("hallucinations"), especially on topics not well-represented or contradictory in the training data.
  • Potential OCR errors or imperfections during the cleaning phase in the training data might affect the model's performance.
  • Limited Training: This adapter was trained for only 200 steps, which is very short for domain adaptation. Therefore, the adaptation effect is minimal, and the model's knowledge or stylistic alignment within this domain may not be significantly improved. This version should be considered more of a "pipeline test".

Recommendations

  • The accuracy of information generated by the model should always be verified against reliable sources.
  • Users should be aware of the model's limitations and avoid using it for critical applications (e.g., legal interpretation).
  • For more reliable results, further training with significantly more steps/epochs is recommended.

How to Get Started with the Model

You need the transformers, peft, accelerate, and bitsandbytes libraries to use this LoRA adapter. The code below loads the base model with 4-bit quantization and applies this adapter:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import os

# Ensure you are logged in to Hugging Face if the repo is private or requires Llama access
# from huggingface_hub import login
# login(token="YOUR_HF_TOKEN") # Or use huggingface-cli login

# Base model ID
base_model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
# This adapter's Hub ID
adapter_id = "ogulcanakca/llama3-8b-epdk-domain-adapter-v1"

# 4-bit Quantization Config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16 # Use bfloat16 if supported
)

# Load the base model quantized
print(f"Loading base model: {base_model_id}")
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True # May be needed
)

# Load the tokenizer
print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Load the LoRA adapter onto the base model
print(f"Loading adapter: {adapter_id}")
model = PeftModel.from_pretrained(base_model, adapter_id)
print("Adapter loaded successfully.")
model.eval() # Set model to evaluation mode

# Inference Example
prompt = "EPDK'nın elektrik piyasasındaki temel görevleri nelerdir?" # Example prompt in Turkish
messages = [
    {"role": "system", "content": "You are a helpful assistant knowledgeable about Turkish energy market regulations."},
    {"role": "user", "content": prompt}
]

# Llama 3 Instruct prompt format
print("Generating response...")
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Terminator IDs to stop generation
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

# Generate text
with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=512, # Max new tokens to generate
        eos_token_id=terminators,
        do_sample=True, # For more creative outputs
        temperature=0.6,
        top_p=0.9,
    )

# Decode the generated output
response_ids = outputs[0][input_ids.shape[-1]:]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)

print("\nModel Output:")
print(response_text)
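
If a standalone checkpoint is preferred (e.g. for deployment without loading the adapter via peft), the LoRA weights can be merged into an unquantized copy of the base model. This is only a sketch; merging into a 4-bit quantized base is not recommended, so the base is loaded here in bfloat16, which requires considerably more memory than the 4-bit path above, and the output directory name is a placeholder:

import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_fp = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(base_fp, "ogulcanakca/llama3-8b-epdk-domain-adapter-v1")
merged = merged.merge_and_unload()          # folds LoRA weights into the base weights
merged.save_pretrained("llama3-8b-epdk-merged")  # hypothetical output directory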

Training Details

Training Data

The model was fine-tuned on cleaned text from the ogulcanakca/epdk_elektrik_piyasasi_mevzuat Hugging Face dataset. This dataset was derived from approximately 3,300 Turkish documents in various formats (PDF, DOCX, etc.) related to the Turkish Energy Market Regulatory Authority (EPDK), obtained through text extraction, OCR, and basic NLP cleaning.

Dataset Link (Raw Cleaned): https://huggingface.co/datasets/ogulcanakca/epdk_elektrik_piyasasi_mevzuat

Before training, the texts in this dataset were chunked using the meta-llama/Meta-Llama-3-8B-Instruct tokenizer into segments of 2048 tokens with an overlap of 200 tokens. The resulting domain_adaptation_data.jsonl file (or equivalent Dataset object) containing 31,271 chunks was used for training.
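
The exact chunking script is not included in this repository; the sketch below shows one way to reproduce the described 2048-token windows with a 200-token overlap using the tokenizer's stride mechanism:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

def chunk_document(text, max_length=2048, overlap=200):
    # The fast tokenizer returns one entry per overflowing window when
    # return_overflowing_tokens=True is combined with a stride.
    enc = tokenizer(
        text,
        max_length=max_length,
        stride=overlap,
        truncation=True,
        return_overflowing_tokens=True,
        add_special_tokens=False,
    )
    # Each element of enc["input_ids"] is one chunk of at most 2048 tokens.
    return [tokenizer.decode(ids) for ids in enc["input_ids"]]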

Training Procedure

  • Fine-tuning Type: Domain Adaptation (Causal LM objective).
  • Technique: QLoRA (4-bit NormalFloat Quantization + Low-Rank Adaptation) using the PEFT library.
  • Libraries: transformers, peft, accelerate, bitsandbytes, datasets.

Preprocessing

The cleaning steps described above (whitespace normalization, header/footer removal, etc.) and tokenizer-based chunking were applied. DataCollatorForLanguageModeling was used during training, as sketched below.
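
A minimal sketch of that collator setup (an assumption, since the training script itself is not published): with mlm=False the collator clones input_ids into labels and masks padding positions with -100 so they are ignored by the causal LM loss.

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# mlm=False -> causal language modeling: labels are a copy of input_ids,
# with pad positions set to -100.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)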

Training Hyperparameters

  • base_model: meta-llama/Meta-Llama-3-8B-Instruct
  • quantization: 4-bit (NF4, compute_dtype=bfloat16)
  • lora_r: 16
  • lora_alpha: 32
  • lora_dropout: 0.05
  • lora_target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]
  • learning_rate: 2e-4 (0.0002)
  • batch_size: 1
  • gradient_accumulation_steps: 8 (Effective batch size: 8)
  • optimizer: Paged AdamW (8-bit)
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.05
  • max_steps: 200 (Short trial run)
  • seq_length: 2048
  • precision: bf16 (mixed precision)
  • gradient_checkpointing: True
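
The list above corresponds roughly to the following LoraConfig and TrainingArguments (an approximate reconstruction, not the original training script; output_dir and logging_steps are placeholder values):

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="epdk-domain-adapter",   # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,      # effective batch size 8
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_steps=200,
    optim="paged_adamw_8bit",
    bf16=True,
    gradient_checkpointing=True,
    logging_steps=10,                   # placeholder
)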

Speeds, Sizes, Times

  • Training was performed on a single GPU in Kaggle's free tier (likely T4 or P100 - exact type not logged).
  • The 200-step training run took approximately 8.5 hours. Flash Attention 2 could not be used.
  • Final training loss: ~2.151
  • The saved LoRA adapter size is relatively small (typically tens or hundreds of MBs).

Evaluation

No formal evaluation metrics were calculated for this initial trial run (200 steps). The model's performance was only observed via the training loss. After more extensive training, standard language modeling metrics (like Perplexity) or performance on domain-specific downstream tasks could be evaluated.
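
For reference, a held-out perplexity check could later be run along these lines (a sketch that averages per-document loss rather than weighting by token count; no evaluation split exists yet):

import math
import torch

def perplexity(model, tokenizer, texts, max_length=2048):
    losses = []
    model.eval()
    for text in texts:
        enc = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=max_length).to(model.device)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        losses.append(out.loss.item())
    # Exponentiating the mean loss gives an approximate perplexity.
    return math.exp(sum(losses) / len(losses))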

Testing Data, Factors & Metrics

Testing Data

No held-out test set was used. As a qualitative check, the example prompt from the code above was run through the base model with this adapter applied. The response (reproduced verbatim in Turkish below) enumerates EPDK's regulatory, licensing, supervisory, and monitoring duties under the Electricity Market Law.

Prompt: EPDK'nın elektrik piyasasındaki temel görevleri nelerdir? ("What are EPDK's main duties in the electricity market?")
---------------------------------------------------------
--- Model Response ---
EPDK, Elektrik Piyasası Kanunu'nun 5 inci maddesine göre, elektrik piyasasının işletilmesi ve gelişimi için gerekli ortamı oluşturmak, piyasada rekabetin gelişmesi için gerekli düzenlemeleri yapmak, serbest tüketim ve üretim faaliyetlerine ilişkin usul ve esasları belirlemek, denetlemek ve lisanslama işlemlerini yapmak, görevleri arasındadır. EPDK'nın elektrik piyasasındaki temel görevleri şu şekilde özetlenebilir:
1) Elektrik piyasasının gelişimi için gerekli düzenlemeleri yapmak: EPDK, elektrik piyasasının gelişimi için gerekli düzenlemeleri yapmakla yükümlüdür. Bu kapsamda, EPDK tarafından hazırlanan düzenlemeler, Kurul tarafından onaylanmak suretiyle yürürlüğe girer.
2) Piyasa işletimine ilişkin düzenlemeleri yapmak: EPDK, elektrik piyasasının işletilmesi için gerekli düzenlemeleri yapmakla yükümlüdür. Bu kapsamda, EPDK tarafından hazırlanan düzenlemeler, Kurul tarafından onaylanmak suretiyle yürürlüğe girer.
3) Lisans alma ve lisanslama işlemlerini yapmak: EPDK, lisans alma ve lisanslama işlemlerini yapmakla yükümlüdür. EPDK, lisans alma ve lisanslama işlemlerini yapmak için gerekli işlemleri yapar, lisansları verir, tadil ve iptal işlemlerini gerçekleştirir.
4) Denetlemek: EPDK, lisans sahibi tüzel kişilerden, lisanslarına ilişkin yükümlülüklerini yerine getirmelerini sağlamak amacıyla gerekli denetimleri yapmakla yükümlüdür. EPDK, denetimleri yapmak için gerekli işlemleri yapar, denetim sonuçlarını raporlar ve Kurul tarafından onaylanmak suretiyle yayımlar.
5) Elektrik piyasasının işletilmesi için gerekli ortamı oluşturmak: EPDK, elektrik piyasasının işletilmesi için gerekli ortamı oluşturmakla yükümlüdür. Bu kapsamda, EPDK tarafından hazırlanan düzenlemeler, Kurul tarafından onaylanmak suretiyle yürürlüğe girer.
6) Piyasa işletimine ilişkin usul ve esasları belirlemek: EPDK, elektrik piyasasının işletilmesi için gerekli usul ve esasları belirlemekle yükümlüdür. Bu kapsamda, EPDK tarafından hazırlanan düzenlemeler, Kurul tarafından onaylanmak suretiyle yürürlüğe girer.
7) Serbest tüketim ve üretim faaliyetlerine ilişkin usul ve esasları belirlemek: EPDK, serbest tüketim ve üretim faaliyetlerine ilişkin usul ve esasları belirlemekle yükümlüdür. Bu kapsamda, EPDK tarafından hazırlanan düzenlemeler, Kurul tarafından onaylanmak suretiyle yürürlüğe girer.
8) Lisans sahibi tüzel kişilerin faaliyetlerini izlemek: EPDK, lisans sahibi tüzel kişilerin faaliyetlerini izlemekle yükümlüdür. Bu kapsamda, EPDK, lisans sahibi tüzel kişilerin faaliyetlerini izlemek için gerekli işlemleri yapar.
9) Lisans sahibi tüzel kişilerin faaliyetlerine ilişkin incelemeleri yapmak: EPDK, lisans sahibi tüzel kişilerin faaliyetlerine ilişkin incelemeleri yapmakla yükümlüdür. Bu kapsamda, EPDK, lisans sahibi tüzel kişilerin faaliyetlerine ilişkin incelemeleri yapmak için gerekli işlemleri yapar.
10) Lisans sahibi tüzel kişilerin faaliyetlerine ilişkin inceleme sonuçlarını raporlamak: EPDK, lisans sahibi tüzel kişilerin faaliyetlerine ilişkin inceleme sonuçlarını raporlamakla yüküml
--- End of Response ---

Summary

The short 200-step training demonstrated that the fine-tuning pipeline works, but was insufficient for significant domain adaptation. A slight decrease in training loss was observed.

Environmental Impact

  • Hardware Type: Kaggle GPU (Likely T4 or P100 tier)
  • Hours used: ~8.5 hours (for 200 steps)
  • Cloud Provider: Kaggle (using Google Cloud infrastructure)
  • Compute Region: Unknown (Managed by Kaggle)
  • Carbon Emitted: Can be estimated using the Machine Learning Impact calculator, but estimating accurately requires specific GPU power consumption data, which is difficult to obtain for Kaggle free tiers.

Technical Specifications

Model Architecture and Objective

The base model relies on the Llama 3 architecture (Transformer decoder-only). This adapter adds LoRA layers without changing the base model weights, trained with a Causal Language Modeling objective for domain adaptation.

Compute Infrastructure

Hardware

Kaggle Notebook environment with a single NVIDIA GPU (likely T4 or P100).

Software

  • PyTorch (torch==2.5.1 recommended based on Kaggle env)
  • Transformers (transformers)
  • PEFT (peft==0.15.2)
  • Accelerate (accelerate)
  • BitsAndBytes (bitsandbytes)
  • Datasets (datasets)
  • Python 3.10/3.11 (Kaggle default)

Citation

If you use this adapter, please cite it as below, along with the base Llama 3 model and the relevant PEFT/LoRA work:

@misc{ogulcanakca_llama3_epdk_adapter_2025,
  author       = {ogulcanakca},
  title        = {llama3-8b-epdk-domain-adapter-v1: A LoRA Adapter for Llama 3 8B Instruct Domain-Adapted to Turkish EPDK Legislation},
  year         = {2025},
  publisher    = {Hugging Face},
  version      = {v1},
  url          = {https://huggingface.co/ogulcanakca/llama3-8b-epdk-domain-adapter-v1}
}
