QuantFactory/Llama-3-8B-TKK-Elite-V1.0-GGUF

This is a quantized version of tarikkaankoc7/Llama-3-8B-TKK-Elite-V1.0, created using llama.cpp.

Model Description

Llama-3-TKK-8B-Elite-V1.0

Llama-3-TKK-8B-Elite-V1.0 is a generative model built on the LLaMA 8B architecture and represents my individual undergraduate graduation project, developed during my studies in Software Engineering at Malatya Turgut Özal University. I extend my sincere appreciation to Assoc. Prof. Dr. Harun Bingöl, who served as both my department chair and thesis advisor. His invaluable guidance, mentorship, and unwavering support have significantly shaped my academic and research experience, and I am deeply grateful for his continuous encouragement and insightful feedback. Thank you, Dr. Bingöl.

Model Details

Training took 133 hours and 59 minutes for a total of 37,420 steps and was conducted on 8 Tesla V100 GPUs.

  • Base Model: LLaMA 8B-based LLM
  • Model Developer: Tarık Kaan Koç
  • Thesis Advisor: Assoc. Prof. Dr. Harun Bingöl
  • Input: Text only
  • Output: Text only
  • Training Dataset: Cleaned Turkish raw data including 1 million Turkish instruction samples (private)
  • Training Method: Fine-tuning with LoRA

LoRA Fine-Tuning Configuration

  • lora_alpha: 16
  • lora_dropout: 0.1
  • r: 64
  • bias: none
  • task_type: CAUSAL_LM
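
For reference, the hyperparameters above map directly onto a LoraConfig from the peft library. The sketch below shows how such a configuration is typically declared; it is not the original training script, and in particular target_modules is an assumption (a common choice for Llama-style models) that this card does not document.

from peft import LoraConfig

# LoRA configuration mirroring the hyperparameters listed above.
# NOTE: target_modules is an assumed value; the card does not say
# which projection layers were adapted during training.
lora_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

Passing this config to peft's get_peft_model(model, lora_config) wraps the base model so that only the low-rank adapter weights (rank r=64, scaled by lora_alpha=16) are updated during fine-tuning.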

Example Usage:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline
import torch

model_id = "tarikkaankoc7/TKK-LLaMA3-8B-Elite-V1.0"

# Load the model in bfloat16 and spread it across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer)

text_generation_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    streamer=streamer
)

# System prompt: "You are a helpful AI assistant and you try to produce
# the best answer in line with the instructions the users give."
# User prompt: "Which is Leonardo da Vinci's most famous painting?"
messages = [
    {"role": "system", "content": "Sen yardımsever bir yapay zeka asistanısın ve kullanıcıların verdiği talimatlar doğrultusunda en iyi cevabı üretmeye çalışıyorsun."},
    {"role": "user", "content": "Leonardo da Vinci'nin en ünlü tablosu hangisidir?"}
]

# Build the prompt string with the model's chat template and append the
# assistant header so the model starts generating its reply.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop generation once the end-of-sequence token is emitted.
terminators = [
    tokenizer.eos_token_id
]

outputs = text_generation_pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)

print(outputs[0]["generated_text"])

Output:

Leonardo da Vinci'nin en ünlü tablosu Mona Lisa'dır. ("Leonardo da Vinci's most famous painting is the Mona Lisa.")
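
Since this repository hosts the GGUF quantizations of the model, it can also be run without transformers, for example via llama-cpp-python. The sketch below is illustrative and not part of the original card: the "*Q4_K_M.gguf" glob is an assumed filename pattern, so adjust it to whichever quantization file this repo actually provides.

from llama_cpp import Llama

# Fetch a quantized file from this repo and load it with llama.cpp.
# The filename glob is an assumption; pick the quantization level
# (3-bit ... 16-bit) that fits your hardware.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3-8B-TKK-Elite-V1.0-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,  # context window size
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Leonardo da Vinci'nin en ünlü tablosu hangisidir?"}
    ],
    max_tokens=256,
    temperature=0.6,
    top_p=0.95,
)

print(response["choices"][0]["message"]["content"])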
GGUF Details

  • Model size: 8.03B params
  • Architecture: llama
  • Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit
