Uploaded model
Training datasets:

```python
from datasets import load_dataset

dataset = load_dataset("NobodyExistsOnTheInternet/toxicqa", split="train")
dataset2 = load_dataset("Nitral-AI/Discover-Instruct-6k-Distilled-R1-70b-ShareGPT-Think-Tags", split="train")
dataset3 = load_dataset("Nitral-Archive/RP_Alignment-ShareGPT", split="train")
dataset4 = load_dataset("mlfoundations-dev/s1K-with-deepseek-r1-sharegpt", split="train")
```
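The corpora above are largely ShareGPT-style conversation datasets. A minimal sketch of merging them into one training set, assuming each exposes a `conversations` column (actual column names and turn formats vary per dataset and should be checked before training):

```python
from datasets import concatenate_datasets

def keep_conversations(ds):
    # Drop any extra columns so the four schemas line up before
    # concatenation (the "conversations" column name is an assumption).
    return ds.remove_columns([c for c in ds.column_names if c != "conversations"])

combined = concatenate_datasets(
    [keep_conversations(d) for d in (dataset, dataset2, dataset3, dataset4)]
).shuffle(seed=42)
```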
- Developed by: bunnycore
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
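A minimal sketch of that kind of Unsloth + TRL LoRA run, assuming the `FastModel`/`SFTTrainer` workflow from recent Unsloth releases; ranks, batch sizes, and hyperparameters below are illustrative rather than the settings used for this upload, and some keyword names differ across TRL versions:

```python
from unsloth import FastModel
from trl import SFTTrainer, SFTConfig

# Load the 4-bit base checkpoint this adapter was finetuned from.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters to the language layers only (rank/alpha are illustrative).
model = FastModel.get_peft_model(
    model,
    finetune_vision_layers=False,
    r=16,
    lora_alpha=16,
)

def to_text(example):
    # Render ShareGPT-style turns ("from"/"value" keys are an assumption)
    # into a single string with the model's chat template.
    messages = [
        {"role": "user" if turn["from"] in ("human", "user") else "assistant",
         "content": turn["value"]}
        for turn in example["conversations"]
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

train_ds = combined.map(to_text)  # `combined` from the dataset sketch above

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```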
Model tree for bunnycore/Gemma-3-4b-toxic-r1-lora:
- Base model: google/gemma-3-4b-pt
- Finetuned: google/gemma-3-4b-it
- Quantized: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
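To run the adapter locally, it can be loaded on top of the instruct base model with PEFT. A minimal sketch, assuming a transformers release with Gemma 3 support and that the adapter applies cleanly to the full-precision google/gemma-3-4b-it checkpoint (it was trained against the Unsloth 4-bit quant):

```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, Gemma3ForConditionalGeneration

# Load the instruct base model and attach the LoRA adapter.
base = Gemma3ForConditionalGeneration.from_pretrained(
    "google/gemma-3-4b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "bunnycore/Gemma-3-4b-toxic-r1-lora")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")

messages = [{"role": "user", "content": "Explain what a LoRA adapter is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```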