---
base_model: unsloth/gemma-3-12b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
datasets:
- reedmayhew/o4-mini-high-100x
---
# OpenAI o4 Mini High Distilled - Gemma 3 12B
## Overview
This model is a Gemma 3 12B variant distilled from OpenAI's o4 Mini High. It was fine-tuned to emulate o4 Mini High's depth and structured clarity, particularly on reasoning-heavy tasks such as problem-solving, coding, and mathematics.
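Below is a minimal inference sketch using the `transformers` text-generation pipeline. The repository id is a placeholder for this model's actual Hub id, and the prompt and generation settings are illustrative, not recommendations from the model author.

```python
# Minimal inference sketch. Assumptions: a recent transformers release with
# chat-style text-generation pipelines, and a placeholder repo id below
# (substitute the real one for this model).
import torch
from transformers import pipeline

model_id = "reedmayhew/o4-mini-high-distilled-gemma-3-12b"  # placeholder id

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Prove that the sum of two even integers is even."},
]
result = pipe(messages, max_new_tokens=512)

# The pipeline returns the full chat; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```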
## Technical Details
- **Developed by:** reedmayhew
- **Base Model:** google/gemma-3-12b-it
- **Training Speed Enhancement:** Trained 2x faster with Unsloth and Hugging Face's TRL library (see the training sketch below)
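The card does not publish the exact training recipe, but a hedged sketch of a typical Unsloth + TRL supervised fine-tuning setup on this base model and dataset might look like the following. LoRA rank, sequence length, batch sizes, and the assumption that the dataset already provides a trainable text field are illustrative, not the author's exact configuration.

```python
# Hedged SFT sketch (Unsloth + TRL). Hyperparameters, LoRA targets, and the
# assumed dataset format are illustrative assumptions, not the published recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

# Load the instruction-tuned Gemma 3 12B base in 4-bit for memory efficiency.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-12b-it",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("reedmayhew/o4-mini-high-100x", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```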
## Training Data
The model was trained on:
- reedmayhew/o4-mini-high-100x
This dataset consists of 100 high-quality o4 Mini High completions responding to deep questions, solving math problems, and writing or analyzing code. The aim was to distill o4 Mini's analytical approach and technical versatility into a smaller, accessible model.
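To inspect the dataset before training or evaluation, it can be loaded directly from the Hub; the single `train` split and field names shown here are assumptions for illustration.

```python
# Quick look at the distillation dataset (assumes a single "train" split).
from datasets import load_dataset

ds = load_dataset("reedmayhew/o4-mini-high-100x", split="train")
print(ds)      # row count and column names
print(ds[0])   # one prompt/completion pair
```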
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)