|
--- |
|
base_model: unsloth/gemma-3-12b-it |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- gemma3 |
|
license: apache-2.0 |
|
language: |
|
- en |
|
datasets: |
|
- reedmayhew/o4-mini-high-100x |
|
--- |
|
|
|
# OpenAI o4 Mini High Distilled - Gemma 3 12B
|
|
|
## Overview |
|
This model is a Gemma 3 12B variant distilled from OpenAI's o4 Mini High. It was fine-tuned to emulate o4 Mini High's depth and structured clarity, particularly on tasks that call for careful reasoning, such as problem-solving, coding, and mathematics.
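Below is a minimal inference sketch using the Transformers `pipeline` API. The repo id `reedmayhew/o4-mini-high-distilled-gemma-3-12b` is a placeholder (this card does not state the exact repository path), and loading details may differ depending on how the checkpoint was exported, since Gemma 3 checkpoints can ship as either text-only or multimodal models.

```python
# Minimal text-only chat sketch; the repo id below is a placeholder,
# not necessarily this model's real Hugging Face path.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="reedmayhew/o4-mini-high-distilled-gemma-3-12b",  # placeholder repo id
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [
    {"role": "user",
     "content": "Walk me through a proof that the square root of 2 is irrational."},
]

# The pipeline applies the Gemma 3 chat template automatically when given
# a list of chat messages instead of a raw string.
result = generator(messages, max_new_tokens=512)
print(result[0]["generated_text"][-1]["content"])
```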
|
|
|
## Technical Details |
|
- **Developed by:** reedmayhew |
|
- **Base Model:** unsloth/gemma-3-12b-it (Unsloth's upload of google/gemma-3-12b-it)
|
- **Training Speed Enhancement:** Trained 2x faster with Unsloth and Hugging Face's TRL library (see the training sketch below)
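For context, here is a rough sketch of what an Unsloth + TRL supervised fine-tuning run of this kind looks like. The hyperparameters, LoRA settings, and dataset handling are illustrative assumptions, not the exact recipe used for this model; in particular, converting the dataset rows into chat-templated text is assumed to happen in a separate preprocessing step.

```python
# Illustrative Unsloth + TRL SFT setup; hyperparameters and dataset handling
# are assumptions, not the exact configuration used for this model.
from unsloth import FastModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Load the base model in 4-bit and attach LoRA adapters via Unsloth.
model, tokenizer = FastModel.from_pretrained(
    "unsloth/gemma-3-12b-it",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastModel.get_peft_model(model, r=16, lora_alpha=16)

# The distillation dataset; producing a chat-templated "text" column from its
# raw rows is assumed to have been done beforehand.
dataset = load_dataset("reedmayhew/o4-mini-high-100x", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions use processing_class= instead
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```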
|
|
|
## Training Data |
|
The model was trained on: |
|
- reedmayhew/o4-mini-high-100x |
|
|
|
This dataset consists of 100 high-quality o4 Mini High completions that answer in-depth questions, solve math problems, and write or analyze code. The aim was to distill o4 Mini High's analytical approach and technical versatility into a smaller, more accessible model.
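The dataset can be inspected directly with the `datasets` library; the split name and column layout shown here are assumptions, since this card does not describe the dataset's schema.

```python
# Peek at the distillation data; the split name and columns are assumptions.
from datasets import load_dataset

ds = load_dataset("reedmayhew/o4-mini-high-100x", split="train")
print(ds)     # row count and column names
print(ds[0])  # one o4 Mini High completion
```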
|
|
|
This Gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|