OpenAI o4 Mini High Distilled - Gemma 3 12B

Overview

This model is a Gemma 3 12B variant distilled from responses generated by OpenAI's o4 Mini High. It was fine-tuned to emulate o4 Mini High's depth and structured clarity, particularly on tasks that require complex reasoning, such as problem-solving, coding, and mathematics.
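
A quick way to try the model is through the Hugging Face Transformers chat-template API. The snippet below is a minimal sketch, assuming the checkpoint loads through AutoModelForCausalLM like the base gemma-3-12b-it model; adjust the dtype and device settings to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reedmayhew/o4-mini-high-gemma3-12B-distilled"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 fits your GPU; use float16 or quantized loading otherwise
    device_map="auto",
)

# Build a chat-formatted prompt with Gemma's chat template.
messages = [
    {"role": "user", "content": "Walk me through solving x^2 - 5x + 6 = 0 step by step."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens before decoding.
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```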

Technical Details

  • Developed by: reedmayhew
  • Base Model: google/gemma-3-12b-it
  • Training Speed Enhancement: Trained 2x faster with Unsloth and Hugging Face's TRL library

Training Data

The model was trained on:

  • reedmayhew/o4-mini-high-100x

This dataset consists of 100 high-quality o4 Mini High completions that answer deep questions, solve math problems, and write or analyze code. The aim was to distill o4 Mini High's analytical approach and technical versatility into a smaller, more accessible model.

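For reference, the snippet below sketches how a distillation-style fine-tune of gemma-3-12b-it on this dataset could be set up with Unsloth and TRL. It is not the author's actual training script: the LoRA settings, hyperparameters, and dataset handling are illustrative assumptions, and the dataset's column layout should be checked before running.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastLanguageModel

# Load the base model in 4-bit with Unsloth and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-3-12b-it",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumption: LoRA rank is not disclosed on the card
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# The distillation dataset of o4 Mini High completions.
dataset = load_dataset("reedmayhew/o4-mini-high-100x", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,        # newer TRL releases name this `processing_class`
    train_dataset=dataset,      # assumes a chat-style "messages" or plain "text" column
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,     # assumption: epoch count is not disclosed
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```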

Model Files

  • Format: GGUF (8-bit quantization)
  • Model size: 11.8B params
  • Architecture: gemma3
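
The 8-bit GGUF export can also be run locally, for example with llama-cpp-python. The snippet below is a sketch; the filename glob is an assumption, so substitute the exact GGUF file listed under the repository's files.

```python
from llama_cpp import Llama

# Download and load the quantized GGUF directly from the Hub.
llm = Llama.from_pretrained(
    repo_id="reedmayhew/o4-mini-high-gemma3-12B-distilled",
    filename="*Q8_0.gguf",  # assumption: pattern matching the 8-bit GGUF file
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the time complexity of binary search."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```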

Model tree for reedmayhew/o4-mini-high-gemma3-12B-distilled

  • Quantized versions of this model: 13

Dataset used to train reedmayhew/o4-mini-high-gemma3-12B-distilled

  • reedmayhew/o4-mini-high-100x