Commit e1022d8 (verified) by shimmyshimmer
Parent(s): 865b692

Update README.md

Files changed (1): README.md (+41, -2)
README.md CHANGED
@@ -5,20 +5,59 @@ language:
 - en
 library_name: transformers
 license: mit
-license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct-reasoning/resolve/main/LICENSE
+license_link: https://huggingface.co/microsoft/Phi-4-mini-reasoning/resolve/main/LICENSE
 pipeline_tag: text-generation
 tags:
 - nlp
 - unsloth
 - math
 - code
+- phi
+- phi4
 widget:
 - messages:
   - role: user
     content: How to solve 3*x^2+4*x+5=1?
 ---
 
-## Model Summary
+> [!NOTE]
+> This Phi-4-mini-reasoning upload also includes our [Phi-4 bug fixes](https://unsloth.ai/blog/phi4).
+>
+<div>
+<p style="margin-bottom: 0; margin-top: 0;">
+<strong>See <a href="https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa">our collection</a> for all versions of Phi-4, including GGUF, 4-bit, and 16-bit formats.</strong>
+</p>
+<p style="margin-top: 0; margin-bottom: 0;">
+<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy and outperforms other leading quants.</em>
+</p>
+<div style="display: flex; gap: 5px; align-items: center;">
+<a href="https://github.com/unslothai/unsloth/">
+<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+</a>
+<a href="https://discord.gg/unsloth">
+<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+</a>
+<a href="https://docs.unsloth.ai/">
+<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+</a>
+</div>
+<h1 style="margin-top: 0rem;">✨ Run & Fine-tune Phi-4 with Unsloth!</h1>
+</div>
+
+- Fine-tune Phi-4 (14B) for free using our [Google Colab notebook](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
+- Read our blog post on Phi-4 support, including our bug fixes: [unsloth.ai/blog/phi4](https://unsloth.ai/blog/phi4)
+- View the rest of our notebooks in our [docs](https://docs.unsloth.ai/get-started/unsloth-notebooks).
+- Run & export your fine-tuned model to Ollama, llama.cpp, or Hugging Face.
+
+| Unsloth supports | Free Notebooks | Performance | Memory use |
+|------------------|----------------|-------------|------------|
+| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
+| **Qwen3 (14B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 70% less |
+| **GRPO with Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 3x faster | 80% less |
+| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2x faster | 80% less |
+| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
+| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
+
+# Phi-4-mini-reasoning
 
 Phi-4-mini-reasoning is a lightweight open model built upon synthetic data with a focus on high-quality, reasoning-dense data, further fine-tuned for more advanced math reasoning capabilities.
 The model belongs to the Phi-4 model family and supports a 128K-token context length.
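
To try the model the updated README describes, here is a minimal inference sketch using the standard transformers chat API, with the widget's sample question as the prompt. The repo id `unsloth/Phi-4-mini-reasoning` is an assumption inferred from this upload's name; verify it against the actual model page.

```python
# Minimal inference sketch (assumed repo id; standard transformers chat API).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Phi-4-mini-reasoning"  # assumption: this upload's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit on a single GPU
    device_map="auto",
)

# The README's widget prompt, passed through the model's chat template.
messages = [{"role": "user", "content": "How to solve 3*x^2+4*x+5=1?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```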
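For reference, the widget's equation has a closed-form answer that a correct response should reach: it reduces to a quadratic with a negative discriminant, so the roots are complex.

```latex
3x^2 + 4x + 5 = 1 \;\Longrightarrow\; 3x^2 + 4x + 4 = 0,
\qquad
x = \frac{-4 \pm \sqrt{4^2 - 4 \cdot 3 \cdot 4}}{2 \cdot 3}
  = \frac{-4 \pm \sqrt{-32}}{6}
  = \frac{-2 \pm 2\sqrt{2}\,i}{3}.
```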
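The bullets and table in the diff point to Unsloth's fine-tuning notebooks. Below is a minimal sketch of the usual Unsloth QLoRA setup plus a GGUF export for Ollama/llama.cpp; the repo id, LoRA rank, and sequence length are illustrative assumptions, not values taken from this README.

```python
# Sketch of an Unsloth QLoRA setup; repo id and hyperparameters are
# illustrative assumptions, not values from this README.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-4-mini-reasoning",  # assumed repo id
    max_seq_length=2048,   # trimmed for training; the model itself supports 128K
    load_in_4bit=True,     # 4-bit quantization to cut memory use
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # LoRA rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# After training, export to GGUF for llama.cpp / Ollama
# (save_pretrained_gguf is a method Unsloth adds to the model).
model.save_pretrained_gguf("phi4-mini-reasoning-finetune", tokenizer,
                           quantization_method="q4_k_m")
```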