Add files using upload-large-folder tool

- .gitattributes +6 -0
- README.md +29 -63
- config.json +33 -0
- phi-4-reasoning-UD-IQ1_M.gguf +3 -0
- phi-4-reasoning-UD-IQ1_S.gguf +3 -0
- phi-4-reasoning-UD-IQ2_M.gguf +3 -0
- phi-4-reasoning-UD-IQ3_XXS.gguf +3 -0
- phi-4-reasoning-UD-Q2_K_XL.gguf +3 -0
- phi-4-reasoning-UD-Q4_K_XL.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+phi-4-reasoning-UD-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+phi-4-reasoning-UD-IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+phi-4-reasoning-UD-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+phi-4-reasoning-UD-Q2_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+phi-4-reasoning-UD-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+phi-4-reasoning-UD-Q4_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
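The patterns above decide which paths Git LFS intercepts; the six new entries pin each uploaded GGUF by exact filename. As a rough illustration (not part of the commit), shell-style matching via Python's `fnmatch` approximates Git's attribute matching closely enough for these simple patterns to check which of the added files end up as LFS pointers:

```python
# Illustrative only: which of the newly added files fall under the LFS rules above?
# Assumes git's attribute globs behave like shell globs for these simple patterns.
from fnmatch import fnmatch

lfs_patterns = [
    "*.zip", "*.zst", "*tfevents*",
    "phi-4-reasoning-UD-IQ1_S.gguf", "phi-4-reasoning-UD-IQ1_M.gguf",
    "phi-4-reasoning-UD-IQ2_M.gguf", "phi-4-reasoning-UD-Q2_K_XL.gguf",
    "phi-4-reasoning-UD-IQ3_XXS.gguf", "phi-4-reasoning-UD-Q4_K_XL.gguf",
]
new_files = ["config.json", "phi-4-reasoning-UD-Q4_K_XL.gguf"]

for name in new_files:
    tracked = any(fnmatch(name, pattern) for pattern in lfs_patterns)
    print(name, "-> LFS pointer" if tracked else "-> regular git object")
```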
README.md
CHANGED
@@ -1,63 +1,31 @@
---
-base_model:
-- microsoft/Phi-4-reasoning
-language:
-- en
-library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning/resolve/main/LICENSE
+language:
+- en
+base_model:
+- microsoft/phi-4-reasoning
pipeline_tag: text-generation
tags:
--
+- phi
- unsloth
+- nlp
- math
- code
--
--
+- chat
+- conversational
+- reasoning
+inference:
+  parameters:
+    temperature: 0
widget:
- messages:
  - role: user
-    content:
+    content: What is the derivative of x^2?
+library_name: transformers
---
-
-
->
-<div>
-<p style="margin-bottom: 0; margin-top: 0;">
-<strong>See <a href="https://huggingface.co/collections/unsloth/phi-4-all-versions-677eecf93784e61afe762afa">our collection</a> for all versions of Phi-4 including GGUF, 4-bit & 16-bit formats.</strong>
-</p>
-<p style="margin-top: 0;margin-bottom: 0;">
-<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
-</p>
-<div style="display: flex; gap: 5px; align-items: center; ">
-<a href="https://github.com/unslothai/unsloth/">
-<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
-</a>
-<a href="https://discord.gg/unsloth">
-<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
-</a>
-<a href="https://docs.unsloth.ai/">
-<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
-</a>
-</div>
-<h1 style="margin-top: 0rem;">✨ Run & Fine-tune Phi-4 with Unsloth!</h1>
-</div>
-
-- Fine-tune Phi-4 (14B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
-- Read our Blog about Phi-4 support with our bug fixes: [unsloth.ai/blog/phi4](https://unsloth.ai/blog/phi4)
-- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
-- Run & export your fine-tuned model to Ollama, llama.cpp or HF.
-
-| Unsloth supports | Free Notebooks | Performance | Memory use |
-|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
-| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
-| **Qwen3 (14B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 70% less |
-| **GRPO with Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4_(14B)-GRPO.ipynb) | 3x faster | 80% less |
-| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2x faster | 80% less |
-| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
-| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
-
-# Phi-4-reasoning-plus
+
+# Phi-4-reasoning Model Card

[Phi-4-reasoning Technical Report](https://aka.ms/phi-reasoning/techreport)

@@ -66,7 +34,7 @@ widget:
| | |
|-------------------------|-------------------------------------------------------------------------------|
| **Developers** | Microsoft Research |
-| **Description** | Phi-4-reasoning
+| **Description** | Phi-4-reasoning is a state-of-the-art open-weight reasoning model finetuned from Phi-4 using supervised fine-tuning on a dataset of chain-of-thought traces and reinforcement learning. The supervised fine-tuning dataset includes a blend of synthetic prompts and high-quality filtered data from public domain websites, focused on math, science, and coding skills as well as alignment data for safety and Responsible AI. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning. |
| **Architecture** | Base model same as previously released Phi-4, 14B parameters, dense decoder-only Transformer model |
| **Inputs** | Text, best suited for prompts in the chat format |
| **Context length** | 32k tokens |
@@ -76,7 +44,7 @@ widget:
| **Outputs** | Generated text in response to the input. Model responses have two sections, namely, a reasoning chain-of-thought block followed by a summarization block |
| **Dates** | January 2025 – April 2025 |
| **Status** | Static model trained on an offline dataset with cutoff dates of March 2025 and earlier for publicly available data |
-| **Release date** | April 30, 2025
+| **Release date** | April 30, 2025 |
| **License** | MIT |

## Intended Use
@@ -94,7 +62,7 @@ Our training data is a mixture of Q&A, chat format data in math, science, and co

### Benchmark Datasets

-We evaluated Phi-4-reasoning
+We evaluated Phi-4-reasoning using the open-source [Eureka](https://github.com/microsoft/eureka-ml-insights) evaluation suite and our own internal benchmarks to understand the model's capabilities. More specifically, we evaluate our model on:

Reasoning tasks:

@@ -130,11 +98,11 @@ General-purpose benchmarks:

### Approach

-Phi-4-reasoning
+Phi-4-reasoning has adopted a robust safety post-training approach via supervised fine-tuning (SFT). This approach leverages a variety of both open-source and in-house generated synthetic prompts, with LLM-generated responses that adhere to rigorous Microsoft safety guidelines, e.g., User Understanding and Clarity, Security and Ethical Guidelines, Limitations, Disclaimers and Knowledge Scope, Handling Complex and Sensitive Topics, Safety and Respectful Engagement, Confidentiality of Guidelines and Confidentiality of Chain-of-Thoughts.

### Safety Evaluation and Red-Teaming

-Prior to release, Phi-4-reasoning
+Prior to release, Phi-4-reasoning followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, we collaborated with the independent AI Red Team (AIRT) at Microsoft to assess safety risks posed by Phi-4-reasoning in both average and adversarial user scenarios. In the average user scenario, AIRT emulated typical single-turn and multi-turn interactions to identify potentially risky behaviors. The adversarial user scenario tested a wide range of techniques aimed at intentionally subverting the model's safety training including grounded-ness, jailbreaks, harmful content like hate and unfairness, violence, sexual content, or self-harm, and copyright violations for protected material. We further evaluate models on Toxigen, a benchmark designed to measure bias and toxicity targeted towards minority groups.

Please refer to the technical report for more details on safety alignment.

@@ -168,7 +136,7 @@ At the high-level overview of the model quality on representative benchmarks. Fo
| Toxigen Discriminative<br><small>Toxic category<br>Neutral category</small> | <br>72.6<br>90.0 | <br>86.7<br>84.7 | <br>77.3<br>90.5 | <br>85.4<br>88.7 | <br>87.6<br>85.1 |
| PhiBench 2.21 | 58.2 | 70.6 | 74.2 | 78.0| 72.4 |

-Overall, Phi-4-reasoning
+Overall, Phi-4-reasoning, with only 14B parameters, performs well across a wide range of reasoning tasks, outperforming significantly larger open-weight models such as DeepSeek-R1 distilled 70B model and approaching the performance levels of full DeepSeek R1 model. We also test the models on multiple new reasoning benchmarks for algorithmic problem solving and planning, including 3SAT, TSP, and BA-Calendar. These new tasks are nominally out-of-domain for the models as the training process did not intentionally target these skills, but the models still show strong generalization to these tasks. Furthermore, when evaluating performance against standard general abilities benchmarks such as instruction following or non-reasoning tasks, we find that our new models improve significantly from Phi-4, despite the post-training being focused on reasoning skills in specific domains.

## Usage

@@ -176,8 +144,6 @@ Overall, Phi-4-reasoning and Phi-4-reasoning-plus, with only 14B parameters, per

Inference is better with `temperature=0.8`, `top_p=0.95`, and `do_sample=True`. For more complex queries, set the maximum number of tokens to 32k to allow for longer chain-of-thought (CoT).

-*Phi-4-reasoning-plus has shown strong performance on reasoning-intensive tasks. In our experiments, we extended its maximum number of tokens to 64k, and it handled longer sequences with promising results, maintaining coherence and logical consistency over extended inputs. This makes it a compelling option to explore for tasks that require deep, multi-step reasoning or extensive context.*
-
### Input Formats

Given the nature of the training data, always use ChatML template with the following system prompt for inference:
@@ -195,8 +161,8 @@ What is the derivative of x^2?<|im_end|>
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

-tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning
-model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-reasoning
+tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning")
+model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-reasoning", device_map="auto", torch_dtype="auto")

messages = [
{"role": "system", "content": "You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:"},
@@ -217,16 +183,16 @@ print(tokenizer.decode(outputs[0]))
### With `vllm`

```bash
-vllm serve microsoft/Phi-4-reasoning
+vllm serve microsoft/Phi-4-reasoning --enable-reasoning --reasoning-parser deepseek_r1
```

-*Phi-4-reasoning
+*Phi-4-reasoning is also supported out-of-the-box by Ollama, llama.cpp, and any Phi-4 compatible framework.*

## Responsible AI Considerations

-Like other language models, Phi-4-reasoning
+Like other language models, Phi-4-reasoning can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

-* **Quality of Service:** The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. Phi-4-reasoning
+* **Quality of Service:** The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. Phi-4-reasoning is not intended to support multilingual use.

* **Representation of Harms & Perpetuation of Stereotypes:** These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.

@@ -236,7 +202,7 @@ Like other language models, Phi-4-reasoning-plus can potentially behave in ways

* **Election Information Reliability:** The model has an elevated defect rate when responding to election-critical queries, which may result in incorrect or unauthoritative election critical information being presented. We are working to improve the model's performance in this area. Users should verify information related to elections with the election authority in their region.

-* **Limited Scope for Code:** Majority of Phi-4-reasoning
+* **Limited Scope for Code:** Majority of Phi-4-reasoning training data is based in Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) that have advanced guardrails is highly recommended. Important areas for consideration include:

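For reference alongside the updated card: a minimal sketch of the sampling settings it recommends (`temperature=0.8`, `top_p=0.95`, `do_sample=True`, up to 32k new tokens), assuming the `microsoft/Phi-4-reasoning` checkpoint fits in available memory. The card's full Phi system prompt is elided here and should be supplied as a `system` message in practice.

```python
# Sketch of the card's recommended inference settings, not the official example.
# device_map="auto" requires the accelerate package and enough GPU/CPU memory
# for the 14B weights; the ChatML system prompt from the card is omitted for brevity.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    # add the card's full Phi system prompt here as {"role": "system", ...}
    {"role": "user", "content": "What is the derivative of x^2?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    max_new_tokens=32768,  # room for a long chain-of-thought, per the card
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```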
config.json
ADDED
@@ -0,0 +1,33 @@
+{
+  "architectures": [
+    "Phi3ForCausalLM"
+  ],
+  "attention_bias": false,
+  "attention_dropout": 0.0,
+  "bos_token_id": 100257,
+  "embd_pdrop": 0.0,
+  "eos_token_id": 100265,
+  "hidden_act": "silu",
+  "hidden_size": 5120,
+  "initializer_range": 0.02,
+  "intermediate_size": 17920,
+  "max_position_embeddings": 32768,
+  "model_type": "phi3",
+  "num_attention_heads": 40,
+  "num_hidden_layers": 40,
+  "num_key_value_heads": 10,
+  "original_max_position_embeddings": 32768,
+  "pad_token_id": 100349,
+  "partial_rotary_factor": 1.0,
+  "resid_pdrop": 0.0,
+  "rms_norm_eps": 1e-05,
+  "rope_scaling": null,
+  "rope_theta": 500000,
+  "sliding_window": null,
+  "tie_word_embeddings": false,
+  "torch_dtype": "bfloat16",
+  "transformers_version": "4.52.0.dev0",
+  "unsloth_fixed": true,
+  "use_cache": true,
+  "vocab_size": 100352
+}
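The added `config.json` pins the architecture the GGUF conversions came from: `Phi3ForCausalLM`, 40 layers, hidden size 5120 with 40 attention heads (so a 128-dimensional head), 10 KV heads (grouped-query attention), and a 32,768-token context. A small sketch that reads a local copy of the file and prints these derived values; the local path is an assumption:

```python
# Quick sanity check of the added config.json (assumes it has been downloaded
# to the current directory). Prints the shape-related fields and the derived
# per-head dimension: 5120 / 40 = 128.
import json

with open("config.json") as f:
    cfg = json.load(f)

head_dim = cfg["hidden_size"] // cfg["num_attention_heads"]
print(cfg["architectures"][0])                              # Phi3ForCausalLM
print("layers:", cfg["num_hidden_layers"])                  # 40
print("hidden size:", cfg["hidden_size"])                   # 5120
print("attention heads:", cfg["num_attention_heads"], "-> head dim", head_dim)
print("KV heads (GQA):", cfg["num_key_value_heads"])        # 10
print("context length:", cfg["max_position_embeddings"])    # 32768
```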
phi-4-reasoning-UD-IQ1_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f55fb045c9ba7336f13749df541cf7362c237e28f2a6ca9f6d37b46f8e8bdd34
+size 3831086368
phi-4-reasoning-UD-IQ1_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9720b37a70eb0de816ba73dfaf0ac4ca4bca46c4987f93fcbcedc434da9ae86d
+size 3580206368
phi-4-reasoning-UD-IQ2_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23f3309c64eae62d2df541aa0607380386f82a095aca9123bad7fd813b7a5efa
+size 5257620768
phi-4-reasoning-UD-IQ3_XXS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1cd1b09c6af6042a39cbaf735b2eb8e0d1a8a29d5a7612f79a701d6c8d437bd0
+size 5890862368
phi-4-reasoning-UD-Q2_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cd7eee72f43f2463b739edef73aff39f0b00f9344d3c37ebd3d3249c96d0ed4d
+size 5803453728
phi-4-reasoning-UD-Q4_K_XL.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f22b7b6d1b663c2074bea4ee2fbbd635e4918c873b0018cea073e430e7539427
+size 8947338528
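Each `.gguf` entry above is committed as a Git LFS pointer (`version`, `oid`, `size`) rather than the weights themselves; the `size` field is the real download, e.g. 8,947,338,528 bytes ≈ 8.3 GiB for the Q4_K_XL file. A minimal sketch that parses such a pointer file as it exists before `git lfs pull` replaces it with the binary payload; the filename is just one of the entries above:

```python
# Parses a Git LFS pointer file: three "key value" lines (version, oid, size).
# Only valid on the pointer form shown above; after `git lfs pull` the path
# holds the multi-GB GGUF binary instead and this text parse no longer applies.
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = read_lfs_pointer("phi-4-reasoning-UD-Q4_K_XL.gguf")
print(ptr["oid"])                                          # sha256:f22b7b6d...
print(f'{int(ptr["size"]) / 2**30:.1f} GiB to download')   # ~8.3 GiB
```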