adeelahmad committed
Commit 5987901 · verified · 1 Parent(s): fa53d21

Update README.md

Files changed (1)
  1. README.md +49 -27
README.md CHANGED
@@ -1,4 +1,19 @@
  ---
  base_model: mlx-community/Llama-3.2-3B-Instruct
  language:
  - en
@@ -12,14 +27,6 @@ language:
  library_name: transformers
  license: llama3.2
  pipeline_tag: text-generation
- tags:
- - facebook
- - meta
- - pytorch
- - llama
- - llama-3
- - mlx
- - mlx
  extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
  \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
  \ for use, reproduction, distribution and modification of the Llama Materials set\
@@ -207,31 +214,46 @@ extra_gated_description: The information you provide will be collected, stored,
  and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
  extra_gated_button_content: Submit
  ---

- # adeelahmad/ReasonableLlama3-3B-Jr

- The Model [adeelahmad/ReasonableLlama3-3B-Jr](https://huggingface.co/adeelahmad/ReasonableLlama3-3B-Jr) was
- converted to MLX format from [mlx-community/Llama-3.2-3B-Instruct](https://huggingface.co/mlx-community/Llama-3.2-3B-Instruct)
- using mlx-lm version **0.21.4**.

- ## Use with mlx

- ```bash
- pip install mlx-lm
- ```

- ```python
- from mlx_lm import load, generate

- model, tokenizer = load("adeelahmad/ReasonableLlama3-3B-Jr")

- prompt = "hello"

- if tokenizer.chat_template is not None:
-     messages = [{"role": "user", "content": prompt}]
-     prompt = tokenizer.apply_chat_template(
-         messages, add_generation_prompt=True
-     )

- response = generate(model, tokenizer, prompt=prompt, verbose=True)
- ```
 
  ---
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-3
+ - mlx
+ - reasoning
+ - deepseek
+ - ollama
+ - chain-of-thoughts
+ - small-llm
+ - edge
  base_model: mlx-community/Llama-3.2-3B-Instruct
  language:
  - en

  library_name: transformers
  license: llama3.2
  pipeline_tag: text-generation
  extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
  \ Release Date: September 25, 2024\n\n“Agreement” means the terms and conditions\
  \ for use, reproduction, distribution and modification of the Llama Materials set\

  and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
  extra_gated_button_content: Submit
  ---
+ # ReasonableLlama-3B: A Fine-Tuned Reasoning Model
+
+ HF: https://huggingface.co/adeelahmad/ReasonableLlama3-3B-Jr
+ Ollama: https://ollama.com/adeelahmad/ReasonableLLAMA-Jr-3b
+

+ Welcome to **ReasonableLlama-3B**, a reasoning model fine-tuned from Llama-3.2-3B-Instruct to strengthen logical thinking, problem-solving, and creative analysis.
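
The model is published both on the Hugging Face Hub and as an Ollama build (links above). A minimal sketch of chatting with the Ollama build, assuming the official `ollama` Python client is installed (`pip install ollama`) and the model has already been pulled with `ollama pull adeelahmad/ReasonableLLAMA-Jr-3b`:

```python
# Minimal usage sketch: chat with the Ollama build via the official `ollama`
# Python client. Assumes the model has been pulled locally with
# `ollama pull adeelahmad/ReasonableLLAMA-Jr-3b`.
import ollama

response = ollama.chat(
    model="adeelahmad/ReasonableLLAMA-Jr-3b",
    messages=[
        {
            "role": "user",
            "content": "A train covers 60 km in 45 minutes. "
                       "What is its average speed in km/h? Think step by step.",
        }
    ],
)

# Print the assistant's reasoning and answer.
print(response["message"]["content"])
```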

+ ## Overview
+ - **Model Name**: ReasonableLlama-3B
+ - **Base Model**: mlx-community/Llama-3.2-3B-Instruct (Llama 3.2, 3B parameters)
+ - **Purpose**: Designed for tasks requiring advanced reasoning, problem-solving, and creative thinking

+ ## Features
+ - **Advanced Reasoning**: Excels in logical analysis, problem-solving, and decision-making.
+ - **Creative Thinking**: Generates innovative solutions and ideas.
+ - **Curriculum-Based Fine-Tuning**: Trained on high-quality datasets to enhance reasoning abilities.

+ ## Technical Details
+ - **Parameter Count**: 3 billion (3B)
+ - **Training Process**: Fine-tuned using state-of-the-art techniques for reasoning tasks
+ - **Specialization**: Optimized for specific reasoning workflows and scenarios

+ ## Use Cases
+ - **Research**: Facilitates complex problem-solving and theoretical analysis.
+ - **Education**: Assists in creating educational examples and problem sets.
+ - **Problem Solving**: Helps generate innovative solutions across various domains.

+ ## Installation and Usage
+ - **Integration**: Can be integrated into existing systems via the Hugging Face Hub (MLX weights) or run locally through Ollama; see the sketch below.
+ - **Inputs**: Text prompts only; the base Llama-3.2-3B-Instruct model does not accept image input.
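
For the MLX weights on the Hugging Face Hub, a minimal local-inference sketch with `mlx-lm` (the same API shown in the `Use with mlx` snippet above; assumes `pip install mlx-lm` on Apple silicon):

```python
# Minimal sketch: load the MLX weights from the Hub and generate a response.
from mlx_lm import load, generate

model, tokenizer = load("adeelahmad/ReasonableLlama3-3B-Jr")

prompt = "Explain, step by step, why the sum of two odd numbers is always even."

# Wrap the prompt with the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```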

+ ## Limitations
+ - **Scope**: Limited to single-step reasoning; multi-hop reasoning is a current focus area.
+ - **Data Bias**: Treat dataset provenance with caution, as the training data may reflect historical biases.

+ ## Contributing
+ Contributions are welcome! Fork the project and submit issues and pull requests on GitHub; your insights can help shape future improvements.

+ ## Citations
+ - Special thanks to LLaMA's developers for providing a strong foundation.
+ - Acknowledgments to the community contributing to open-source AI advancements.