synk committed
Commit 6713bd3 · verified · 1 parent: 16de282

Update README.md

Files changed (1)
  1. README.md +3 -22
README.md CHANGED
@@ -5,6 +5,7 @@ datasets:
 - Sao10K/Claude-3-Opus-Instruct-15K
 - Sao10K/Short-Storygen-v2
 - Sao10K/c2-Logs-Filtered
+- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
 language:
 - en
 license: cc-by-nc-4.0
@@ -15,26 +16,6 @@ tags:
 
 # synk/8B-Stheno-Diverse-v3.2-MLX
 
-The Model [synk/8B-Stheno-Diverse-v3.2-MLX](https://huggingface.co/synk/8B-Stheno-Diverse-v3.2-MLX) was converted to MLX format from [synk/L3-8B-Stheno-v3.2-MLX](https://huggingface.co/synk/L3-8B-Stheno-v3.2-MLX) using mlx-lm version **0.19.3**.
+This model is [Stheno-v3.2 MLX](https://huggingface.co/synk/L3-8B-Stheno-v3.2-MLX) fine-tuned on data from Claude Sonnet in which only the most _diverse_ responses were kept: [Sonnet3.5-SlimOrcaDedupCleaned-20k](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k) by Gryphe.
 
-## Use with mlx
-
-```bash
-pip install mlx-lm
-```
-
-```python
-from mlx_lm import load, generate
-
-model, tokenizer = load("synk/8B-Stheno-Diverse-v3.2-MLX")
-
-prompt="hello"
-
-if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
-    messages = [{"role": "user", "content": prompt}]
-    prompt = tokenizer.apply_chat_template(
-        messages, tokenize=False, add_generation_prompt=True
-    )
-
-response = generate(model, tokenizer, prompt=prompt, verbose=True)
-```
+It was fine-tuned using LoRA with MLX for 600 iterations.
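
The commit drops the auto-generated mlx-lm usage section and replaces it with a short description of the fine-tune: LoRA with MLX for 600 iterations on Gryphe's Sonnet data. As a rough illustration only, here is a minimal sketch of such a run using mlx-lm's LoRA CLI; the data directory, adapter path, batch size, and fuse step are assumptions, and only the base model, dataset, and iteration count come from the README.

```bash
# Hypothetical reconstruction of the training run described in the README; not the author's actual commands.
pip install mlx-lm

# Train LoRA adapters for 600 iterations.
# ./data is assumed to contain train.jsonl / valid.jsonl prepared from
# Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k in an mlx-lm-compatible format.
mlx_lm.lora \
  --model synk/L3-8B-Stheno-v3.2-MLX \
  --train \
  --data ./data \
  --iters 600 \
  --batch-size 4 \
  --adapter-path ./adapters

# Fuse the adapters into the base weights to get a standalone MLX model.
mlx_lm.fuse \
  --model synk/L3-8B-Stheno-v3.2-MLX \
  --adapter-path ./adapters \
  --save-path ./8B-Stheno-Diverse-v3.2-MLX
```

The fused output would then correspond to the synk/8B-Stheno-Diverse-v3.2-MLX repository this README describes.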