---
license: llama2
---

<p align="center">
<img src="https://codeberg.org/aninokuma/DeydooAssistant/raw/branch/main/logo.webp" height="256px" alt="SynthIQ">
</p>

# SynthIQ

This is the sharded model for SynthIQ. I used [mergekit](https://github.com/cg123/mergekit) to merge the models.

**Caution**: This model has not been tested yet. If you find it useful, let me know in the discussions!
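For reference, a merge like this is typically driven by mergekit's `mergekit-yaml` command pointed at the config in this card. This is a sketch, not the exact invocation used here; the output path is illustrative and flag names may vary across mergekit versions:

```shell
# Install mergekit (flag availability may differ between releases)
pip install mergekit

# Save the YAML config from this card as config.yaml, then merge.
# The merged, sharded model is written to ./synthiq (hypothetical path).
mergekit-yaml config.yaml ./synthiq
```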
# YAML Config

```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
        layer_range: [0, 32]
      - model: uukuguy/speechless-mistral-six-in-one-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: bfloat16
```
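For intuition about `merge_method: slerp`: spherical linear interpolation blends each pair of tensors along the arc between them rather than along a straight line, and the `t` lists above assign different blend weights to different layer groups (with `0.5` as the fallback). Below is a minimal, self-contained sketch of the interpolation itself, independent of mergekit's actual implementation; at `t=0` one endpoint tensor is returned unchanged, at `t=1` the other:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors."""
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the vectors, clamped for safety.
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.0, [1.0, 0.0], [0.0, 1.0]))  # [1.0, 0.0] -> first endpoint
print(slerp(1.0, [1.0, 0.0], [0.0, 1.0]))  # [0.0, 1.0] -> second endpoint
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # halfway along the arc
```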
Prompt Template: ChatML
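ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers and leaves the prompt open after the final assistant tag so the model continues from there. A small helper showing the expected layout (the system and user strings are just examples):

```python
def format_chatml(system: str, user: str) -> str:
    """Build a single-turn ChatML prompt; generation continues
    after the trailing assistant tag."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(format_chatml("You are a helpful assistant.", "What is SLERP?"))
```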
GGUF Files:

- [Q4_K_M](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q4_K_M.gguf)
- [Q6_K](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q6_K.gguf)
- [Q8_0](https://huggingface.co/sethuiyer/SynthIQ_GGUF/blob/main/synthiq.Q8.gguf)
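One way to try a GGUF file locally is with llama.cpp, using the ChatML template above. This is a sketch assuming you have built llama.cpp and have `huggingface-cli` available; binary and flag names differ between llama.cpp versions:

```shell
# Fetch one quantization from the GGUF repo
huggingface-cli download sethuiyer/SynthIQ_GGUF synthiq.Q4_K_M.gguf --local-dir .

# Run a ChatML-formatted prompt (llama.cpp example binary)
./main -m synthiq.Q4_K_M.gguf \
  -p "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n" \
  -n 128
```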
The license is the Llama 2 license, since uukuguy/speechless-mistral-six-in-one-7b is licensed under Llama 2.