Nexesenex committed on
Commit bb51fd2 · verified · 1 Parent(s): 9090279

Update README.md

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -10,6 +10,20 @@ library_name: transformers
  tags:
  - mergekit
  - merge
+ ---
+ # about
+
+ Original name: Llama_3.x_70b_Dolnemhertulimtess_v1.0
+
+ This model is essentially a Llama 3.1 "smart brick", meant to be used in second-level merges.
+
+ I finally abandoned the three-stage "smart merges" (like Smarteaz), because the merge-stock technique diluted the source models too much, even if the benchmarks and perplexity were good, and they were diluted even further in the level 4/5 merges I made afterwards. It was too much of a soup.
+
+ This time, for the base, I used a merge of Llama 3.0 Dolphin 2.9.1 and an abliterated Llama 3.3 Instruct, in order to get the capabilities of both models, and notably of Dolphin, which was never ported to Llama 3.1 or 3.3 70b by CognitiveComputations.
+
+ Then, I added the best instruction-oriented finetunes I know, simple as that.
+
+ The model is highly uncensored, quite intelligent, and can be used as a standalone.

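The recipe described above (a base merge, then instruction-oriented finetunes stacked on it with the merge-stock method) could be sketched as a mergekit `model_stock` config. This is a minimal illustration only: the actual recipe is not published in this card, and every model path below is a hypothetical placeholder.

```yaml
# Hypothetical mergekit sketch of the recipe described above.
# merge_method model_stock averages the finetunes around the base model;
# all model names here are placeholders, not the card's real sources.
models:
  - model: some-author/llama-3.x-70b-instruct-finetune-a   # placeholder
  - model: some-author/llama-3.x-70b-instruct-finetune-b   # placeholder
merge_method: model_stock
base_model: local/dolphin-2.9.1-llama-3.3-abliterated-merge  # placeholder base
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./output-dir`; the resulting folder can then itself serve as an input "brick" for a second-level merge, as the card describes.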
  ---
  # merge