ymcki committed
Commit e6f82b9 · verified · 1 Parent(s): eeefb65

Upload 8 files

Files changed (1)
  1. README.md +17 -5
README.md CHANGED
@@ -31,18 +31,30 @@ Original model: https://huggingface.co/google/gemma-2-2b-jpn-it

Note that this model does not support a System prompt.

- This is abliterated model of [`google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using the
+ This is an abliterated model of [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) using the
[method](https://medium.com/@mlabonne/uncensor-any-llm-with-abliteration-d30148b7d43e)
described by mlabonne.

- Layer 18 of the original model was chosen for abliteration.
- I also created another layer 17 abliterated model for comparison.
+ Layer 17 of the original model was chosen for abliteration.
+ I also created another layer 18 abliterated model for comparison.

It is uploaded here to be evaluated by the LLM Leaderboard to see how brain damaged it
is compared to the original model.

ORPO fine tuning is currently underway to see if it can regain its sanity. You can play with this model first or wait until I am done with the fine tuning.

+ ## Benchmark (100.0 * raw scores only)
+
+ Click on a model name to go to the raw-score JSON generated by the Open LLM Leaderboard.
+
+ | Model | Average | IFEval | BBH | Math Lv5 | GPQA | MUSR | MMLU-PRO |
+ | ----- | ------- | ------ | --- | -------- | ---- | ---- | -------- |
+ | [gemma-2-2b-jpn-it](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/google/gemma-2-2b-jpn-it/results_2024-10-15T15-21-39.173019.json) | 30.82 | 54.11 | 41.43 | 0.0 | 27.52 | 37.17 | 24.67 |
+ | [gemma-2-2b-jpn-it-abliterated-17](https://huggingface.co/datasets/open-llm-leaderboard/results/raw/main/ymcki/gemma-2-2b-jpn-it-abliterated-17/results_2024-10-16T07-58-03.781979.json) | 16.74 | 0.0 | 29.13 | 0.0 | 25.92 | 33.73 | 11.68 |
+ | gemma-2-2b-jpn-it-abliterated-18 | TBD | TBD | TBD | TBD | TBD | TBD | TBD |
+
+ Indeed, it is quite dumbed down relative to the original.
+
## How to run this model

```py
@@ -50,7 +62,7 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

- model_id = "gemma-2-2b-jpn-it-abliterated-18"
+ model_id = "gemma-2-2b-jpn-it-abliterated-17"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
@@ -76,7 +88,7 @@ pip install -U "huggingface_hub[cli]"
Then, you can target the specific file you want:

```
- huggingface-cli download ymcki/gemma-2-2b-jpn-it-abliterated-18 --include "*" --local-dir ./
+ huggingface-cli download ymcki/gemma-2-2b-jpn-it-abliterated-17 --include "*" --local-dir ./
```

## Credits
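
The README links to mlabonne's abliteration write-up but shows no code for it. As a rough illustration of what "Layer 17 of the original model was chosen for abliteration" refers to, the sketch below shows the two core steps described there: estimating a "refusal direction" from hidden states at the chosen layer, and projecting that direction out of a weight matrix. This is only a minimal sketch, not the author's actual script; the function names, tensor shapes, and variable names are illustrative assumptions.

```py
import torch

# Minimal sketch of the two core steps of abliteration.
# Names and shapes are illustrative assumptions, not taken from this repository.

def refusal_direction(harmful_acts: torch.Tensor,
                      harmless_acts: torch.Tensor) -> torch.Tensor:
    """Unit vector pointing from the mean 'harmless' activation to the mean
    'harmful' activation.

    Both inputs are (n_prompts, d_model) hidden states collected at the
    chosen layer (layer 17 for this model).
    """
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component along `direction` from a weight matrix whose
    output lives in the residual stream.

    weight: (d_model, d_in), direction: (d_model,)
    """
    return weight - torch.outer(direction, direction) @ weight
```

In the linked write-up, this projection is applied to every matrix that writes into the residual stream (token embeddings, attention output projections, and MLP down-projections), using a direction measured at a single chosen layer.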
 
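The diff hunks above cut the README's usage snippet off after the `tokenizer = AutoTokenizer.from_pretrained(model_id)` line. A typical continuation with the transformers API would look roughly like the sketch below. Only `model_id`, `dtype`, and the tokenizer line come from the diff; the `device_map`, chat-template call, example prompt, and generation settings are illustrative assumptions, not the README's exact code. Note that, as the README states, the model does not support a System prompt.

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gemma-2-2b-jpn-it-abliterated-17"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype, device_map="auto")

# No system role: the gemma-2 chat template only accepts user/assistant turns.
# Prompt means "Write me a poem about machine learning." (Japanese-tuned model).
messages = [{"role": "user", "content": "マシーンラーニングについての詩を書いてください。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```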