morriszms committed on
Commit 6a08f9e · verified · 1 Parent(s): 99e72e1

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ L3-8B-Everything-COT-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
L3-8B-Everything-COT-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d53cb6e347dda0b260a550d7110efaff05395c692411ff322646a63473ad709
+ size 3179131392
L3-8B-Everything-COT-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e2dbf9de1735e6267f4819a54160df79d8ed0eced235aa6eb29fb8ca52c71ae2
+ size 4321956352
L3-8B-Everything-COT-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4e134a732b14263f51bd0e3be2c1c3b20e03bafb3d19844cd5e16f6b9c639c27
+ size 4018917888
L3-8B-Everything-COT-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18e8b1d5086ec80376f7608bc943a948ee6093c4fbad11de3cddfcb3bccd1729
+ size 3664499200
L3-8B-Everything-COT-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0857d183a1ad695a42fb37cff2eb69b582ce3a1dfcc45cb2459081ce876905d4
+ size 4661211648
L3-8B-Everything-COT-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:420bdbac1edfda3bebba90749b0e3f7b3fe02c237444e8c0b5212e69bf2629a7
+ size 4920734208
L3-8B-Everything-COT-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03c6a57277ac9afa0df2b940799b158613f1f0399a8c1b829e22a07a567ffb98
+ size 4692668928
L3-8B-Everything-COT-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f0bb71164bab55741df84b22f84cc6fd34edfc2ba6074bd34e2bfbec74e2f31
+ size 5599293952
L3-8B-Everything-COT-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4253ff8a5addf91db808a397937b4db4be82ae05736fdb90bdc3389ea6386aa
+ size 5732987392
L3-8B-Everything-COT-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7331f8d5605ae59d234180f9fe1ba15a68c75fd082ad7674e52ac8dbbdb5276e
+ size 5599293952
L3-8B-Everything-COT-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5fd291c0ce05cdfc0d3e316b92f08436abe0ffcd40be6867c477d9082a503f01
+ size 6596006400
L3-8B-Everything-COT-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea7785db5ec7ab0d7b5f79e6015d47b943c8ca46cdb92d157751499772f6cbfb
+ size 8540770816
README.md ADDED
@@ -0,0 +1,118 @@
+ ---
+ tags:
+ - llm
+ - llama
+ - llama3
+ - TensorBlock
+ - GGUF
+ base_model: FPHam/L3-8B-Everything-COT
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
21
+
22
+ ## FPHam/L3-8B-Everything-COT - GGUF
23
+
24
+ This repo contains GGUF format model files for [FPHam/L3-8B-Everything-COT](https://huggingface.co/FPHam/L3-8B-Everything-COT).
25
+
26
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
27
+
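+ As a convenience, here is a minimal build sketch (not part of the original card) for checking out that exact commit; it assumes a CMake-based llama.cpp build, whose layout and flags may differ across versions:
+
+ ```shell
+ # Sketch: build llama.cpp at the compatible commit (b5165, hash from the link above)
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
+ git checkout 1d735c0b4fa0551c51c2f4ac888dd9a01f447985
+ cmake -B build && cmake --build build --config Release
+ ```
+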
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
70
+ ## Prompt template
71
+
72
+ ```
73
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
74
+
75
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
76
+
77
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
78
+ ```
79
+
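+ For illustration only (not from the original card), a sketch of passing a prompt in this template to llama.cpp's `llama-cli`; the `-m`, `-p`, `-n`, and `-e` (process escape sequences) flags exist in recent llama.cpp builds, and the binary path assumes the CMake build sketch above:
+
+ ```shell
+ # Sketch: run the Q4_K_M quant on one templated prompt; -e turns the \n escapes into real newlines
+ ./build/bin/llama-cli -m L3-8B-Everything-COT-Q4_K_M.gguf -e \
+   -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWhat is 2+2? Think step by step.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" \
+   -n 256
+ ```
+
+ Recent `llama-cli` builds also offer a conversation mode (`-cnv`) that applies the chat template embedded in the GGUF, which avoids hand-writing the special tokens.
+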
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [L3-8B-Everything-COT-Q2_K.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [L3-8B-Everything-COT-Q3_K_S.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
+ | [L3-8B-Everything-COT-Q3_K_M.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
+ | [L3-8B-Everything-COT-Q3_K_L.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
+ | [L3-8B-Everything-COT-Q4_0.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [L3-8B-Everything-COT-Q4_K_S.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
+ | [L3-8B-Everything-COT-Q4_K_M.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
+ | [L3-8B-Everything-COT-Q5_0.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [L3-8B-Everything-COT-Q5_K_S.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
+ | [L3-8B-Everything-COT-Q5_K_M.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
+ | [L3-8B-Everything-COT-Q6_K.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
+ | [L3-8B-Everything-COT-Q8_0.gguf](https://huggingface.co/tensorblock/FPHam_L3-8B-Everything-COT-GGUF/blob/main/L3-8B-Everything-COT-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/FPHam_L3-8B-Everything-COT-GGUF --include "L3-8B-Everything-COT-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/FPHam_L3-8B-Everything-COT-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
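+
+ Optionally (not part of the original instructions), you can verify a downloaded file against the `oid sha256` values recorded in the LFS pointers above; `sha256sum` assumes GNU coreutils (on macOS, use `shasum -a 256`):
+
+ ```shell
+ # Compare the printed hash with the sha256 listed for this file in the commit above
+ sha256sum MY_LOCAL_DIR/L3-8B-Everything-COT-Q2_K.gguf
+ ```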