morriszms committed · commit 0256ba8 · verified · 1 parent: a8f971d

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Barcenas-Mistral-7b-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Barcenas-Mistral-7b-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9c95282bac3d9352b18614cdc4e047b078a5ea9d19cb70246f5e42f54c0a36a
+ size 2719242528

Barcenas-Mistral-7b-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:009508b5ac0c205a350e41dc40d7c1f73ebcc678ec6ba8599ba496ea6cacff1f
+ size 3822024992

Barcenas-Mistral-7b-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c7b2ff1ac232cb0ff0f5b56f91233a88ddfbf66c1b557589fc0c07123dab3189
+ size 3518986528

Barcenas-Mistral-7b-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60be9c076206345ec83b28e3f306b3dcbd21fdf23e294c4548f617891fc0eee8
+ size 3164567840

Barcenas-Mistral-7b-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d8e19a1ce3f0598db14ebd5d93aa686eba0489aba4ce535896054631b401272
+ size 4108917024

Barcenas-Mistral-7b-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4fb7affe3a92c0ad92550b7986ed09d7f56087f843f4df52f10a1b203686878
+ size 4368439584

Barcenas-Mistral-7b-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4cfdfa8033de29e70e475303a5f3397ff07342d71ba3adbd2059dcd9774118b
+ size 4140374304

Barcenas-Mistral-7b-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c9809444b06169deb2255159854c3fd1d2512717fd4b63b5f1996567e19d512
+ size 4997716256

Barcenas-Mistral-7b-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6bc610f4b34eda0b7debaf65587a5cebb281050488944d4e1456c6dde9f24eb7
+ size 5131409696

Barcenas-Mistral-7b-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f434a02e36a888f7c024949ee99e4bc495f3bb1c68bf10d022fb5cad6d6fb58b
+ size 4997716256

Barcenas-Mistral-7b-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:583994ac7d4f015dd3879cf6dc218f0d363cb8e2fdbbfb3d46b49bb8331f373b
+ size 5942065440

Barcenas-Mistral-7b-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b1b18a15abad19fd82593e49d892d9888618735b424cd29fde484259692e6fe
+ size 7695857952
README.md ADDED
@@ -0,0 +1,117 @@
+ ---
+ license: apache-2.0
+ datasets:
+ - Danielbrdz/Barcenas-lmsys-Dataset
+ language:
+ - en
+ - es
+ tags:
+ - TensorBlock
+ - GGUF
+ base_model: Danielbrdz/Barcenas-Mistral-7b
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## Danielbrdz/Barcenas-Mistral-7b - GGUF
+
+ This repo contains GGUF format model files for [Danielbrdz/Barcenas-Mistral-7b](https://huggingface.co/Danielbrdz/Barcenas-Mistral-7b).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5165](https://github.com/ggml-org/llama.cpp/commit/1d735c0b4fa0551c51c2f4ac888dd9a01f447985).
+
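Because compatibility is pinned to a specific llama.cpp commit, one way to reproduce it is to build llama.cpp at exactly that revision. A minimal sketch (editorial addition, not part of this commit), assuming git and a CMake toolchain are available:

```shell
# Fetch llama.cpp and check out the compatibility commit referenced above.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git checkout 1d735c0b4fa0551c51c2f4ac888dd9a01f447985
# Default CPU build; enable backends (CUDA, Metal, ...) as your platform allows.
cmake -B build
cmake --build build --config Release
```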
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ Unable to determine prompt format automatically. Please check the original model repository for the correct prompt format.
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Barcenas-Mistral-7b-Q2_K.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Barcenas-Mistral-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
+ | [Barcenas-Mistral-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
+ | [Barcenas-Mistral-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
+ | [Barcenas-Mistral-7b-Q4_0.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Barcenas-Mistral-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
+ | [Barcenas-Mistral-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
+ | [Barcenas-Mistral-7b-Q5_0.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Barcenas-Mistral-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
+ | [Barcenas-Mistral-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
+ | [Barcenas-Mistral-7b-Q6_K.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
+ | [Barcenas-Mistral-7b-Q8_0.gguf](https://huggingface.co/tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF/blob/main/Barcenas-Mistral-7b-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF --include "Barcenas-Mistral-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
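Because the LFS pointers in this commit record each file's SHA-256 digest, a download can be verified locally. A minimal sketch (editorial addition), assuming a Unix-like shell with `sha256sum` and using the Q2_K digest recorded above:

```shell
# Compare the downloaded file against the oid from its LFS pointer;
# sha256sum -c expects "<digest>  <path>" (two spaces) on stdin.
echo "b9c95282bac3d9352b18614cdc4e047b078a5ea9d19cb70246f5e42f54c0a36a  MY_LOCAL_DIR/Barcenas-Mistral-7b-Q2_K.gguf" | sha256sum -c -
```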
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Danielbrdz_Barcenas-Mistral-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
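A downloaded file can then be run with the llama.cpp build from the earlier sketch. Another minimal sketch (editorial addition), assuming the `llama-cli` binary from that build is on your PATH; since the prompt format could not be determined automatically (see the prompt template section), a plain completion prompt is used:

```shell
# -m selects the GGUF model file, -p supplies a prompt, -n caps generated tokens.
llama-cli -m MY_LOCAL_DIR/Barcenas-Mistral-7b-Q4_K_M.gguf -p "Hola, ¿cómo estás?" -n 64
```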