Upload Mistral-Small-24B-ArliAI-RPMax-v1.4-GGUF
- .gitattributes +22 -0
- BackyardAI_Banner.png +0 -0
- BackyardAI_Logo.png +0 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.BF16.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ1_M.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ1_S.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_M.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_S.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_XS.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_XXS.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_M.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_S.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_XS.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_XXS.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ4_XS.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_L.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_M.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_S.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_M.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_S.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q5_K_M.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q5_K_S.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q6_K.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.Q8_0.gguf +3 -0
- Mistral-Small-24B-ArliAI-RPMax-v1.4.imatrix +3 -0
- README.md +40 -0
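Each quantization above is a standalone file, so only the variant you plan to run needs to be fetched. As a minimal sketch (not part of this upload), here is one way to pull a single file with the `huggingface_hub` client; the repository id below is a placeholder and must be replaced with this repo's actual namespace/name:

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id -- substitute the actual <namespace>/<name> of this repository.
REPO_ID = "example-org/Mistral-Small-24B-ArliAI-RPMax-v1.4-GGUF"

# Download just one quantization; each .gguf listed above is independent.
path = hf_hub_download(
    repo_id=REPO_ID,
    filename="Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_M.gguf",
)
print("saved to", path)
```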
.gitattributes
CHANGED
@@ -33,3 +33,25 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.BF16.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-Small-24B-ArliAI-RPMax-v1.4.imatrix filter=lfs diff=lfs merge=lfs -text
BackyardAI_Banner.png
ADDED
BackyardAI_Logo.png
ADDED
Mistral-Small-24B-ArliAI-RPMax-v1.4.BF16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6892f33daf690a52be58f25d485b0013272b4ae5b70655b60e07fb1d08dfd244
+size 47153517568
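The block above is a Git LFS pointer, not the weights themselves: `oid` is the SHA-256 of the real file and `size` is its byte count (about 47 GB for the BF16 build). A minimal sketch for checking a finished download against those recorded values, assuming the file sits in the current directory:

```python
import hashlib
import os

# Values copied from the LFS pointer above (BF16 variant).
PATH = "Mistral-Small-24B-ArliAI-RPMax-v1.4.BF16.gguf"
EXPECTED_SHA256 = "6892f33daf690a52be58f25d485b0013272b4ae5b70655b60e07fb1d08dfd244"
EXPECTED_SIZE = 47153517568

# Hash in 8 MiB chunks so the ~47 GB file never has to fit in memory.
digest = hashlib.sha256()
with open(PATH, "rb") as f:
    for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
        digest.update(chunk)

assert os.path.getsize(PATH) == EXPECTED_SIZE, "size mismatch"
assert digest.hexdigest() == EXPECTED_SHA256, "checksum mismatch"
print("download matches the LFS pointer")
```

The same check works for any of the other quantizations below; only the filename, hash, and size change.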
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ1_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6a7f4bccff7da2f872adf4b9f7164b86a1a1215ffe1142281d5bbbe3e0f05cc
+size 5750494496
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ1_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1cf83be67252fcd416eefbeb70e30b5c2849afb74ca40a50013c2e630f9b1b8e
+size 5273720096
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25d84d153d2c45226eba378f724da2959a787cf4d7889018380ce51ae2b350c6
+size 8114050336
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f370f469c3969cefb28c90d7706e67ffea8ff3c465367787eec5121bf0b8245f
+size 7478351136
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cc8d962ebbb22eb97715ba655e8ed786dd9527fe8c7fc8e99d618a35f3ae42a
+size 7207032096
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ2_XXS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd1b8fd6ec6c65100a3bba09dd1a20558534622a151a4f807d59c21cc8cbc1db
+size 6545118496
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85c7cfb97676c9f762c459fb062939512616e839dffb8713a1bc39849fa4081f
+size 10650948896
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b92bf458492c0eedcea3b4eb2de9cf4e33485aae31cd92cec0bf4744e8246520
+size 10428126496
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da78777b78bcfa2a2cc2b68079a702cffae184cd84185a76ba7ec45fc1ff5003
+size 9907115296
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ3_XXS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0e9aff571783f54ced07adddb92a6489443370688f8c1e579fa3d7a3aaf64f9
+size 9280591136
Mistral-Small-24B-ArliAI-RPMax-v1.4.IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc1e68eda88ec815f2df77394138433d0d21712a30fa597e590ccbaaddf78d6a
+size 12758914336
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36bd5aa9829e184d50f73dd86bdbd6c75793631121352c0cbc6002dbc32caa7c
+size 12400759808
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a3009826dcd32f03c7af2928e428894d3f5c5114a745bbcbdb3fd05318f5880
+size 11474080768
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55e8e6c0ba6bd88c5a492eef9dc103a5755b8f9ecc258b4aa084b60612479796
+size 10400273408
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad13b49cf9b9260519786b28d6a123607f3c75b064d04121fe285b282126f9d4
+size 14333907968
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1d02c9557b9bc9d08ff4736aed7416701d1ed3cb62ff2e757dd6f5f6b4b1756
+size 13549278208
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79082992089e49de657456b8ae639a0cf73cc09e469dc336121dcf720095c277
+size 16763982848
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c660f2c5033d0e3b3e8bc8c69bb18e60712a4afa076255d778afc5171788e38
+size 16304411648
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c058052d7a97ff9d4582aefc28cd47f63beed9fdb4748b9da4d462b176f7fbb2
+size 19345937408
Mistral-Small-24B-ArliAI-RPMax-v1.4.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5356776e1848f3b763af1d4e730e598fed9c87841b0dbc716d1346762486a1f
+size 25054778368
Mistral-Small-24B-ArliAI-RPMax-v1.4.imatrix
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b22946fcab0cda566e7d3beebc9fd2476214cce530b609eb2c4d96aac3095c1c
+size 10003584
README.md
ADDED
@@ -0,0 +1,40 @@
+---
+base_model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
+license: apache-2.0
+model_name: Mistral-Small-24B-ArliAI-RPMax-v1.4-GGUF
+quantized_by: brooketh
+parameter_count: 23572403200
+---
+<img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;">
+
+**<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>**
+
+<p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p>
+
+<p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request additional models at r/LLM_Quants.</a></p>
+
+***
+# Mistral Small ArliAI RPMax V1.4 24B
+- **Creator:** [ArliAI](https://huggingface.co/ArliAI/)
+- **Original:** [Mistral Small ArliAI RPMax V1.4 24B](https://huggingface.co/ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4)
+- **Date Created:** 2025-02-09
+- **Trained Context:** 32768 tokens
+- **Description:** Trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive: no two entries in the dataset share the same characters or situations, so the model does not latch onto a single personality and can understand and respond appropriately to any character or situation.
+***
+## What is a GGUF?
+GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware.
+GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
+
+***
+<img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;">
+
+## Backyard AI
+- Free, local AI chat application.
+- One-click installation on Mac and PC.
+- Automatically uses GPU for maximum speed.
+- Built-in model manager.
+- High-quality character hub.
+- Zero-config desktop-to-mobile tethering.
+Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable.
+**Join us on [Discord](https://discord.gg/SyNN2vC9tQ)**
+***
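The README's "What is a GGUF?" section notes that a GGUF can be split between CPU and GPU. Backyard AI handles that split automatically; for readers working outside the app, here is a minimal sketch of the same idea using the `llama-cpp-python` bindings, assuming that package is installed with GPU support and that a quantization from this repository has already been downloaded locally:

```python
from llama_cpp import Llama

# Any quantization from this repository, downloaded locally.
MODEL_PATH = "Mistral-Small-24B-ArliAI-RPMax-v1.4.Q4_K_M.gguf"

# n_gpu_layers controls the CPU/GPU split: 0 keeps everything on the CPU,
# -1 offloads every layer, and values in between offload only that many layers.
llm = Llama(
    model_path=MODEL_PATH,
    n_ctx=8192,       # context window; the model was trained to 32768 tokens
    n_gpu_layers=20,  # partial offload for GPUs with limited VRAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

Lower quantizations (IQ2/IQ3) trade some coherence for a smaller memory footprint, which allows more layers to be offloaded on the same hardware.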