---
base_model: ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4
license: apache-2.0
model_name: Mistral-Small-24B-ArliAI-RPMax-v1.4-GGUF
quantized_by: brooketh
parameter_count: 23572403200
---
**The official library of GGUF format models for use in the local AI chat app, Backyard AI.**

**Download Backyard AI here to get started.**

Request additional models at r/LLM_Quants.
***
# Mistral Small ArliAI RPMax V1.4 24B
- **Creator:** [ArliAI](https://huggingface.co/ArliAI/)
- **Original:** [Mistral Small ArliAI RPMax V1.4 24B](https://huggingface.co/ArliAI/Mistral-Small-24B-ArliAI-RPMax-v1.4)
- **Date Created:** 2025-02-09
- **Trained Context:** 32768 tokens
- **Description:** Trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, so the model does not latch onto a single personality and can understand and respond appropriately to any character or situation.
***
## What is a GGUF?
GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can run efficiently on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantization levels. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
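To illustrate the idea behind weight quantization, here is a minimal, simplified sketch of block-wise integer quantization: a block of float weights is mapped to small signed integers plus one shared scale factor. This is an illustrative toy, not the actual GGUF encoding (real formats such as Q4_K use more elaborate layouts), and the function names are hypothetical.

```python
def quantize_block(weights, bits=4):
    """Map float weights to signed ints of the given bit width plus one
    shared float scale per block (a toy sketch, not real GGUF encoding)."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax
    if scale == 0.0:                           # all-zero block edge case
        scale = 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize_block(q, scale):
    """Reconstruct approximate float weights from the quantized block."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.07, 0.91]
q, scale = quantize_block(weights, bits=4)
approx = dequantize_block(q, scale)
# Each weight now costs 4 bits (plus the shared scale); the reconstruction
# error per weight is bounded by half the scale step.
```

Fewer bits per weight means a coarser grid of representable values, which is exactly the coherence-versus-size tradeoff described above.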