GGUF version of https://huggingface.co/Kotokin/Mistral-7B-NSFWSTORY-lora

Probably NSFW.

Usage with koboldcpp:

```shell
./koboldcpp --lora Kotokin_Mistral-7B-NSFWSTORY-lora/Kotokin_Mistral-7B-NSFWSTORY-F16-LoRA.gguf --model TheBloke_zephyr-7B-beta-GGUF/zephyr-7b-beta.Q4_K_M.gguf
```
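A minimal sketch of the same invocation with the paths pulled into variables, so the LoRA can be applied to a different GGUF base model by changing one line (the paths are the ones from the usage line above; adjust them to your local layout):

```shell
#!/bin/sh
# Paths from the usage example above; swap MODEL to try the LoRA on another base.
LORA="Kotokin_Mistral-7B-NSFWSTORY-lora/Kotokin_Mistral-7B-NSFWSTORY-F16-LoRA.gguf"
MODEL="TheBloke_zephyr-7B-beta-GGUF/zephyr-7b-beta.Q4_K_M.gguf"

# Print the command that would be run (drop the echo to actually launch koboldcpp).
echo "./koboldcpp --lora $LORA --model $MODEL"
```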

GGUF details:

- Model size: 27.3M params
- Architecture: llama
- Precision: 16-bit