Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF
This model was converted to GGUF format from Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
⚠️ THIS MODEL IS EXPERIMENTAL!!
More than two months after the release of Superthoughts lite v1, we are finally releasing the new version, v2.
Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) built from 4 specially fine-tuned experts based on Llama-3.2-1B models.
Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (~5 tokens per second).
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There is one expert each for math, code, general conversation, and medical situations.
- Long context: the model supports up to 131,072 input tokens, and can generate up to 16,384 tokens.
- To enable proper reasoning, set this as the system prompt:
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
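With that prompt in place, replies should take this shape (a generic illustration of the format, not verbatim model output):
<think>
(step-by-step reasoning about the question)
</think>
(the final answer)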
⚠️ Due to the nature of an experimental model, it may fall into reasoning loops; users are responsible for all outputs from this model. For now, this experimental model is more of a proof of concept. It fully works and delivers solid performance for a model with fewer than 2 billion parameters active per token.
If you have any questions, feel free to open a "New Discussion".
Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -p "The meaning to life and the universe is"
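To chat interactively with the reasoning system prompt from the CLI, here is a minimal sketch. It assumes a recent llama.cpp build where -cnv enables conversation mode and the -p prompt is treated as the system prompt; the bash $'...' quoting turns the \n escapes into real newlines:
llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -cnv -p $'You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).'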
Server:
llama-server --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -c 2048
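Once the server is running, you can send chat requests over its OpenAI-compatible API. A minimal sketch, assuming the default llama-server address of 127.0.0.1:8080 (the \n escapes in the JSON string become real newlines in the system prompt):
curl http://127.0.0.1:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "system", "content": "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."},
    {"role": "user", "content": "What is 17 * 24?"}
  ]
}'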
Model tree for Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF
Base model: meta-llama/Llama-3.2-1B-Instruct