Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF

This model was converted to GGUF format from Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427 using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

โš ๏ธTHIS MODEL IS EXPERIMENTAL!!

After more than two months since the release of Superthoughts lite v1, we are finally releasing the new version, v2. Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) composed of four specially fine-tuned experts based on llama-3.2-1B models.

Information

  • In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second).
  • The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
  • This model has a total of 3.91B parameters, and two experts are active for each token. There is one expert each for math, code, general conversation, and medical questions.
  • Long context: the model supports up to 131,072 input tokens, and can generate up to 16,384 tokens.
  • To enable proper reasoning, set this as the system prompt (a minimal parsing sketch follows below):
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
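
Since the model wraps its reasoning in <think> tags per the system prompt above, client code typically needs to separate the reasoning from the final answer. A minimal, illustrative sketch in Python (the regex and function name are ours, not part of the model):

import re

def split_reasoning(completion: str):
    # Split a completion into (reasoning, final_answer), assuming the
    # model followed the <think>...</think> format from the system prompt.
    # If no think block is present, treat the whole text as the answer.
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if match is None:
        return "", completion.strip()
    return match.group(1).strip(), completion[match.end():].strip()

reasoning, answer = split_reasoning("<think>\n2 + 2 = 4.\n</think>\n4")
print(answer)  # prints: 4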

โš ๏ธ Due to the nature of an experimental model, it may fall into reasoning loops; users are responsible for all outputs from this model. This experimental model is more of a proof-of-concept for now. It fully works and it has some pretty nice performance, for having less than 2 billion parameters activated per token.

If you have any questions, feel free to open a "New Discussion".

Use with llama.cpp

Install llama.cpp via brew (works on macOS and Linux):

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -p "The meaning to life and the universe is"
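
For an interactive chat that uses the chat template embedded in the GGUF, recent llama.cpp builds also accept a conversation flag and a system prompt. Flag availability varies by version, so check llama-cli --help first; this is a sketch, not a verified invocation:

llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -cnv -sys "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."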

Server:

llama-server --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -c 2048
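
Once the server is running, it exposes an OpenAI-compatible chat endpoint, so the reasoning system prompt can be sent with each request. A hedged Python sketch using the requests library (port 8080 is llama-server's default; the user question and max_tokens value are just examples):

import requests

SYSTEM_PROMPT = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before "
    "answering user questions. always respond in the following format:\n"
    "<think>\n(Your thinking process)\n</think>\n(Your final output)."
)

response = requests.post(
    "http://localhost:8080/v1/chat/completions",  # llama-server default port
    json={
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "What is 17 * 23?"},
        ],
        "max_tokens": 1024,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])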