---
base_model:
- trillionlabs/Trillion-7B-preview
pipeline_tag: text-generation
---
[devquasar.com](https://devquasar.com)
Quantized version of: [trillionlabs/Trillion-7B-preview](https://huggingface.co/trillionlabs/Trillion-7B-preview)
*Note*
I've forced llama.cpp to use the llama-bpe tokenizer, as the checksum of the model's original tokenizer was not present in the converter code.
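For reference, here is a minimal sketch of one way to force the pre-tokenizer during conversion (not necessarily the exact change used for this quant). It assumes llama.cpp's `convert_hf_to_gguf.py` is importable and still exposes a `Model.get_vocab_base_pre` method that raises `NotImplementedError` on an unknown tokenizer checksum; class and method names may differ across llama.cpp revisions.

```python
# Minimal sketch, not necessarily the exact change applied for this quant.
# convert_hf_to_gguf.py fingerprints the tokenizer with a checksum and maps
# known checksums to a pre-tokenizer name, raising NotImplementedError for
# unknown ones. Falling back to "llama-bpe" on that error reproduces the
# forced behaviour described above.
# Assumes convert_hf_to_gguf.py is on the import path and exposes
# Model.get_vocab_base_pre (names may differ in your llama.cpp revision).
import convert_hf_to_gguf as conv

_original_get_vocab_base_pre = conv.Model.get_vocab_base_pre

def _get_vocab_base_pre_forced(self, tokenizer):
    try:
        # Use the normal checksum lookup when the tokenizer is recognised.
        return _original_get_vocab_base_pre(self, tokenizer)
    except NotImplementedError:
        # Unknown checksum: force the llama-bpe pre-tokenizer.
        return "llama-bpe"

conv.Model.get_vocab_base_pre = _get_vocab_base_pre_forced
```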
The model has produced meaningful output in English and Korean (the languages I've tried).
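If you want to reproduce that spot check, here is a minimal sketch using llama-cpp-python. The GGUF filename below is a placeholder; point it at whichever quant file you download from this repo.

```python
# Quick English/Korean spot check with llama-cpp-python.
# The model_path value is a placeholder, not an actual filename in this repo.
from llama_cpp import Llama

llm = Llama(model_path="Trillion-7B-preview.Q4_K_M.gguf", n_ctx=4096)  # placeholder filename

prompts = [
    "Explain what a GGUF file is in one sentence.",
    "안녕하세요! 간단히 자기소개를 해 주세요.",  # Korean: "Hello! Please introduce yourself briefly."
]

for prompt in prompts:
    # Relies on the chat template embedded in the GGUF, if present.
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])
```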
'Make knowledge free for everyone'