GGUF-IQ-Imatrix experimental quants for dreamgen/opus-v1.2-llama-3-8b.

These will have to be uploaded again later.

Using a different testing configuration to work around some currently reported issues and to complete the imatrix data generation.
This is experimental. Proper support and fixes should be coming in the respective projects in due time.
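For context, imatrix-based quants like these are typically produced with llama.cpp's importance-matrix workflow: first collect activation statistics over a calibration text, then pass them to the quantizer. The sketch below shows that general flow; the filenames, calibration data, and chosen quant type (IQ4_XS) are illustrative placeholders, not the exact configuration used for this upload.

```shell
# Sketch of a typical llama.cpp imatrix workflow (placeholder filenames,
# not the exact commands used for these quants).

# 1. Generate importance-matrix data from a calibration text file.
./llama-imatrix -m opus-v1.2-llama-3-8b-F16.gguf \
    -f calibration-data.txt -o imatrix.dat

# 2. Quantize using the imatrix; IQ4_XS is one example of an IQ quant type.
./llama-quantize --imatrix imatrix.dat \
    opus-v1.2-llama-3-8b-F16.gguf opus-v1.2-llama-3-8b-IQ4_XS.gguf IQ4_XS
```

The same imatrix file can be reused to produce each of the quant sizes listed below by changing only the output name and quant type.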

Format: GGUF · Model size: 8.03B params · Architecture: llama

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit.
