UD quants please 🥺
#5 · opened by Ainonake
If what was done to DeepSeek is possible with this model too, then it will be possible to run it locally.
Working on it!!!
gguf_init_from_file_impl: invalid magic characters: '', expected 'GGUF'
llama_model_load: error loading model: llama_model_loader: failed to load model from /mnt/e/q3i.gguf
Seems like the 30 GB q4_K_M doesn't work? I'll try updating llama.cpp.
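For what it's worth, the "invalid magic characters" error usually means the file on disk doesn't start with the GGUF header at all (e.g. a truncated or corrupted download), rather than a llama.cpp version problem. A minimal sketch to check the first bytes of a file, using the /mnt/e/q3i.gguf path from the log above as a placeholder:

```python
# Minimal sketch: check whether a file starts with the GGUF magic bytes.
# A valid GGUF file begins with the ASCII bytes b"GGUF" followed by a
# 4-byte little-endian format version; anything else suggests the file
# was truncated or corrupted during download.
import struct
import sys

def check_gguf(path: str) -> None:
    with open(path, "rb") as f:
        header = f.read(8)  # 4 magic bytes + 4-byte version
    if len(header) < 8:
        print(f"{path}: too short to be a GGUF ({len(header)} bytes read)")
        return
    magic = header[:4]
    version = struct.unpack("<I", header[4:8])[0]
    if magic == b"GGUF":
        print(f"{path}: GGUF magic found, format version {version}")
    else:
        print(f"{path}: bad magic {magic!r}, expected b'GGUF' -- likely a bad download")

if __name__ == "__main__":
    # Path from the error message above; pass your own file as an argument.
    check_gguf(sys.argv[1] if len(sys.argv) > 1 else "/mnt/e/q3i.gguf")
```

If the magic check fails, re-downloading the file is usually the fix; if it passes, then updating llama.cpp is the next thing to try.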