SOLVED Running this v1.1 on llama.cpp

#3
by JeroenAdam - opened

Should I open an issue in the llama.cpp repo to get this working? I tried the latest llama.cpp and I'm getting this error:

llama_model_load: loading tensors from '.\models\ggml-vicuna-13b-1.1-q4_0.bin'
llama_model_load: unknown tensor '�{ϻ��ꙛ|幷��dg�� ��?+��d�ȕw�eW8��' in model file
llama_init_from_file: failed to load model

It looks like you're not up to date. Run git pull, then make.
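
A minimal sketch of that update-and-rebuild step, assuming a local clone of llama.cpp built with make (paths and the prompt are illustrative, not from the thread):

cd llama.cpp                                          # path to your local clone (assumed)
git pull                                              # pull the latest commits with the newer ggml format support
make clean && make                                    # rebuild so the binary can read the updated tensor layout
./main -m ./models/ggml-vicuna-13b-1.1-q4_0.bin -p "Hello"   # re-run with the same model file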

Also try verifying the SHA256 checksum of the downloaded model file.
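
For example, comparing the local file against the hash listed on the model page (the filename is taken from the error log above; the expected hash has to come from the repository's file listing, which isn't shown here):

sha256sum ./models/ggml-vicuna-13b-1.1-q4_0.bin

On Windows (which the backslash path in the log suggests), the equivalent is:

certutil -hashfile .\models\ggml-vicuna-13b-1.1-q4_0.bin SHA256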
