When I run "python setup_env.py -md models/BitNet-b1.58-2B-4T -q tl2" specifying "tl2", I get the following error and cannot create a gguf.

#5 opened by 86egVer03

Has anyone created a gguf specifying "tl2"?

I am currently testing the "i2_s" model provided in this repository using "BitNet.cpp", but its answers were not useful, so I suspect there is an error in the procedure I reproduced.
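For reference, I invoked the i2_s model roughly as shown in the BitNet.cpp README (the model path below is just where my local copy happens to live):

python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf -p "You are a helpful assistant" -cnv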

When I try to run "python setup_env.py -md models/BitNet-b1.58-2B-4T -q tl2" specifying "tl2", I get the following error and cannot create a gguf.

This may be due to insufficient memory; if anyone has succeeded, I would especially like to know your memory configuration.
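If it helps with comparing setups, here is a small Python snippet for reporting total and available RAM (it needs the third-party psutil package, installed with "pip install psutil"):

import psutil

# Report total and currently available physical memory in GiB.
vm = psutil.virtual_memory()
print(f"total RAM: {vm.total / 2**30:.1f} GiB, available: {vm.available / 2**30:.1f} GiB")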

I am using Windows 10 Home 64-bit with 4 GB of memory.

If anyone manages to do so, I would appreciate it if you could upload a gguf created with "tl2" to this repository.

INFO:hf-to-gguf:Loading model: BitNet-b1.58-2B-4T
Traceback (most recent call last):
  File "c:\work\models\BitNet\utils\convert-hf-to-gguf-bitnet.py", line 1165, in <module>
    main()
  File "c:\work\models\BitNet\utils\convert-hf-to-gguf-bitnet.py", line 1143, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "c:\work\models\BitNet\utils\convert-hf-to-gguf-bitnet.py", line 240, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'BitNetForCausalLM' not supported!
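Looking at the traceback, the converter seems to pick its model class from the "architectures" entry in the model's config.json and fails when that string is not registered. Here is a minimal sketch of that lookup pattern to illustrate the failure; this is not the actual convert-hf-to-gguf-bitnet.py code, and the registry contents are hypothetical placeholders:

# Minimal sketch of the lookup that appears to fail; the registry entry
# here is a hypothetical placeholder, not the converter's real contents.
SUPPORTED_ARCHITECTURES = {
    "ExampleForCausalLM": object,  # placeholder converter class
}

def from_model_architecture(arch: str):
    try:
        return SUPPORTED_ARCHITECTURES[arch]
    except KeyError:
        # Same failure mode as in the traceback above.
        raise NotImplementedError(f"Architecture {arch!r} not supported!") from None

hparams = {"architectures": ["BitNetForCausalLM"]}  # as reported for this model
model_class = from_model_architecture(hparams["architectures"][0])

If that reading is right, the error fires while resolving the model class, before any quantization starts, which suggests the architecture string rather than memory may be the problem.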