no run #1
opened by rakmik
rakmik: Can mobiuslabsgmbh/Llama-2-70b-hf-2bit_g16_s128-HQQ run on a Colab T4?
mobicham: This is a very old model that is not compatible with hqq >= v0.2. You can either install an old version of hqq, or quantize the model on-the-fly in transformers:
https://github.com/mobiusml/hqq/?tab=readme-ov-file#transformers-
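The on-the-fly route mentioned above can be sketched roughly like this (a sketch, not the library's exact recipe: it assumes transformers >= 4.41 with the hqq package installed, and the kwargs mirror the old checkpoint's 2-bit / group-size-16 settings):

```python
def load_2bit_on_the_fly(model_id="meta-llama/Llama-2-70b-hf"):
    """Quantize the model while loading it, so no pre-quantized
    checkpoint is needed. Assumes transformers >= 4.41 with the
    hqq package installed; exact kwargs are illustrative."""
    from transformers import AutoModelForCausalLM, HqqConfig

    # nbits/group_size chosen to match the old 2bit_g16 checkpoint
    quant_config = HqqConfig(nbits=2, group_size=16)
    return AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        torch_dtype="auto",
        quantization_config=quant_config,
    )
```

Note that loading a 70B model this way still requires enough VRAM to hold the quantized weights, so it will not fit on a single 15 GB T4.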
mobicham changed discussion status to closed
rakmik: Thank you.
Have you converted Llama-3.3-70b or Llama-3.1-70b to 2-bit?
mobiuslabsgmbh/Hermes-3-Llama-3.1-70B_4bitgs64_hqq is 42 GB, which is big. Can it run on a Colab T4, or on Kaggle with 32 GB of VRAM?
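As a rough sanity check on whether a 70B checkpoint fits in 32 GB, the weight footprint can be estimated from the parameter count alone (a back-of-the-envelope sketch: the two 16-bit metadata values per group are an assumption, and real checkpoints add KV-cache and activation overhead on top):

```python
def quantized_weight_gb(n_params, nbits, group_size, meta_bits=16):
    """Rough VRAM needed for quantized weights: packed weights plus
    one scale and one zero-point (meta_bits each) per group."""
    weight_bits = n_params * nbits
    meta_bits_total = (n_params / group_size) * 2 * meta_bits
    return (weight_bits + meta_bits_total) / 8 / 1e9

# 70B parameters at 4-bit, group size 64 (as in the Hermes-3 checkpoint):
print(round(quantized_weight_gb(70e9, 4, 64), 1))  # prints 39.4
```

That lands close to the 42 GB file size, and already exceeds 32 GB before any runtime overhead, so the 4-bit 70B model will not fit on Kaggle's 32 GB, let alone a 15 GB Colab T4.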