
eaddario/Dolphin3.0-R1-Mistral-24B-GGUF (Text Generation)
Nicely done!
Purely out of curiosity, what kind of rig (spec) are you using to run 600B-param models locally?
- Add `pydantic==2.10.6` to `requirements.txt`, or upgrade Gradio to the latest version (see the example `requirements.txt` sketch after this list).
- `torch>=2.2.0` (for Zero GPU Spaces).
- `transformers<=4.49.0` (for Spaces using Transformers or Diffusers).
- Downgrade `huggingface_hub` to the old version (`huggingface_hub==0.25.2`) if an error like `cached_download is not available` occurs or inference does not work properly.
- Setting `WORKDIR` in the Dockerfile may cause the application to fail to start with error 137 (Docker Spaces, https://discuss.huggingface.co/t/error-code-137-cache-error/152177).
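A minimal `requirements.txt` sketch collecting the pins from the list above, assuming a Gradio Space that uses Transformers; the unpinned `gradio` line is my addition, and the `huggingface_hub` pin is only needed if you actually hit the `cached_download` error:

```
# requirements.txt (sketch) — pins taken from the list above
pydantic==2.10.6        # or instead upgrade Gradio to the latest version
torch>=2.2.0            # for Zero GPU Spaces
transformers<=4.49.0    # for Spaces using Transformers or Diffusers
huggingface_hub==0.25.2 # only if the cached_download error occurs
gradio                  # assumption: the Space uses Gradio
```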