
Training GPU hours

#9
by dragon0116 - opened

Thanks for the great work! The model card mentions that the 34B Code-LLM was fine-tuned with QLoRA. I'm curious about the typical hardware configuration used (e.g., 4x A100-80GB GPUs?) and the total training time in hours.
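For context, here is a rough back-of-envelope way to estimate GPU hours from a training setup. All numbers in the example (token count, epochs, throughput, GPU count) are hypothetical placeholders, not the authors' actual configuration:

```python
# Back-of-envelope estimate of fine-tuning GPU hours.
# Every number used below is a hypothetical placeholder; substitute real
# values from the model card if the authors publish them.

def estimate_gpu_hours(dataset_tokens, epochs, tokens_per_sec_per_gpu, num_gpus):
    """Return (wall-clock hours, total GPU hours) for a training run."""
    total_tokens = dataset_tokens * epochs
    wall_seconds = total_tokens / (tokens_per_sec_per_gpu * num_gpus)
    wall_hours = wall_seconds / 3600
    return wall_hours, wall_hours * num_gpus

# Hypothetical example: 1B training tokens, 2 epochs,
# ~350 tokens/s/GPU for a 34B model under QLoRA, 4x A100-80GB.
wall, gpu_hours = estimate_gpu_hours(1_000_000_000, 2, 350, 4)
print(f"~{wall:.0f} wall-clock hours, ~{gpu_hours:.0f} GPU hours")
```

This only gives an order-of-magnitude figure; real throughput depends heavily on sequence length, batch size, and interconnect.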
