16-bit (float16) version of the weights from PharMolix/BioMedGPT-LM-7B, for easier download, finetuning, and model merging.
Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PharMolix/BioMedGPT-LM-7B")
model = AutoModelForCausalLM.from_pretrained(
    "PharMolix/BioMedGPT-LM-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
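Loading in float16 halves the memory footprint relative to the original float32 checkpoint. A minimal sketch of the size ratio using a small tensor (illustrative only, not part of the model card's code):

```python
import torch

# float32 uses 4 bytes per element; float16 uses 2, halving storage.
t32 = torch.zeros(1000, dtype=torch.float32)
t16 = t32.to(torch.float16)
print(t32.element_size() * t32.numel())  # 4000 bytes
print(t16.element_size() * t16.numel())  # 2000 bytes
```

The same 2x saving applies to the ~7B parameters of this model, which is why the fp16 copy is faster to download and cheaper to finetune or merge.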
Model tree for monsoon-nlp/BioMedGPT-16bit
Base model: PharMolix/BioMedGPT-LM-7B