iproskurina/opt-125m-GPTQ-4bit-g128

Text Generation · English · opt · gptq · 4-bit precision
  • 1 contributor
History: 8 commits
Latest commit: iproskurina · Update README.md to include GPTQModel usage. · 053daf8 · verified · about 1 month ago
  • .gitattributes · 1.52 kB · initial commit · 8 months ago
  • README.md · 2.75 kB · Update README.md to include GPTQModel usage. · about 1 month ago
  • config.json · 747 Bytes · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • gptq_model-4bit-128g.safetensors · 202 MB · LFS · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • merges.txt · 456 kB · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • quantize_config.json · 211 Bytes · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • special_tokens_map.json · 548 Bytes · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • tokenizer.json · 2.11 MB · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • tokenizer_config.json · 669 Bytes · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
  • vocab.json · 798 kB · AutoGPTQ model for facebook/opt-125m: 4bits, gr128, desc_act=False · 8 months ago
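
The latest commit notes that the README was updated to cover GPTQModel usage. The repository's README documents the intended loading path; the snippet below is only a minimal sketch of loading this 4-bit, group-size-128 GPTQ checkpoint through the standard Transformers API, assuming `transformers`, `accelerate`, and a GPTQ backend such as `optimum` with `auto-gptq` (or `gptqmodel`) are installed. Exact package requirements may differ from what the README specifies.

```python
# Minimal sketch (assumption, not the repository's documented recipe):
# load the quantized checkpoint via transformers' GPTQ integration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iproskurina/opt-125m-GPTQ-4bit-g128"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization settings (4 bits, group size 128, desc_act=False) are
# picked up automatically from the repo's config/quantize_config.json.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```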