ModelCloud / internlm-2.5-7b-gptq-4bit
Model repository by ModelCloud.AI
Tags: Feature Extraction · Transformers · Safetensors · internlm2 · internlm 2.5 · gptq · 4bit · gptqmodel · custom_code · 4-bit precision
License: apache-2.0
Branch: main · 2 contributors · History: 5 commits
Latest commit: Update README.md by Qubitium (d1a8857, verified, 10 months ago)
File                              Size       Last commit                                Age
.gitattributes                    1.52 kB    initial commit                             10 months ago
README.md                         513 Bytes  Update README.md                           10 months ago
config.json                       1.37 kB    Upload folder using huggingface_hub (#1)   10 months ago
configuration_internlm2.py        8.84 kB    Upload folder using huggingface_hub (#1)   10 months ago
model.safetensors (LFS)           5.15 GB    Upload folder using huggingface_hub (#1)   10 months ago
modeling_internlm2.py             80.7 kB    Upload folder using huggingface_hub (#1)   10 months ago
quantize_config.json              340 Bytes  Upload folder using huggingface_hub (#1)   10 months ago
special_tokens_map.json           713 Bytes  Upload folder using huggingface_hub (#1)   10 months ago
tokenization_internlm2.py         8.81 kB    Upload folder using huggingface_hub (#1)   10 months ago
tokenization_internlm2_fast.py    7.81 kB    Upload folder using huggingface_hub (#1)   10 months ago
tokenizer.json                    5.75 MB    Upload folder using huggingface_hub (#1)   10 months ago
tokenizer.model (LFS)             1.48 MB    Upload folder using huggingface_hub (#1)   10 months ago
tokenizer_config.json             2.51 kB    Upload folder using huggingface_hub (#1)   10 months ago