Intel/gpt-neox-japanese-2.7b-int8-inc
Text Generation · Dataset: oscar · Language: Japanese
Tags: japanese, gpt_neox, gpt, lm, nlp, int8, neural-compressor, Intel® Neural Compressor, PostTrainingStatic, Eval Results
License: mit
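The int8, neural-compressor, and PostTrainingStatic tags indicate the checkpoint was produced by post-training static quantization with Intel® Neural Compressor. The repository README carries the actual loading instructions; the snippet below is only a minimal sketch, assuming the checkpoint can be consumed through optimum-intel's INCModelForCausalLM class and that tokenizer files are resolvable from the same repository, neither of which is confirmed here.

```python
# Minimal sketch, not the README's verbatim instructions: load the INT8
# checkpoint through optimum-intel. INCModelForCausalLM is a real
# optimum-intel class; whether this repository is packaged for it is an
# assumption based on the neural-compressor tag.
from transformers import AutoTokenizer
from optimum.intel import INCModelForCausalLM

model_id = "Intel/gpt-neox-japanese-2.7b-int8-inc"
model = INCModelForCausalLM.from_pretrained(model_id)

# Assumption: tokenizer files are available in this repository; if only the
# quantized weights are shipped, the tokenizer of the original FP32 model
# would be needed instead.
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("こんにちは、", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```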
Files and versions (branch: main) · 2 contributors · History: 4 commits
Latest commit: update loading instructions (1e61ddd) by echarlaix (HF Staff), about 1 year ago
.gitattributes (Safe, 1.48 kB) · initial commit, about 2 years ago
README.md (Safe, 1.34 kB) · update loading instructions, about 1 year ago
best_model.pt (pickle, 2.69 GB, LFS) · add int8 model, almost 2 years ago
    Detected Pickle imports (11): torch.LongStorage, torch.per_channel_affine, collections.OrderedDict, torch.QInt8Storage, torch._utils._rebuild_tensor_v2, torch._utils._rebuild_qtensor, torch.FloatStorage, torch.quint8, torch.QUInt8Storage, torch.qint8, torch.DoubleStorage (see the inspection sketch after this listing)
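Because best_model.pt is a plain pickled PyTorch artifact (the qint8/quint8 storages and per_channel_affine scheme in the import list above all come from torch), one way to see what it actually contains before wiring it into a loader is to open it directly. This is a hedged inspection sketch, not part of the model card; the local file path is assumed to be a downloaded copy of the LFS file.

```python
import torch

# Inspection sketch (assumed local path). torch.load executes the pickle,
# so only run this on files you trust.
obj = torch.load("best_model.pt", map_location="cpu")
print(type(obj))

# The checkpoint may be a full quantized module or a state-dict-like mapping;
# print a few top-level keys or parameter names to find out.
if isinstance(obj, dict):
    for key in list(obj)[:10]:
        print(key, type(obj[key]))
elif isinstance(obj, torch.nn.Module):
    for name, _ in list(obj.named_parameters())[:10]:
        print(name)
```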