eaddario/Llama-xLAM-2-8b-fc-r-GGUF
Tags: Text Generation · GGUF · English · quant · experimental · conversational
Dataset: eaddario/imatrix-calibration
arXiv: 2406.17415
License: cc-by-nc-4.0
Files and versions (branch: main)
1 contributor · History: 24 commits
Latest commit: Update README.md · f6d239a (verified) · eaddario · 10 days ago
imatrix/ · Generate imatrices · 10 days ago
logits/ · Generate base model logits · 10 days ago
scores/ · Add GGUF internal file structure · 10 days ago
.gitattributes · 1.6 kB · Update .gitattributes · 10 days ago
.gitignore · 6.78 kB · Add .gitignore · 10 days ago
Llama-xLAM-2-8B-fc-r-F16.gguf · 16.1 GB · LFS · Convert safetensor to GGUF @ F16 · 10 days ago
Llama-xLAM-2-8B-fc-r-IQ3_M.gguf · 3.69 GB · LFS · Layer-wise quantization IQ3_M · 10 days ago
Llama-xLAM-2-8B-fc-r-IQ3_S.gguf · 3.43 GB · LFS · Layer-wise quantization IQ3_S · 10 days ago
Llama-xLAM-2-8B-fc-r-IQ4_NL.gguf · 4.39 GB · LFS · Layer-wise quantization IQ4_NL · 10 days ago
Llama-xLAM-2-8B-fc-r-Q3_K_L.gguf · 3.76 GB · LFS · Layer-wise quantization Q3_K_L · 10 days ago
Llama-xLAM-2-8B-fc-r-Q3_K_M.gguf · 3.56 GB · LFS · Layer-wise quantization Q3_K_M · 10 days ago
Llama-xLAM-2-8B-fc-r-Q3_K_S.gguf · 3.31 GB · LFS · Layer-wise quantization Q3_K_S · 10 days ago
Llama-xLAM-2-8B-fc-r-Q4_K_M.gguf · 4.41 GB · LFS · Layer-wise quantization Q4_K_M · 10 days ago
Llama-xLAM-2-8B-fc-r-Q4_K_S.gguf · 4.28 GB · LFS · Layer-wise quantization Q4_K_S · 10 days ago
Llama-xLAM-2-8B-fc-r-Q5_K_M.gguf · 5.38 GB · LFS · Layer-wise quantization Q5_K_M · 10 days ago
Llama-xLAM-2-8B-fc-r-Q5_K_S.gguf · 5.24 GB · LFS · Layer-wise quantization Q5_K_S · 10 days ago
Llama-xLAM-2-8B-fc-r-Q6_K.gguf · 6.57 GB · LFS · Layer-wise quantization Q6_K · 10 days ago
Llama-xLAM-2-8B-fc-r-Q8_0.gguf · 7.73 GB · LFS · Layer-wise quantization Q8_0 · 10 days ago
README.md · 18.7 kB · Update README.md · 10 days ago
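The variants above trade file size against output quality: a higher-bit quantization (Q8_0, Q6_K) is larger but closer to the F16 original, while the 3-bit variants fit tighter memory budgets. As a minimal sketch (the `pick_quant` helper is hypothetical and not part of this repository; the file names and sizes are taken from the listing above), one could select the largest variant that fits a given memory budget:

```python
# Quantized GGUF files from the listing above, with on-disk sizes in GB.
# pick_quant() is an illustrative helper, not part of this repository.
QUANTS = {
    "Llama-xLAM-2-8B-fc-r-Q3_K_S.gguf": 3.31,
    "Llama-xLAM-2-8B-fc-r-IQ3_S.gguf": 3.43,
    "Llama-xLAM-2-8B-fc-r-Q3_K_M.gguf": 3.56,
    "Llama-xLAM-2-8B-fc-r-IQ3_M.gguf": 3.69,
    "Llama-xLAM-2-8B-fc-r-Q3_K_L.gguf": 3.76,
    "Llama-xLAM-2-8B-fc-r-Q4_K_S.gguf": 4.28,
    "Llama-xLAM-2-8B-fc-r-IQ4_NL.gguf": 4.39,
    "Llama-xLAM-2-8B-fc-r-Q4_K_M.gguf": 4.41,
    "Llama-xLAM-2-8B-fc-r-Q5_K_S.gguf": 5.24,
    "Llama-xLAM-2-8B-fc-r-Q5_K_M.gguf": 5.38,
    "Llama-xLAM-2-8B-fc-r-Q6_K.gguf": 6.57,
    "Llama-xLAM-2-8B-fc-r-Q8_0.gguf": 7.73,
    "Llama-xLAM-2-8B-fc-r-F16.gguf": 16.1,
}

def pick_quant(budget_gb: float):
    """Return the largest listed file that fits within budget_gb, or None."""
    fitting = {name: size for name, size in QUANTS.items() if size <= budget_gb}
    if not fitting:
        return None
    # Largest file under the budget = highest-precision variant that fits.
    return max(fitting, key=fitting.get)

print(pick_quant(8.0))  # Q8_0 (7.73 GB) is the largest file under 8 GB
```

The chosen filename could then be fetched with, for example, `huggingface_hub.hf_hub_download(repo_id="eaddario/Llama-xLAM-2-8b-fc-r-GGUF", filename=...)`; note that on-disk size is only a lower bound on the RAM/VRAM needed at inference time, since the KV cache and compute buffers add overhead.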