---
base_model:
- CohereForAI/c4ai-command-a-03-2025
---
This is a W8A8-FP8 quantization of the base model, created with [llm-compressor](https://github.com/vllm-project/llm-compressor). It can be loaded and served with [vllm](https://github.com/vllm-project/vllm).
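
A minimal sketch of loading the quant with vLLM's offline `LLM` API. The model id below is a placeholder for this repo's path on the Hub, not a confirmed name:

```python
from vllm import LLM, SamplingParams

# Placeholder model id; replace with this repository's actual Hub path.
llm = LLM(model="your-username/c4ai-command-a-03-2025-FP8")

sampling_params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Write a haiku about quantization."], sampling_params)

for output in outputs:
    print(output.outputs[0].text)
```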