SageAttention2++: A More Efficient Implementation of SageAttention2
Abstract
SageAttention2++ improves attention efficiency by using FP8 Matmul with FP16 accumulation, achieving a 3.9x speedup over FlashAttention without losing accuracy.
The efficiency of attention is critical because its time complexity grows quadratically with sequence length. SageAttention2 addresses this by using quantization to accelerate the matrix multiplications (Matmul) in attention. To accelerate SageAttention2 further, we propose using the faster FP8 Matmul instruction that accumulates in FP16, which is 2x faster than the FP8 Matmul instruction used in SageAttention2. Our experiments show that SageAttention2++ achieves a 3.9x speedup over FlashAttention while maintaining the same attention accuracy as SageAttention2. As a result, SageAttention2++ effectively accelerates various models, including those for language, image, and video generation, with negligible loss in end-to-end metrics. The code will be available at https://github.com/thu-ml/SageAttention.
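As a rough numerical illustration of the idea (not the SageAttention2++ kernel), the sketch below quantizes both operands of a P·V block to FP8 (e4m3) with per-tensor scales and carries out the multiply in FP16. The function name, scaling scheme, and tensor shapes are illustrative assumptions; the actual 2x gain comes from the hardware FP8 mma instruction with an FP16 accumulator, which cannot be selected from plain PyTorch and is only approximated here by an FP16 matmul.

```python
# Minimal sketch, assuming PyTorch >= 2.1 (for torch.float8_e4m3fn).
# This is NOT the SageAttention2++ kernel; it only illustrates the
# quantization numerics of an FP8 P·V product kept in FP16.
import torch

def fp8_matmul_fp16_accum_sketch(p: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Quantize P and V to float8_e4m3fn, multiply in FP16, then rescale."""
    # e4m3 has a max representable value of 448; scale each operand into range.
    p_scale = p.abs().amax().clamp(min=1e-12) / 448.0
    v_scale = v.abs().amax().clamp(min=1e-12) / 448.0
    p_fp8 = (p / p_scale).to(torch.float8_e4m3fn)
    v_fp8 = (v / v_scale).to(torch.float8_e4m3fn)
    # An FP16 matmul stands in for the FP8 mma instruction with FP16 accumulation.
    out_fp16 = p_fp8.to(torch.float16) @ v_fp8.to(torch.float16)
    return out_fp16.float() * (p_scale * v_scale)

# Rough accuracy check against a full-precision reference.
torch.manual_seed(0)
p = torch.softmax(torch.randn(64, 128), dim=-1)  # attention-probability block
v = torch.randn(128, 64)                          # value block
ref = p @ v
err = (ref - fp8_matmul_fp16_accum_sketch(p, v)).abs().max()
print(f"max abs error vs FP32 reference: {err:.4f}")
```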
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- MMInference: Accelerating Pre-filling for Long-Context VLMs via Modality-Aware Permutation Sparse Attention (2025)
- Sparse-to-Dense: A Free Lunch for Lossless Acceleration of Video Understanding in LLMs (2025)
- FireQ: Fast INT4-FP8 Kernel and RoPE-aware Quantization for LLM Inference Acceleration (2025)
- Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models (2025)
- Quantization Error Propagation: Revisiting Layer-Wise Post-Training Quantization (2025)
- Streamline Without Sacrifice -- Squeeze out Computation Redundancy in LMM (2025)
- VORTA: Efficient Video Diffusion via Routing Sparse Attention (2025)