- SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
  Paper • 2411.10958 • Published • 56
- SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference
  Paper • 2502.18137 • Published • 57
- SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training
  Paper • 2505.11594 • Published • 67
- SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
  Paper • 2410.02367 • Published • 51
Jintao Zhang (jt-zhang)
AI & ML interests: Efficient ML
Recent Activity
- authored a paper 4 days ago: SageAttention2++: A More Efficient Implementation of SageAttention2
- updated a collection 4 days ago: efficient ml
- upvoted a paper 4 days ago: SageAttention2++: A More Efficient Implementation of SageAttention2
Organizations: None yet
Collections: 1
Models: 0 (none public yet)
Datasets: 0 (none public yet)