SageAttention2++: A More Efficient Implementation of SageAttention2 (paper 2505.21136)
SageAttention3: Microscaling FP4 Attention for Inference and an Exploration of 8-Bit Training (paper 2505.11594)
Granite Code Models (collection): a series of code models trained by IBM, licensed under Apache 2.0. Both the base pretrained and the instruct models are released. 23 items, updated May 2.