- SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training — arXiv:2501.17161, published Jan 28, 2025
- You Do Not Fully Utilize Transformer's Representation Capacity — arXiv:2502.09245, published Feb 13, 2025
- Forgetting Transformer: Softmax Attention with a Forget Gate — arXiv:2503.02130, published Mar 2025