nielsr (HF Staff) committed · verified
Commit 93a4df3 · Parent: 1fbecde

Add model card


This PR adds a model card for the paper [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).

It adds the Apache 2.0 license, declares the Transformers library, sets the text-generation pipeline tag, and links to both the paper and the code repository.

Please review and merge this PR if everything looks good.

Files changed (1):
  1. README.md +9 -0
README.md ADDED
@@ -0,0 +1,9 @@
+ ---
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-generation
+ ---
+
+ This repository contains the RADLADS models presented in the paper [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).
+
+ More information can be found at the GitHub repository: https://github.com/recursal/RADLADS-paper