---
base_model:
  - BlinkDL/rwkv-7-pile
datasets:
  - EleutherAI/the_pile_deduplicated
language:
  - en
license: apache-2.0
metrics:
  - accuracy
pipeline_tag: text-generation
library_name: transformers
---

rwkv7-421M-pile

This is the RWKV-7 model in the flash-linear-attention format.

Model Details

Model Description

  • Developed by: Bo Peng, Yu Zhang, Songlin Yang, Ruichong Zhang
  • Funded by: RWKV Project (under the LF AI & Data Foundation)
  • Model type: RWKV7
  • Language(s) (NLP): English
  • License: Apache-2.0
  • Parameter count: 421M
  • Tokenizer: GPT-NeoX 20B tokenizer

Model Sources

Uses

Install flash-linear-attention and the latest version of transformers before using this model:

pip install git+https://github.com/fla-org/flash-linear-attention
pip install 'transformers>=4.48.0'

Direct Use

You can use this model just like any other Hugging Face model:

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained('fla-hub/rwkv7-421M-pile', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('fla-hub/rwkv7-421M-pile', trust_remote_code=True)
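Once loaded, text generation works through the standard generate API. The following is a minimal sketch continuing from the snippet above; the prompt and sampling settings are illustrative, not taken from the model card:

import torch

# Illustrative prompt and sampling settings (not from the model card).
prompt = "Deep learning is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.8,
        top_p=0.9,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))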

Training Details

Training Data

This model was trained on the deduplicated Pile for a total of 332 billion tokens.

Training Hyperparameters

  • Training regime: bfloat16 mixed precision; learning rate decayed from 8e-4 to 3e-5 on a cosine schedule; weight decay 0.1; batch size 8×30×4096 tokens (see the sketch below)
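For concreteness, the cosine decay above can be written as a small function. Only the peak and final learning rates come from the listed hyperparameters; the total step count is hypothetical:

import math

def cosine_lr(step, total_steps, lr_max=8e-4, lr_min=3e-5):
    # Decays from lr_max at step 0 to lr_min at total_steps,
    # matching the "8e-4 to 3e-5 cosine decay" regime above.
    progress = min(step / total_steps, 1.0)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))

For example, cosine_lr(0, 10_000) returns 8e-4 and cosine_lr(10_000, 10_000) returns 3e-5.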

Evaluation

Metrics

lambada_openai: ppl 7.21, acc 57.9%
piqa: acc 69.2%
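These are standard lm-evaluation-harness tasks. A sketch of how such numbers could be reproduced with the harness's Python API, assuming lm-eval >= 0.4 is installed (the batch size here is arbitrary):

from lm_eval import simple_evaluate
from lm_eval.models.huggingface import HFLM

# Wrap the Hugging Face checkpoint for the harness; trust_remote_code is
# required because RWKV-7 ships custom modeling code.
lm = HFLM(pretrained="fla-hub/rwkv7-421M-pile", trust_remote_code=True, batch_size=8)
results = simple_evaluate(model=lm, tasks=["lambada_openai", "piqa"])
print(results["results"])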

FAQ

Q: The safetensors metadata is None.

A: Upgrade transformers to >= 4.48.0: pip install 'transformers>=4.48.0'
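To confirm the installed version meets the requirement:

import transformers
print(transformers.__version__)  # should be 4.48.0 or later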