---
license: apache-2.0
tags:
  - StepLaw
  - causal-lm
language:
  - en
library_name: transformers
pipeline_tag: text-generation
model-index:
  - name: step2v2_0618_h1024_ffnh9552_numh16_numl8_lr5.524e-03_bs128_ti30517_mlr1e-5
    results: []
---

# StepLaw-N_268M-D_7.0B-LR5.524e-03-BS262144

**Wandb Model Name:** `step2v2_0618_h1024_ffnh9552_numh16_numl8_lr5.524e-03_bs128_ti30517_mlr1e-5`

This model is part of the **StepLaw-N_268M-D_7.0B** collection.

## Model Specifications

### Architecture

- **Hidden size (H):** 1024
- **Feed-forward network size (FFN):** 9552
- **Attention heads:** 16
- **Layers:** 8
- **Parameter count:** 268M
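These architecture values can be cross-checked against the checkpoint's configuration. A minimal sketch, assuming the remote config exposes the standard `transformers` attribute names (a custom config loaded via `trust_remote_code` may name these fields differently):

```python
from transformers import AutoConfig

# Attribute names below assume a standard transformers-style config;
# the StepLaw remote config may use different names.
config = AutoConfig.from_pretrained(
    "StepLaw/StepLaw-N_268M-D_7.0B-LR5.524e-03-BS262144",
    trust_remote_code=True,
)
print(config.hidden_size)          # expected: 1024
print(config.intermediate_size)    # expected: 9552 (FFN size)
print(config.num_attention_heads)  # expected: 16
print(config.num_hidden_layers)    # expected: 8
```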

### Training Parameters

- **Learning rate (lr):** 5.524e-03
- **Batch size (bs):** 262144 tokens
- **Training iterations:** 30517
- **Training tokens (D):** 8.0B
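These figures are internally consistent: 262144 tokens per step × 30517 steps ≈ 8.0B training tokens. The `bs128` in the wandb run name is presumably the batch size in sequences; 128 sequences × a 2048-token context would give the 262144 tokens per step listed here.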

## Model Description

StepLaw models are trained with various hyperparameter settings to enable research on scaling laws and hyperparameter optimization. This specific model was trained with learning rate 5.524e-03 and batch size 262144 for 30517 iterations, using a total of 8.0B training tokens.

## Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "StepLaw/StepLaw-N_268M-D_7.0B-LR5.524e-03-BS262144"

# trust_remote_code is required because the repo ships a custom model implementation
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Generate text (greedy decoding by default)
inputs = tokenizer("A long time ago in a galaxy far, far away", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
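Greedy decoding (the `generate` default above) can be repetitive with a small base model. A sampling variant, with `temperature` and `top_p` values chosen purely for illustration rather than tuned for this checkpoint:

```python
# Sampling-based generation; the temperature/top_p values are illustrative, not tuned.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```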