
Quantization made by Richard Erkhov.

Github

Discord

Request more models

crow-1b-attempt1 - bnb 4bits
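A minimal sketch of loading a bitsandbytes 4-bit checkpoint with transformers; the repo id below is a placeholder (substitute the actual quantized repo on the Hub), and bitsandbytes plus a CUDA-capable GPU are assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/crow-1b-attempt1-4bits"  # placeholder id; use the actual repo name

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# A checkpoint saved in bnb 4-bit carries its quantization config, so a plain
# from_pretrained with device_map="auto" (and bitsandbytes installed) is usually enough.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```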

Original model description:

license: apache-2.0
datasets:
- euclaise/SuperMC
- euclaise/prm800k_preferences
model-index:
- name: crow-1b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 25.51
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 25.87
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.8
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 48.28
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 49.41
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.83
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=euclaise/crow-1b
      name: Open LLM Leaderboard

Experiments in large-scale small-scale preference learning.

This one was a failure: it benchmarks horribly, despite responding okay to trivia questions in testing.

falcon-rw-1b trained with PRO (Preference Ranking Optimization, see https://arxiv.org/abs/2306.17492) on SuperMC and PRM800K (stage 1 only) for 3 epochs, using my supertrainer2000 framework.
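For intuition, PRO optimizes a list-wise ranking objective over a set of responses ordered from best to worst. A minimal PyTorch sketch of that idea follows; it is not the supertrainer2000 implementation, the use of length-normalized log-likelihoods as scores and of beta as a fixed scale are assumptions, and the paper's auxiliary SFT term on the top response is omitted:

```python
import torch
import torch.nn.functional as F

def pro_ranking_loss(seq_logprobs: torch.Tensor, beta: float = 4.0) -> torch.Tensor:
    """List-wise PRO-style ranking loss (sketch only, hypothetical helper).

    seq_logprobs: length-normalized log-likelihoods of each candidate response
    under the policy, ordered best-to-worst, shape (n,).
    """
    scores = beta * seq_logprobs
    loss = scores.new_zeros(())
    # At each rank k, the k-th (better) response should win a softmax taken over
    # itself and every response ranked below it.
    for k in range(scores.shape[0] - 1):
        loss = loss - F.log_softmax(scores[k:], dim=0)[0]
    return loss
```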

This is an experimental model.

Benchmarks coming soon.

Hyperparameters (a rough training-loop sketch follows this list):

  • AdamW, weight decay of 0.01, otherwise default hyperparams
  • Maximum LR of 1e-5
  • Cosine schedule with a warmup of 5400 steps
  • Batch size of 4 (2 real x 2 accumulated)
  • Maximum of 5 epochs, early stopping (visual observation), stopped after 3
  • Gradient clipping norm value of 1.0
  • PRO beta of 4
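Taken together, the settings above correspond roughly to the following PyTorch/transformers setup. This is a sketch only: `model`, `dataloader`, `compute_loss`, and `total_steps` are placeholders, and supertrainer2000's internals may differ.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# AdamW with weight decay 0.01, peak LR 1e-5, cosine schedule with 5400 warmup steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=5400, num_training_steps=total_steps
)

accum_steps = 2  # micro-batch of 2, accumulated to an effective batch of 4
optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = compute_loss(batch) / accum_steps  # e.g. the ranking loss sketched earlier
    loss.backward()
    if (step + 1) % accum_steps == 0:
        # Gradient clipping at norm 1.0 before each optimizer step.
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
```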

Training prompt format (an inference sketch follows the template):

### Query
[insert instruction here]

### Answer
[insert response here]
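At inference time, the same template can be applied to the query and cut off before the response. A hypothetical example, reusing the tokenizer and model loaded earlier (the generation settings are illustrative):

```python
def format_prompt(instruction: str) -> str:
    # Mirrors the training template: query, blank line, then the answer header.
    return f"### Query\n{instruction}\n\n### Answer\n"

inputs = tokenizer(format_prompt("Who wrote The Raven?"), return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```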

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 29.12 |
| AI2 Reasoning Challenge (25-Shot) | 25.51 |
| HellaSwag (10-Shot)               | 25.87 |
| MMLU (5-Shot)                     | 24.80 |
| TruthfulQA (0-shot)               | 48.28 |
| Winogrande (5-shot)               | 49.41 |
| GSM8k (5-shot)                    | 0.83  |