Jacqueline Garrahan committed on
Fix the guide
src/about.py +1 -26
src/about.py CHANGED
@@ -54,34 +54,9 @@ A guide for running the above tasks using EleutherAI's lm-evaluation-harness is
EVALUATION_QUEUE_TEXT = """
Note: The evaluation suite is only able to run on models available via the Hugging Face Serverless Inference API.

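For a quick way to verify that a model is reachable through the Serverless Inference API before submitting, here is a minimal sketch assuming `huggingface_hub` is installed (the model id is a placeholder):

```python
from huggingface_hub import InferenceClient

# Probe the Serverless Inference API directly; if this call succeeds,
# the model is servable there (the first request may hit a cold start).
client = InferenceClient(model="your-org/your-model")
print(client.text_generation("Hello", max_new_tokens=5))
```

If this raises an availability error, the evaluation suite will not be able to run the model either.
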
-## Some good practices before submitting a model
-
-### 1) Make sure you can load your model and tokenizer using AutoClasses:
-```python
-from transformers import AutoConfig, AutoModel, AutoTokenizer
-config = AutoConfig.from_pretrained("your model name", revision="main")
-model = AutoModel.from_pretrained("your model name", revision="main")
-tokenizer = AutoTokenizer.from_pretrained("your model name", revision="main")
-```
-If this step fails, follow the error messages to debug your model before submitting it. It's likely your model has been improperly uploaded.
-
-Note: make sure your model is public!
-Note: if your model needs `trust_remote_code=True`, we do not support this option yet, but we are working on adding it. Stay posted!
-Note: The evals in this repo require large context windows.
-
-### 2) Convert your model weights to [safetensors](https://huggingface.co/docs/safetensors/index)
-It's a new format for storing weights which is safer and faster to load and use. It will also allow us to add the number of parameters of your model to the `Extended Viewer`!
-
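As a reference for step 2, a minimal sketch of one way to do the conversion, assuming the weights already load with `transformers` (names are placeholders):

```python
from transformers import AutoModel

# Load the existing (e.g. pickle-based) checkpoint...
model = AutoModel.from_pretrained("your model name", revision="main")

# ...and re-save it with safetensors serialization, which writes
# model.safetensors instead of a pickle-based pytorch_model.bin.
model.save_pretrained("your-model-safetensors", safe_serialization=True)
```

The converted folder can then be uploaded back to the Hub, for example with `model.push_to_hub(...)`.
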
-### 3) Make sure your model has an open license!
-This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗
-
-### 4) Fill up your model card
-When we add extra information about models to the leaderboard, it will be automatically taken from the model card.
-
## In case of model failure
If your model is displayed in the `FAILED` category, its execution stopped.
-
-If everything is done, check that you can launch the EleutherAI harness on your model locally, using the above command without modifications (you can add `--limit` to limit the number of examples per task). A guide for running Aiera's tasks using EleutherAI's lm-evaluation-harness is available on GitHub at [https://github.com/aiera-inc/aiera-benchmark-tasks](https://github.com/aiera-inc/aiera-benchmark-tasks).
+Check that you can launch the EleutherAI harness on your model locally using the guide available on GitHub at [https://github.com/aiera-inc/aiera-benchmark-tasks](https://github.com/aiera-inc/aiera-benchmark-tasks).

"""