Jacqueline Garrahan
committed
Note about context
src/about.py CHANGED (+2 -1)
@@ -52,11 +52,12 @@ A guide for running the above tasks using EleutherAI's lm-evaluation-harness is
 """
 
 EVALUATION_QUEUE_TEXT = """
-Note: The evaluation suite is only able to run on models available via
+Note: The evaluation suite is only able to run on models available via Hugging Face's Serverless Inference API. Unfortunately, that means the models available for execution are limited, but we are working to support more models in the future.
 
 ## In case of model failure
 If your model is displayed in the `FAILED` category, its execution stopped.
 Check that you can launch the EleutherAI harness on your model locally using the guide available on GitHub at [https://github.com/aiera-inc/aiera-benchmark-tasks](https://github.com/aiera-inc/aiera-benchmark-tasks).
+Models must be able to accommodate large context windows in order to run this evaluation.
 
 """
 
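For reference, launching the harness locally (as the failure note advises) might look something like the sketch below. The task name and model ID are hypothetical placeholders, not values from this repository; the actual task names come from the linked aiera-benchmark-tasks guide.

```shell
# Install EleutherAI's lm-evaluation-harness, then run it against your model.
# "aiera_example_task" and "your-org/your-model" are placeholders -- substitute
# the real task names and model ID from the aiera-benchmark-tasks guide.
pip install lm-eval
lm_eval --model hf \
  --model_args pretrained=your-org/your-model \
  --tasks aiera_example_task \
  --batch_size 8 \
  --output_path results/
```

A run that completes locally but fails in the queue usually points at the context-window limit the added note describes, since the evaluation prompts can be long.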