NOTES: This model seems to be overly confident, which leads to hallucinations, and normalization also appears to break long-context chaining. I do not recommend this model.
Thanks to @Epiculous for the dope model, the help with LLM backends, and the support overall.
I'd also like to thank @kalomaze for the dope sampler additions to ST.
@SanjiWatsuki, thank you very much for the help and the model!
Quants here, thanks to @Lewdiculous: https://huggingface.co/Lewdiculous/Kunocchini-1.2-7b-longtext-GGUF-Imatrix
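For running those GGUF quants locally, here is a minimal llama-cpp-python sketch; the quant filename, context length, and prompt below are placeholders rather than values taken from the quant repo:

```python
# Minimal sketch for loading one of the GGUF quants with llama-cpp-python.
# The filename and n_ctx value are assumptions; use the quant you downloaded
# and a context length that fits your memory.
from llama_cpp import Llama

llm = Llama(
    model_path="Kunocchini-1.2-7b-longtext.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=16384,       # extended context window; adjust to available RAM/VRAM
    n_gpu_layers=-1,   # offload all layers if a GPU build of llama.cpp is installed
)

out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```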
This model was merged using the DARE TIES merge method, with Test157t/Kunocchini-1.1-7b as the base model.
Models Merged
The following models were included in the merge:
- NousResearch/Yarn-Mistral-7b-128k
- Test157t/Kunocchini-1.1-7b
Configuration
The following YAML configuration was used to produce this model:
merge_method: dare_ties
base_model: Test157t/Kunocchini-1.1-7b
parameters:
  normalize: true
models:
  - model: NousResearch/Yarn-Mistral-7b-128k
    parameters:
      weight: 1
  - model: Test157t/Kunocchini-1.1-7b
    parameters:
      weight: 1
dtype: float16
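To reproduce the merge, the configuration above can be run through mergekit. The sketch below uses mergekit's Python entry point, assuming mergekit is installed and the config is saved as config.yaml (a placeholder path); running the mergekit-yaml CLI on the same file is equivalent.

```python
# Sketch of applying the DARE TIES config above with mergekit (assumes `pip install mergekit`).
# "config.yaml" and the output directory are placeholder paths.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Kunocchini-1.2-7b-longtext",
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```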
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 59.57 |
| AI2 Reasoning Challenge (25-Shot) | 59.90 |
| HellaSwag (10-Shot)               | 82.51 |
| MMLU (5-Shot)                     | 63.05 |
| TruthfulQA (0-shot)               | 41.72 |
| Winogrande (5-shot)               | 77.35 |
| GSM8k (5-shot)                    | 32.90 |