This model uses Llama 2 7B as a backbone; models fine-tuned on various Orca datasets were merged into it.
Three fine-tuned models were combined, with the highest merge weight given to the model with the best ARC and MMLU performance.
First: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k using the NEFTune method (see the sketch after this list).
Second: Llama 2 7B fine-tuned on the SlimOrca dataset.
Third: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k without NEFTune.
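For reference, below is a minimal sketch of how the NEFTune fine-tuning step could be reproduced with the TRL library's `SFTTrainer`, which exposes NEFTune through the `neftune_noise_alpha` argument. The noise alpha, text column name, and output path are assumptions, not the values used for this model.

```python
# Hedged sketch: NEFTune fine-tuning of Llama 2 7B on the
# beaugogh/openorca-multiplechoice-10k dataset via TRL's SFTTrainer.
# neftune_noise_alpha, dataset_text_field, and output_dir are assumed values.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("beaugogh/openorca-multiplechoice-10k", split="train")

config = SFTConfig(
    output_dir="llama2-7b-openorca-mc-neftune",  # assumed output path
    dataset_text_field="text",                   # assumed name of the text column
    neftune_noise_alpha=5,                       # enables NEFTune; 5 is an assumed alpha
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",
    train_dataset=dataset,
    args=config,
)
trainer.train()
```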
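The card does not specify the exact merge procedure, so the following is a hedged sketch of one plausible implementation: a plain weighted average of the three models' parameters, with the largest weight assigned to the model that scored best on ARC and MMLU. All paths and weight values are illustrative assumptions.

```python
# Hedged sketch: weighted parameter averaging of three same-architecture
# checkpoints. Paths and weights below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM

paths = ["./model-neftune", "./model-slimorca", "./model-openorca-mc"]  # hypothetical local paths
weights = [0.5, 0.25, 0.25]  # assumed; the best ARC/MMLU model gets the largest weight

# Load all three fine-tuned checkpoints (identical architecture, so keys match).
state_dicts = [
    AutoModelForCausalLM.from_pretrained(p, torch_dtype=torch.float32).state_dict()
    for p in paths
]

# Compute the weighted average of every parameter tensor.
merged_state = {
    key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
    for key in state_dicts[0]
}

# Write the averaged weights back into one model and save it.
base = AutoModelForCausalLM.from_pretrained(paths[0], torch_dtype=torch.float32)
base.load_state_dict(merged_state)
base.save_pretrained("./merged-llama2-7b-orca")
```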
We will add benchmark results once the official results are available.