---
base_model:
- ReadyArt/The-Omega-Directive-L-70B-v1.0
- Tarek07/Legion-V2.1-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [ReadyArt/The-Omega-Directive-L-70B-v1.0](https://huggingface.co/ReadyArt/The-Omega-Directive-L-70B-v1.0) as the base model.

### Models Merged

The following model was also included in the merge:

* [Tarek07/Legion-V2.1-LLaMa-70B](https://huggingface.co/Tarek07/Legion-V2.1-LLaMa-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: dare_ties
base_model: ReadyArt/The-Omega-Directive-L-70B-v1.0
models:
  - model: ReadyArt/The-Omega-Directive-L-70B-v1.0
    parameters:
      weight: 0.6
  - model: Tarek07/Legion-V2.1-LLaMa-70B
    parameters:
      weight: 0.4
parameters:
  density: 0.3
tokenizer:
  source: union
chat_template: auto
```
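
With `dare_ties`, each model's delta from the base is randomly pruned to the configured `density` (here, roughly 30% of delta parameters are kept and rescaled) before TIES-style sign election combines the models at the given `weight`s. If you want to reproduce the merge locally, the sketch below uses mergekit's Python entry point; the `config.yaml` path and output directory are placeholders, and a recent mergekit version exposing `run_merge` is assumed (`mergekit-yaml config.yaml ./merged-model` is the CLI equivalent):

```python
# Minimal sketch of re-running this merge via mergekit's Python API.
# Assumes a recent mergekit release; paths below are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above, saved as config.yaml (placeholder path).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # write tokenizer files into the output dir
        lazy_unpickle=True,              # reduce peak RAM while reading shards
    ),
)
```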
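
Once merged (or when pulling this repo from the Hub), the result loads like any Llama-style causal LM through `transformers`. A minimal sketch; the repo id below is a placeholder for wherever this merge is hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/this-merge"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # 70B weights: expect multiple GPUs or offloading
    device_map="auto",
)

# `chat_template: auto` means the merged tokenizer should carry a chat template,
# so apply_chat_template can format the prompt.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```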