---
base_model:
- maywell/Qwen2-7B-Multilingual-RP
- thirdeyeai/marco-o1-uncensored
- HumanLLMs/Human-Like-Qwen2.5-7B-Instruct
- Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della merge method, with [thirdeyeai/marco-o1-uncensored](https://huggingface.co/thirdeyeai/marco-o1-uncensored) as the base.

### Models Merged

The following models were included in the merge:
* [maywell/Qwen2-7B-Multilingual-RP](https://huggingface.co/maywell/Qwen2-7B-Multilingual-RP)
* [HumanLLMs/Human-Like-Qwen2.5-7B-Instruct](https://huggingface.co/HumanLLMs/Human-Like-Qwen2.5-7B-Instruct)
* [Orion-zhen/Qwen2.5-7B-Instruct-Uncensored](https://huggingface.co/Orion-zhen/Qwen2.5-7B-Instruct-Uncensored)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Orion-zhen/Qwen2.5-7B-Instruct-Uncensored
    parameters:
      density: 0.5
      weight: 0.5
  - model: HumanLLMs/Human-Like-Qwen2.5-7B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
  - model: maywell/Qwen2-7B-Multilingual-RP
    parameters:
      density: 0.5
      weight: 0.5
merge_method: della
base_model: thirdeyeai/marco-o1-uncensored
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
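
As a rough sketch of how a configuration like this can be run (not spelled out in the original card), the YAML above can be passed to mergekit's Python API; the config filename and output directory below are placeholders:

```python
# Sketch: run the merge configuration above through mergekit's Python API.
# CONFIG_YML and OUTPUT_PATH are placeholder paths, not values from the card.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "della-merge.yaml"   # the YAML configuration above, saved to disk
OUTPUT_PATH = "./merged-model"    # directory where the merged weights are written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

The `mergekit-yaml` command-line tool can be used instead of the Python API for the same purpose (e.g. `mergekit-yaml della-merge.yaml ./merged-model --cuda`), with the same placeholder paths.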
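
Since the card lists `library_name: transformers`, a minimal loading sketch follows; the repository id is a placeholder, as the card does not name the published repo:

```python
# Sketch: load and prompt the merged model with transformers.
# "your-username/merged-model" is a placeholder repo id, not the actual model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```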