This is a merge of pre-trained language models created using mergekit.

# Merge Details

## Merge Method

This model was merged using the linear merge method, with Qwen/Qwen2-VL-2B-Instruct as the base.
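The linear method computes a weighted average of each parameter tensor across the input models; with `normalize: true`, the weights are rescaled to sum to 1 before averaging. The sketch below illustrates the idea on plain Python dicts — the function name and toy parameter values are invented for illustration, not mergekit's actual implementation.

```python
# Hypothetical sketch of a linear merge: a normalized weighted
# average of matching parameters across models.

def linear_merge(state_dicts, weights, normalize=True):
    """Average each named parameter across models by the given weights."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Two toy "models", each a single scalar parameter.
a = {"layer.weight": 1.0}
b = {"layer.weight": 3.0}
merged = linear_merge([a, b], weights=[0.5, 0.5])
print(merged["layer.weight"])  # 2.0
```

With real checkpoints the same averaging is applied tensor-by-tensor; mergekit handles loading, dtype casting (here `bfloat16`), and saving the result.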

## Models Merged

The following models were included in the merge:

* prithivMLmods/Blazer.1-2B-Vision
* prithivMLmods/Caption-Pro
* prithivMLmods/ChemQwen2-vL
* prithivMLmods/JSONify-Flux
* prithivMLmods/LatexMind-2B-Codec
* prithivMLmods/Omni-Reasoner-2B
* prithivMLmods/QvQ-Step-Tiny
* prithivMLmods/Qwen2-VL-OCR2-2B-Instruct
* prithivMLmods/Qwen2-VL-OCR-2B-Instruct
* prithivMLmods/Radiology-Infer-Mini
* Qwen/Qwen2-VL-2B

## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: prithivMLmods/Blazer.1-2B-Vision
  - model: prithivMLmods/Caption-Pro
  - model: prithivMLmods/ChemQwen2-vL
  - model: prithivMLmods/JSONify-Flux
  - model: prithivMLmods/LatexMind-2B-Codec
  - model: prithivMLmods/Omni-Reasoner-2B
  - model: prithivMLmods/QvQ-Step-Tiny
  - model: prithivMLmods/Qwen2-VL-OCR2-2B-Instruct
  - model: prithivMLmods/Qwen2-VL-OCR-2B-Instruct
  - model: prithivMLmods/Radiology-Infer-Mini
  - model: Qwen/Qwen2-VL-2B-Instruct
  - model: Qwen/Qwen2-VL-2B
merge_method: linear
base_model: Qwen/Qwen2-VL-2B-Instruct
parameters:
  weight: 0.5
  normalize: true
  int8_mask: true
dtype: bfloat16
```
Model size: 2.21B params (BF16, Safetensors)