BGE-M3 ZH mMARCO/v2 Native Queries

This is a BGE-M3 model post-trained on the native Chinese queries from the mMARCO/v2 dataset.

The model was used in the SIGIR 2025 short paper: Lost in Transliteration: Bridging the Script Gap in Neural IR.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
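
A minimal retrieval sketch with the sentence-transformers library, loading the model by the repository id shown in this card; the Chinese query and passages below are illustrative examples, not taken from the training data.

```python
from sentence_transformers import SentenceTransformer

# Load the post-trained model from the Hugging Face Hub.
model = SentenceTransformer("andreaschari/bge-m3-ZH_MMARCO_NATIVE")

# Encode a Chinese query and candidate passages into 1024-dimensional vectors.
query = "什么是神经信息检索？"  # "What is neural information retrieval?"
passages = [
    "神经信息检索使用深度学习模型对查询和文档进行编码。",
    "巴黎是法国的首都。",
]

query_emb = model.encode(query)
passage_embs = model.encode(passages)

# Rank passages by cosine similarity (the model's similarity function).
scores = model.similarity(query_emb, passage_embs)
print(scores)
```

The first passage should score noticeably higher than the second, since cosine similarity over the dense embeddings is used directly for ranking.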

Training Details

Framework Versions

  • Python: 3.10.13
  • Sentence Transformers: 3.1.1
  • Transformers: 4.45.1
  • PyTorch: 2.4.1
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.20.3