Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0

Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0 is a multilingual sentence-transformers embedding model fine-tuned on e-commerce datasets and optimized for semantic similarity, search, classification, and retrieval tasks. It integrates domain-specific signals from millions of real-world queries, product descriptions, and user interactions. The model was fine-tuned from a distilled version of Alibaba-NLP/gte-multilingual-base using a Turkish-English translation-pair dataset.

Keynotes:

  • Optimized for e-commerce semantic search
  • Enhanced Turkish and multilingual query understanding
  • Supports query rephrasing and paraphrase mining (see the sketch after this list)
  • Robust for product tagging and attribute extraction
  • Suitable for clustering and product categorization
  • High performance on semantic textual similarity
  • 384-token input support
  • 768-dimensional dense vector outputs
  • Built-in cosine similarity for inference
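
As a quick illustration of the paraphrase-mining keynote, the snippet below runs the paraphrase_mining utility from Sentence Transformers over a handful of Turkish queries; a minimal sketch, not an official recipe, and the query strings are invented for illustration.

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import paraphrase_mining

model = SentenceTransformer(
    "Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0",
    trust_remote_code=True,
)

# Hypothetical e-commerce queries; rephrasings should pair up with high scores.
queries = [
    "kablosuz kulaklık",        # "wireless headphones"
    "bluetooth kulaklık",       # "bluetooth headphones"
    "erkek koşu ayakkabısı",    # "men's running shoes"
    "erkek spor ayakkabı",      # "men's sports shoes"
]

# paraphrase_mining returns (score, i, j) triples sorted by descending similarity.
for score, i, j in paraphrase_mining(model, queries):
    print(f"{score:.3f}  {queries[i]}  <->  {queries[j]}")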

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Model Size: ~305M parameters (F32 safetensors)
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Matryoshka Dimensions: 768, 512, 128 (see the sketch after this list)
  • Similarity Function: Cosine Similarity
  • Training Datasets:
    • Multilingual and Turkish search terms
    • Turkish instruction datasets
    • Turkish summarization datasets
    • Turkish e-commerce rephrase datasets
    • Turkish question-answer pairs
    • and more!
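
Because the model was trained with Matryoshka weights on 768, 512, and 128 dimensions, embeddings can be truncated to the smaller sizes for cheaper storage and search. A minimal sketch using the truncate_dim argument (the example strings are hypothetical):

from sentence_transformers import SentenceTransformer

# truncate_dim should be one of the trained Matryoshka dimensions: 768, 512, 128.
model = SentenceTransformer(
    "Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0",
    trust_remote_code=True,
    truncate_dim=128,
)

embeddings = model.encode(["kablosuz kulaklık", "bluetooth kulaklık"])
print(embeddings.shape)  # (2, 128)

# model.similarity still computes cosine similarity, which re-normalizes
# the truncated vectors internally.
print(model.similarity(embeddings, embeddings))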

Model Sources

  • Repository: https://huggingface.co/Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
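
For reference, the three modules above roughly correspond to the hand-rolled pipeline below; a sketch under the assumption that the custom NewModel loads via AutoModel with trust_remote_code=True and exposes a standard last_hidden_state:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

name = "Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, trust_remote_code=True)

# (0) Transformer: tokenize with the model's 384-token limit.
batch = tokenizer(["kablosuz kulaklık"], padding=True, truncation=True,
                  max_length=384, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state
# (1) Pooling: take the CLS token (pooling_mode_cls_token=True).
emb = hidden[:, 0]
# (2) Normalize: unit-length vectors so dot product equals cosine similarity.
emb = F.normalize(emb, p=2, dim=1)
print(emb.shape)  # torch.Size([1, 768])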

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
matryoshka_dim = 768
model = SentenceTransformer(
    "Trendyol/TY-ecomm-embed-multilingual-base-v1.2.0",
    trust_remote_code=True,
    truncate_dim=matryoshka_dim,
)
# Run inference on Turkish examples:
# "do you make 120x190?", "hello, 120 x 180 is available",
# "The product is not in our stock"
sentences = [
    '120x190 yapıyor musunuz',
    'merhaba 120 x 180 mevcüttür',
    'Ürün stoklarımızda bulunmamaktadır',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
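
The same embeddings drop straight into retrieval. A small sketch with the semantic_search utility, reusing the model loaded above; the product titles and query are invented for illustration:

from sentence_transformers.util import semantic_search

corpus = [
    "Çift kişilik baza başlığı 120x190",  # "double bed headboard 120x190"
    "Kablosuz bluetooth kulaklık",        # "wireless bluetooth headphones"
]
query_emb = model.encode(["120x190 başlık var mı"])  # "is a 120x190 headboard available?"
corpus_emb = model.encode(corpus)

# Returns, per query, the top_k corpus hits as {"corpus_id", "score"} dicts.
hits = semantic_search(query_emb, corpus_emb, top_k=2)
print(hits[0])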

Bias, Risks and Limitations

While this model is trained on e-commerce-related datasets, including multilingual and Turkish data, users should be aware of several limitations:

  • Domain bias: Performance may degrade for content outside the e-commerce or product-related domains, such as legal, medical, or highly technical texts.

  • Language coverage: Although multilingual data was included, the majority of the training data is Turkish.

  • Input length limitations: Inputs longer than the maximum sequence length (384 tokens) are truncated, which can drop critical context from long texts (see the chunking sketch after this list).

  • Spurious similarity: The model may assign high similarity scores to unrelated phrases that are lexically similar or frequently co-occur in the training data.
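
One common workaround for the 384-token limit is to split long texts into overlapping token windows, embed each window, and average the results. A minimal sketch using the model's own tokenizer; the window and stride values are illustrative assumptions, not tuned settings:

import numpy as np

def encode_long(model, text, window=350, stride=300):
    """Embed a long text by mean-pooling embeddings of overlapping windows."""
    ids = model.tokenizer.encode(text, add_special_tokens=False)
    if len(ids) <= window:
        return model.encode([text])[0]
    chunks = [model.tokenizer.decode(ids[i:i + window])
              for i in range(0, len(ids), stride)]
    emb = model.encode(chunks).mean(axis=0)
    return emb / np.linalg.norm(emb)  # re-normalize for cosine similarity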

Recommendations

  • Human Oversight: We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
  • Application-Specific Testing: Developers intending to use Trendyol embedding models should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s outputs may occasionally be biased or inaccurate.
  • Responsible Development and Deployment: It is the responsibility of developers and users of Trendyol embedding models to ensure they are applied ethically and safely. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences.

Training Details

  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "CachedMultipleNegativesSymmetricRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            128
        ],
        "matryoshka_weights": [
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
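
Expressed in code, that configuration corresponds to something like the sketch below; the base checkpoint shown is illustrative (the actual run started from a distilled gte-multilingual-base), and this is not the exact training script:

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Wrap the ranking loss so it is applied at each Matryoshka dimension.
base_loss = losses.CachedMultipleNegativesSymmetricRankingLoss(model)
loss = losses.MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 128],
    matryoshka_weights=[1, 1, 1],
    n_dims_per_step=-1,  # -1 trains every dimension at every step
)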
    

Training Hyperparameters

Non-Default Hyperparameters

  • overwrite_output_dir: True
  • eval_strategy: steps
  • per_device_train_batch_size: 2048
  • per_device_eval_batch_size: 128
  • learning_rate: 0.0005
  • num_train_epochs: 1
  • warmup_ratio: 0.01
  • fp16: True
  • ddp_timeout: 300000
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: True
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 2048
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.0005
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.01
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 300000
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.5.1
  • Datasets: 2.21.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}