SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
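
Because the final Normalize() module L2-normalizes each embedding, the dot product of two embeddings equals their cosine similarity. A minimal sketch that checks the properties listed above at load time, assuming only that the model ID from the usage section below resolves on the Hub:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384

# Each embedding is unit length due to the Normalize() module.
embedding = model.encode(["a test sentence"])[0]
print(np.linalg.norm(embedding))  # ~1.0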

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")
# Run inference
sentences = [
    'How many active ASes are reported as of the CIDR report mentioned in the document?',
    '<title>Pervasive Internet-Wide Low-Latency Authentication</title>\n<section>C. AS as Opportunistically Trusted Entity</section>\n<content>\nEach entity in the Internet is part of at least one AS, which is under the control of a single administrative entity. This facilitates providing a common service that authenticates endpoints (e.g., using a challenge-response protocol or preinstalled keys and certificates) and issues certificates. Another advantage is the typically close relationship between an endpoint and its AS, which allows for a stronger leverage in case of misbehavior. Since it is infeasible for an endpoint to authenticate each AS by itself (there are ∼71 000 active ASes according to the CIDR report  [4] ), RPKI is used as a trust anchor to authenticate ASes. RPKI resource issuers assign an AS a set of IP address prefixes that this AS is allowed to originate. An AS then issues short-lived certificates for its authorized IP address ranges.\n</content>',
    '<title>Unknown Title</title>\n<section>5.1 Paths emission per unit of traffic</section>\n<content>\nThe reason is that the number of BGP paths is less than 5 for most AS pairs. This figure also suggests that the 5-greenest paths average emission differs from the greenest path emission and the n-greenest paths average emission for both beaconing algorithms. However, for every percentile, this difference in SCI-GIB is about 3 times less than the one in SCI-BCE. This means that the 5-greenest paths average emission in SCI-GIB is much closer to the greenest path emission than SCI-BCE. Also, for every percentile, the difference between the 5-greenest paths average emissions of the two different beaconing algorithms is 2 times more than the difference between their greenest path emissions. From both of these observations, we conclude that SCI-GIB is better at finding the greenest set of paths\n</content>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
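
For the semantic-search use case mentioned at the top of this card, a corpus can be encoded once and then queried with util.semantic_search. A minimal sketch; the passages and query below are hypothetical stand-ins:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")

# Hypothetical corpus; any list of passages works the same way.
corpus = [
    "SCION is a path-aware inter-domain network architecture.",
    "RPKI binds IP prefixes to the ASes authorized to originate them.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("Which system authorizes prefix origination?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])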

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.6294
cosine_accuracy@3 0.8216
cosine_accuracy@5 0.8764
cosine_accuracy@10 0.9309
cosine_precision@1 0.6294
cosine_precision@3 0.2739
cosine_precision@5 0.1755
cosine_precision@10 0.0933
cosine_recall@1 0.6292
cosine_recall@3 0.821
cosine_recall@5 0.8759
cosine_recall@10 0.9306
cosine_ndcg@10 0.7828
cosine_mrr@10 0.7351
cosine_map@100 0.7379
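
These values correspond to the val-ir-eval evaluation that also appears in the training logs below. A minimal sketch of how such metrics can be computed with the library's InformationRetrievalEvaluator; the queries, corpus, and relevance judgments here are hypothetical stand-ins for the real evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tjohn327/scion-minilm-l6-v3")

# Hypothetical evaluation data: ids mapped to texts, plus relevance judgments.
queries = {"q1": "How does RPKI authenticate ASes?"}
corpus = {
    "d1": "RPKI resource issuers assign an AS a set of IP address prefixes.",
    "d2": "The 5-greenest paths average emission differs from the greenest path emission.",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="val-ir-eval")
results = evaluator(model)
print(results["val-ir-eval_cosine_ndcg@10"])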

Training Details

Training Dataset

Unnamed Dataset

  • Size: 46,618 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: type string; min: 2 tokens, mean: 21.15 tokens, max: 45 tokens
    • sentence_1: type string; min: 86 tokens, mean: 200.21 tokens, max: 256 tokens
  • Samples:
    • sentence_0: What specific snippet of the resolver-recv-answer-for-client rule is presented in the document?
      sentence_1: A Formal Framework for End-to-End DNS Resolution, 3.2.3 DNS Dynamics. This rule [resolver-recv-answer-for-client] has 74 LOC with nontrivial auxiliary functions and rule conditions. For the simplicity of our presentation, we only show the most important snippet with respect to positive caching. 5 The rule applies for a response that authoritatively answers a client query. More specifically, a temporary cache is created from the data contained in the response (line 8), which is then used for the lookup (line 10). Note that we cannot perform the lookup directly on the actual cache as case A of the resolver algorithm should only consider the data in the response, not in the cache. Also note that we look only at the data in the answer section (ANS, line 2) for the temporary positive cache as the entire rule is concerned with authoritative answers. Finally, we insert the data from the response into the actual cache and use this updated cache on th...
    • sentence_0: What is the relationship between early adopters and the potential security improvements mentioned for SBAS in the document?
      sentence_1: Creating a Secure Underlay for the Internet, 9 Related Work. While several challenges still exist when deploying SBAS in a production setting, our survey shows a potential path forward and our experimental results show promise that sizable security improvements can be achieved with even a small set of early adopters. We hope that SBAS revitalizes the quest for secure inter-domain routing.
    • sentence_0: How does the evaluation in this study focus on user-driven path control within SCION?
      sentence_1: Evaluation of SCION for User-driven Path Control: a Usability Study, ABSTRACT. The UPIN (User-driven Path verification and control in Inter-domain Networks) project aims to implement a way for users of a network to control how their data is traversing it. In this paper we investigate the possibilities and limitations of SCION for user-driven path control. Exploring several aspects of the performance of a SCION network allows us to define the most efficient path to assign to a user, following specific requests. We extensively analyze multiple paths, specifically focusing on latency, bandwidth and data loss, in SCIONLab, an experimental testbed and implementation of a SCION network. We gather data on these paths and store it in a database, that we then query to select the best path to give to a user to reach a destination, following their request on performance or devices to exclude for geographical or sovereignty reasons. Results indicate our so...
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
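
MultipleNegativesRankingLoss treats each (sentence_0, sentence_1) pair as a positive and uses the other sentence_1 entries in the batch as negatives, so larger batches supply more (and harder) negatives. A minimal sketch of constructing the loss with the parameters reported above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# scale=20.0 and cosine similarity match the parameters listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)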
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
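
These values map directly onto SentenceTransformerTrainingArguments. A minimal training sketch using them; the dataset below is a hypothetical stand-in for the 46,618-pair dataset described above, and the output path is illustrative:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = MultipleNegativesRankingLoss(model)

# Hypothetical pair data in the sentence_0/sentence_1 format described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How does RPKI authenticate ASes?"],
    "sentence_1": ["RPKI resource issuers assign an AS a set of IP address prefixes."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="scion-minilm-l6-v3",  # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    fp16=True,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # hypothetical stand-in for a held-out split
    loss=loss,
)
trainer.train()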

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss val-ir-eval_cosine_ndcg@10
0.1372 100 - 0.6950
0.2743 200 - 0.7313
0.4115 300 - 0.7443
0.5487 400 - 0.7573
0.6859 500 0.3862 0.7576
0.8230 600 - 0.7627
0.9602 700 - 0.7662
1.0 729 - 0.7709
1.0974 800 - 0.7705
1.2346 900 - 0.7718
1.3717 1000 0.2356 0.7747
1.5089 1100 - 0.7742
1.6461 1200 - 0.7759
1.7833 1300 - 0.7776
1.9204 1400 - 0.7807
2.0 1458 - 0.7815
2.0576 1500 0.1937 0.7789
2.1948 1600 - 0.7814
2.3320 1700 - 0.7819
2.4691 1800 - 0.7823
2.6063 1900 - 0.7827
2.7435 2000 0.1758 0.7828

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}