# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.sbert.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://www.sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
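The `Pooling` and `Normalize` stages above do the conceptual work of turning per-token BERT outputs into one sentence vector: average the non-padding token embeddings, then scale to unit length. A minimal pure-Python sketch of that pipeline (illustrative only, not the library's actual implementation, and using tiny 2-dimensional vectors instead of 384):

```python
import math

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, skipping padding positions (mask == 0),
    as the Pooling module does with pooling_mode_mean_tokens=True."""
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, value in enumerate(vec):
                summed[i] += value
    return [value / count for value in summed]

def l2_normalize(vec):
    """Scale a vector to unit length, as the Normalize() module does."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# Two real tokens plus one padding token that must be ignored.
tokens = [[1.0, 3.0], [3.0, 1.0], [9.9, 9.9]]
mask = [1, 1, 0]
sentence_embedding = l2_normalize(mean_pool(tokens, mask))
print(sentence_embedding)  # ≈ [0.7071, 0.7071]
```

Because of the final `Normalize()` step, every embedding this model emits has unit L2 norm.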
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tjohn327/scion-all-MiniLM-L6-v2")

# Run inference
sentences = [
    "How does SCION's time synchronization support duplicate detection and traffic monitoring?",
    'The document chunk details the COLIBRI system for bandwidth reservations within the SCION architecture, emphasizing its efficiency in managing end-to-end reservations (EERs) across on-path Autonomous Systems (ASes). COLIBRI employs symmetric cryptography for stateless verification of reservations, mitigating the need for per-source state. To counter replay attacks, a duplicate suppression mechanism is essential, ensuring that authenticated packets cannot be maliciously reused to exceed allocated bandwidth. Monitoring and policing systems are necessary to enforce bandwidth limits, allowing ASes to identify and manage misbehaving flows. While stateful flow monitoring, such as the token-bucket algorithm, provides precise measurements, it is limited to edge deployments due to state requirements. In contrast, probabilistic monitoring techniques like LOFT can operate effectively within the Internet core, accommodating a vast number of flows. Time synchronization is critical for coordinating bandwidth reservations across AS boundaries, facilitating accurate duplicate detection and traffic monitoring. SCION’s time synchronization mechanism ensures adequate synchronization levels for these operations. Overall, COLIBRI enables efficient short-term bandwidth reservations, crucial for maintaining the integrity of end-to-end communications in a highly dynamic Internet environment.',
    "SCION is a next-generation Internet architecture designed to enhance security, availability, isolation, and scalability. It enables end hosts to utilize multiple authenticated inter-domain paths to any destination, with each packet carrying its own specified forwarding path determined by the sender. SCION architecture incorporates isolation domains (ISDs), which consist of multiple Autonomous Systems (ASes) that agree on a trust root configuration (TRC) defining the roots of trust for validating bindings between names and public keys or addresses. Core ASes govern each ISD, providing inter-ISD connectivity and managing trust roots. The architecture delineates three types of AS relationships: core, provider-customer, and peer-peer, with core relations existing solely among core ASes. SCION addressing is structured as a tuple ⟨ISD number, AS number, local address⟩, where the ISD number identifies the ISD of the end host, the AS number identifies the host's AS, and the local address serves as the host's identifier within its AS, not utilized for inter-domain routing or forwarding. Path-construction, path-registration, and path-lookup procedures are integral to SCION's operational framework, facilitating efficient packet forwarding based on the specified paths.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
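The `model.similarity` call above computes cosine similarity between every pair of embeddings. As a hedged reminder of what that score is, here is a small self-contained sketch of the formula (pure Python, not the library's implementation); note that for unit-length vectors, as this model outputs, cosine similarity reduces to a plain dot product:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two unit-length toy vectors standing in for 384-dim embeddings.
a = [0.6, 0.8]
b = [0.8, 0.6]
print(cosine_similarity(a, b))                  # 0.96
print(sum(x * y for x, y in zip(a, b)))         # same value: dot product suffices here
```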
## Evaluation

### Metrics

#### Information Retrieval

- Dataset: `dev-evaluation`
- Evaluated with `InformationRetrievalEvaluator`
| Metric              | Value  |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.2033 |
| cosine_accuracy@3   | 0.4409 |
| cosine_accuracy@5   | 0.5369 |
| cosine_accuracy@10  | 0.6626 |
| cosine_precision@1  | 0.2033 |
| cosine_precision@3  | 0.147  |
| cosine_precision@5  | 0.1074 |
| cosine_precision@10 | 0.0663 |
| cosine_recall@1     | 0.2033 |
| cosine_recall@3     | 0.4409 |
| cosine_recall@5     | 0.5369 |
| cosine_recall@10    | 0.6626 |
| cosine_ndcg@10      | 0.4209 |
| cosine_mrr@10       | 0.3449 |
| cosine_map@100      | 0.3569 |
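Two of the rank-based metrics above can be pinned down with a short sketch. Accuracy@k is the fraction of queries whose first relevant document lands in the top k results; MRR@10 averages the reciprocal rank of that first hit, counting 0 when it falls outside the top 10. This is an illustrative stdlib-only sketch of the metric definitions, not the evaluator's code; the example ranks are made up:

```python
def accuracy_at_k(first_relevant_ranks, k):
    """Fraction of queries whose first relevant document appears in the top k.
    Ranks are 1-based; None means no relevant document was retrieved."""
    hits = sum(1 for r in first_relevant_ranks if r is not None and r <= k)
    return hits / len(first_relevant_ranks)

def mrr_at_k(first_relevant_ranks, k=10):
    """Mean reciprocal rank of the first relevant hit, 0 beyond the cutoff."""
    total = 0.0
    for rank in first_relevant_ranks:
        if rank is not None and rank <= k:
            total += 1.0 / rank
    return total / len(first_relevant_ranks)

# Hypothetical per-query ranks of the first relevant document.
ranks = [1, 3, None, 2]
print(accuracy_at_k(ranks, 3))  # 0.75
print(mrr_at_k(ranks))          # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.4583
```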
## Training Details

### Training Dataset

#### Unnamed Dataset
- Size: 21,376 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 1000 samples:

  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | min: 7 tokens<br>mean: 17.45 tokens<br>max: 30 tokens | min: 4 tokens<br>mean: 231.74 tokens<br>max: 256 tokens | min: 1.0<br>mean: 1.0<br>max: 1.0 |
- Samples:

  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | What are the advantages of using flow-volume targets over cash transfers in SCION agreements? | The document discusses optimization strategies for interconnection agreements in SCION, focusing on flow-volume targets and cash compensation mechanisms. The optimization problem involves determining the maximum customer demand for new path segments, denoted as ∆fmax P, and adjusting flow allowances f(a) and additional traffic ∆f(a) accordingly. Flow-volume targets provide predictability and enforceable limits, enhancing the likelihood of positive utility outcomes. Conversely, cash compensation agreements, defined by a payment π between Autonomous Systems (ASes), offer flexibility and can be concluded even when flow-volume targets yield zero solutions. The Nash Bargaining Solution is employed to determine the cash transfer that balances utilities uD(a) and uE(a) of the negotiating parties. Both methods face challenges due to private information regarding costs and pricing, which can lead to inefficiencies in negotiations. The document introduces a mechanism-assisted negotiation approac... | 1.0 |
  | What is the significance of the variable 'auth⇄a' in packet forwarding? | The document chunk presents a formal model for secure packet forwarding protocols within the SCION architecture, detailing the dispatch and handling of packets across internal and external channels. The functions `dispatch-inta`, `dispatch-intc`, `dispatch-exta`, and `dispatch-extc` manage the addition of packets to internal and external send queues based on their authorization status and historical context. The model defines packet structures (`PKTa`) that encapsulate the future path (`fut`), past path (`past`), and historical path (`hist`), with authorization checks against the set of authorized paths (`autha`) and their fragment closures (`auth⇄a`). The environment parameters include the set of compromised nodes (`Nattr`), which influences the security model by introducing potential adversarial actions. The state representation employs asynchronous channels for packet transmission, with the initial state having all channels empty. The model emphasizes the importance of maintaining i... | 1.0 |
  | How can SNC and COLIBRI SegRs be combined to enhance DRKey bootstrapping security? | The document chunk discusses the Protected DRKey Bootstrapping mechanism within the SCION architecture, emphasizing the integration of Service Network Controllers (SNC) and COLIBRI Segments (SegRs) to enhance the security of DRKey bootstrapping. This approach mitigates the denial-of-DRKey attack surface by ensuring that pre-shared DRKeys are securely established. The adversary model considers off-path adversaries capable of modifying, dropping, or injecting packets, while maintaining that the reservation path remains uncontested by adversaries. The document highlights the necessity of path stability in the underlying network architecture for effective bandwidth-reservation protocols. Various queuing disciplines, such as distributed weighted fair queuing (DWFQ) and rate-controlled priority queuing (PQ), are discussed for traffic isolation, with specific mechanisms to prevent starvation of lower-priority traffic. Control-plane traffic is rate-limited at infrastructure services and border... | 1.0 |
- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
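With `MultipleNegativesRankingLoss`, each `(sentence_0, sentence_1)` pair in a batch is a positive, and every other `sentence_1` in the same batch serves as an in-batch negative: the loss is the cross-entropy of a softmax over the scaled cosine similarities, pushing each query toward its own passage. A minimal pure-Python sketch of that computation (illustrative only, not the library's implementation; assumes unit-length embeddings so cosine similarity is a dot product, and uses the `scale=20.0` from the parameters above):

```python
import math

def mnr_loss(anchor_embs, positive_embs, scale=20.0):
    """Mean over anchors of -log softmax(scale * cos_sim)[true pair],
    where row i's positive is positive_embs[i] and the other rows
    act as in-batch negatives."""
    n = len(anchor_embs)
    total = 0.0
    for i in range(n):
        # Scaled similarity of anchor i against every positive in the batch.
        logits = [scale * sum(a * p for a, p in zip(anchor_embs[i], positive_embs[j]))
                  for j in range(n)]
        log_sum_exp = math.log(sum(math.exp(l) for l in logits))
        total += log_sum_exp - logits[i]  # -log softmax at the matching pair
    return total / n

anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[1.0, 0.0], [0.0, 1.0]]
print(mnr_loss(anchors, positives))  # near 0: each anchor matches only its own positive
```

Because every other in-batch passage is treated as a negative, larger batch sizes (here, 128) give the loss more negatives per step, which is one reason this loss pairs well with large batches.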
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 2
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
### Training Logs

| Epoch | Step | dev-evaluation_cosine_ndcg@10 |
|:-----:|:----:|:-----------------------------:|
| 1.0   | 84   | 0.4114                        |
| 2.0   | 168  | 0.4209                        |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```