---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:178
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-reranker-base
widget:
- source_sentence: >-
    What is the basis for the company's calculation of deferred tax assets and
    liabilities?
  sentences:
  - ' We could be subject to various penalties or restrictions on our ability to conduct our business, which could have a material and adverse impact on our business, operating results, and financial condition.'
  - ' The company''s calculation of deferred tax assets and liabilities is based on certain estimates and judgments and involves dealing with uncertainties in the application of complex tax laws.'
  - ' March 4, 2024.'
- source_sentence: >-
    What are some situations that may result in excess or obsolete inventory
    or excess product purchase commitments?
  sentences:
  - ' 20%'
  - ' NVIDIA''s common stock is traded on the Nasdaq Global Select Market under the symbol NVDA.'
  - ' Changes in business and economic conditions, changes in market conditions, sudden and significant decreases in demand for products, inventory obsolescence due to changing technology and customer requirements, new product introductions, failure to estimate customer demand properly, ordering in advance of historical lead-times, government regulations, and changes in future demand or increase in demand for competitive products.'
- source_sentence: >-
    What are the primary methods used by the company to protect its
    intellectual property?
  sentences:
  - ' The change resulted in an increase in operating income of $135 million and net income of $114 million after tax, or $0.05 per both basic and diluted share.'
  - ' Forfeitures are estimated based on historical experience and revised semi-annually if actual forfeitures differ from those estimates.'
  - ' The company relies primarily on a combination of patents, trademarks, trade secrets, employee and third-party nondisclosure agreements, and licensing arrangements to protect its intellectual property in the United States and internationally.'
- source_sentence: >-
    What are the potential consequences of an unfavorable outcome in the
    litigation and regulatory proceedings mentioned in the text?
  sentences:
  - ' The new licensing requirements apply to exports of certain products, including A100, A800, H100, H800, L4, L40, L40S, and RTX 4090, exceeding certain performance thresholds to China, Country Groups D1, D4, and D5, and to parties headquartered in or with an ultimate parent headquartered in Country Group D5, including China.'
  - ' Adverse rulings could occur, including monetary damages or fines, an injunction stopping the company from manufacturing or selling certain products, engaging in certain business practices, or requiring other remedies, such as compulsory licensing of patents.'
  - ' The main components of the NVIDIA accelerated computing platform include GPUs, DPUs, interconnects, and fully optimized AI and high-performance computing software stacks.'
- source_sentence: >-
    What are the potential risks associated with the company's acquisitions
    and strategic investments?
  sentences:
  - ' $7,280.'
  - ' The company could face significant consequences, including government enforcement actions, litigation, additional reporting requirements and/or oversight, bans on processing personal data, and orders to destroy or not use personal data, which could have a material adverse effect on its reputation, business, or financial condition.'
  - ' The potential risks include impairment of the company''s ability to grow its business, develop new products, or sell its products, as well as the possibility of regulatory conditions reducing the value of the acquisition, volatility in results, losses up to the value of the investment, and impairment losses due to the failure of the invested companies.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
model-index:
- name: SentenceTransformer based on BAAI/bge-reranker-base
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: bge base en
      type: bge-base-en
    metrics:
    - type: cosine_accuracy@1
      value: 0.0056179775280898875
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.05056179775280899
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 0.0898876404494382
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 0.16853932584269662
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.0056179775280898875
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.016853932584269662
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.017977528089887642
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.016853932584269662
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.0056179775280898875
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.05056179775280899
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 0.0898876404494382
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 0.16853932584269662
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.07282854323415827
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.044208132691278754
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.07494028485986497
      name: Cosine Map@100
    - type: dot_accuracy@1
      value: 0.016853932584269662
      name: Dot Accuracy@1
    - type: dot_accuracy@3
      value: 0.033707865168539325
      name: Dot Accuracy@3
    - type: dot_accuracy@5
      value: 0.08426966292134831
      name: Dot Accuracy@5
    - type: dot_accuracy@10
      value: 0.1853932584269663
      name: Dot Accuracy@10
    - type: dot_precision@1
      value: 0.016853932584269662
      name: Dot Precision@1
    - type: dot_precision@3
      value: 0.011235955056179775
      name: Dot Precision@3
    - type: dot_precision@5
      value: 0.016853932584269662
      name: Dot Precision@5
    - type: dot_precision@10
      value: 0.018539325842696634
      name: Dot Precision@10
    - type: dot_recall@1
      value: 0.016853932584269662
      name: Dot Recall@1
    - type: dot_recall@3
      value: 0.033707865168539325
      name: Dot Recall@3
    - type: dot_recall@5
      value: 0.08426966292134831
      name: Dot Recall@5
    - type: dot_recall@10
      value: 0.1853932584269663
      name: Dot Recall@10
    - type: dot_ndcg@10
      value: 0.07894273048552719
      name: Dot Ndcg@10
    - type: dot_mrr@10
      value: 0.04782637774210808
      name: Dot Mrr@10
    - type: dot_map@100
      value: 0.07569131465859923
      name: Dot Map@100
---
# SentenceTransformer based on BAAI/bge-reranker-base
This is a sentence-transformers model finetuned from BAAI/bge-reranker-base on the train dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-reranker-base
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset:
  - train
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
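The `Pooling` module above builds the sentence embedding by averaging token embeddings (`pooling_mode_mean_tokens: True`). As a minimal sketch of that step, assuming token embeddings and an attention mask shaped as in a standard Hugging Face transformer output:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum embeddings of real (non-padding) tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)       # number of real tokens per sentence
    return summed / counts                         # (batch, 768) sentence embeddings
```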
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("rezarahim/bge-finetuned-reranker")
# Run inference
sentences = [
    "What are the potential risks associated with the company's acquisitions and strategic investments?",
    " The potential risks include impairment of the company's ability to grow its business, develop new products, or sell its products, as well as the possibility of regulatory conditions reducing the value of the acquisition, volatility in results, losses up to the value of the investment, and impairment losses due to the failure of the invested companies.",
    ' $7,280.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
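Here `model.similarity` applies the similarity function configured for this model (cosine similarity, per the Model Description above), so the returned 3×3 matrix holds the pairwise cosine scores between the three embeddings.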
## Evaluation

### Metrics

#### Information Retrieval

- Dataset: `bge-base-en`
- Evaluated with `InformationRetrievalEvaluator`
| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.0056 |
| cosine_accuracy@3   | 0.0506 |
| cosine_accuracy@5   | 0.0899 |
| cosine_accuracy@10  | 0.1685 |
| cosine_precision@1  | 0.0056 |
| cosine_precision@3  | 0.0169 |
| cosine_precision@5  | 0.018  |
| cosine_precision@10 | 0.0169 |
| cosine_recall@1     | 0.0056 |
| cosine_recall@3     | 0.0506 |
| cosine_recall@5     | 0.0899 |
| cosine_recall@10    | 0.1685 |
| cosine_ndcg@10      | 0.0728 |
| cosine_mrr@10       | 0.0442 |
| cosine_map@100      | 0.0749 |
| dot_accuracy@1      | 0.0169 |
| dot_accuracy@3      | 0.0337 |
| dot_accuracy@5      | 0.0843 |
| dot_accuracy@10     | 0.1854 |
| dot_precision@1     | 0.0169 |
| dot_precision@3     | 0.0112 |
| dot_precision@5     | 0.0169 |
| dot_precision@10    | 0.0185 |
| dot_recall@1        | 0.0169 |
| dot_recall@3        | 0.0337 |
| dot_recall@5        | 0.0843 |
| dot_recall@10       | 0.1854 |
| dot_ndcg@10         | 0.0789 |
| dot_mrr@10          | 0.0478 |
| dot_map@100         | 0.0757 |
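The metrics above come from `InformationRetrievalEvaluator`. To score this model on your own retrieval data, a minimal sketch along these lines should work; the `queries`, `corpus`, and `relevant_docs` below are hypothetical placeholders, not the evaluation split used for this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rezarahim/bge-finetuned-reranker")

# Hypothetical toy data: replace with your own query/corpus IDs and texts.
queries = {"q1": "What is the basis for the company's calculation of deferred tax assets?"}
corpus = {
    "d1": "The company's calculation of deferred tax assets and liabilities is based on certain estimates and judgments.",
    "d2": "March 4, 2024.",
}
relevant_docs = {"q1": {"d1"}}  # which corpus documents are relevant to each query

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="bge-base-en",
)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```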
## Training Details

### Training Dataset

#### train

- Dataset: train
- Size: 178 training samples
- Columns: `anchor` and `positive`
- Approximate statistics based on the first 178 samples:

  |         | anchor                                                 | positive                                               |
  |:--------|:-------------------------------------------------------|:-------------------------------------------------------|
  | type    | string                                                  | string                                                  |
  | details | min: 12 tokens<br>mean: 23.55 tokens<br>max: 50 tokens  | min: 3 tokens<br>mean: 42.22 tokens<br>max: 135 tokens  |

- Samples:

  | anchor | positive |
  |:-------|:---------|
  | What is the publication date of the NVIDIA Corporation Annual Report 2024? | February 21st, 2024 |
  | What is the filing date of the 10-K report for NVIDIA Corporation in 2004? | The filing dates of the 10-K reports for NVIDIA Corporation in 2004 are May 20th, March 29th, and April 25th. |
  | What is the purpose of the section of the filing that requires the registrant to indicate whether it has submitted electronically every Interactive Data File required to be submitted during the preceding 12 months? | The purpose of this section is to comply with Rule 405 of Regulation S-T, which requires the registrant to submit electronic files for certain financial information. |

- Loss: `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
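For reference, a minimal sketch of constructing this loss with the parameters listed above; the base checkpoint here simply stands in for whichever model is being fine-tuned:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("BAAI/bge-reranker-base")

# With (anchor, positive) pairs, every other positive in the batch acts as an
# in-batch negative; scale and similarity_fct match the parameters above.
loss = MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=cos_sim)
```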
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 25
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
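These values map directly onto `SentenceTransformerTrainingArguments`. A minimal sketch, assuming a placeholder `output_dir` and adding a `save_strategy` (an assumption: `load_best_model_at_end=True` requires the save and eval strategies to match):

```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-finetuned-reranker",  # illustrative placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=25,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```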
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 25
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>
### Training Logs

| Epoch       | Step   | Training Loss | bge-base-en_dot_map@100 |
|:-----------:|:------:|:-------------:|:-----------------------:|
| 0           | 0      | -             | 0.0362                  |
| 0.7111      | 2      | -             | 0.0369                  |
| 1.7778      | 5      | -             | 0.0539                  |
| 2.8444      | 8      | -             | 0.0393                  |
| 3.5556      | 10     | 2.0824        | -                       |
| 3.9111      | 11     | -             | 0.0559                  |
| 4.9778      | 14     | -             | 0.0632                  |
| 5.6889      | 16     | -             | 0.08                    |
| 6.7556      | 19     | -             | 0.0692                  |
| 7.1111      | 20     | 1.2812        | -                       |
| 7.8222      | 22     | -             | 0.0627                  |
| 8.8889      | 25     | -             | 0.0623                  |
| 9.9556      | 28     | -             | 0.0692                  |
| **10.6667** | **30** | **1.0855**    | **0.0884**              |
| 11.7333     | 33     | -             | 0.0754                  |
| 12.8        | 36     | -             | 0.0607                  |
| 13.8667     | 39     | -             | 0.0725                  |
| 14.2222     | 40     | 0.8978        | -                       |
| 14.9333     | 42     | -             | 0.0747                  |
| 16.0        | 45     | -             | 0.0766                  |
| 16.7111     | 47     | -             | 0.0756                  |
| 17.7778     | 50     | 0.8563        | 0.0757                  |
- The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```