ModernBERT-base trained on GooAQ
This is a Cross Encoder model finetuned from answerdotai/ModernBERT-base using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: answerdotai/ModernBERT-base
- Maximum Sequence Length: 8192 tokens
- Number of Output Labels: 1 label
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-ModernBERT-base-gooaq-lambda")

# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
Evaluation
Metrics
Cross Encoder Reranking
- Dataset: gooaq-dev
- Evaluated with CrossEncoderRerankingEvaluator (see the sketch below) with these parameters: { "at_k": 10, "always_rerank_positives": false }
Metric | Value |
---|---|
map | 0.7164 (+0.1853) |
mrr@10 | 0.7148 (+0.1908) |
ndcg@10 | 0.7601 (+0.1689) |
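The gooaq-dev metrics above were computed with CrossEncoderRerankingEvaluator. The snippet below is a minimal sketch of how such an evaluation could be run on a tiny hypothetical sample set; the exact sample keys ("query", "positive", "documents") should be checked against the installed sentence-transformers version.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("tomaarsen/reranker-ModernBERT-base-gooaq-lambda")

# Hypothetical evaluation samples: each entry holds a query, its known positive
# answer(s), and the candidate documents to rerank (positives included).
samples = [
    {
        "query": "How many calories in an egg",
        "positive": [
            "There are on average between 55 and 80 calories in an egg depending on its size.",
        ],
        "documents": [
            "There are on average between 55 and 80 calories in an egg depending on its size.",
            "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
            "Most of the calories in an egg come from the yellow yolk in the center.",
        ],
    },
]

evaluator = CrossEncoderRerankingEvaluator(
    samples=samples,
    name="gooaq-dev",
    at_k=10,                        # as listed above
    always_rerank_positives=False,  # as listed above
)
results = evaluator(model)
print(results)  # keys such as "gooaq-dev_ndcg@10", depending on the library version
```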
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100, and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters: { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.4853 (-0.0042) | 0.3379 (+0.0769) | 0.5390 (+0.1194) |
mrr@10 | 0.4772 (-0.0003) | 0.5293 (+0.0294) | 0.5479 (+0.1212) |
ndcg@10 | 0.5514 (+0.0110) | 0.3714 (+0.0464) | 0.5941 (+0.0934) |
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator (see the sketch below) with these parameters: { "dataset_names": ["msmarco", "nfcorpus", "nq"], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4541 (+0.0640) |
mrr@10 | 0.5181 (+0.0501) |
ndcg@10 | 0.5056 (+0.0503) |
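The NanoBEIR numbers come from CrossEncoderNanoBEIREvaluator with the parameters shown above. A minimal sketch of running it (dataset loading details depend on the installed sentence-transformers version):

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-ModernBERT-base-gooaq-lambda")

# Rerank pre-retrieved candidates (rerank_k=100) for the three NanoBEIR subsets
# listed above; the evaluator fetches the Nano datasets on first use.
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # per-dataset metrics plus an aggregated mean, as in the table above
```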
Training Details
Training Dataset
Unnamed Dataset
- Size: 95,939 training samples
- Columns: question, answer, and labels
- Approximate statistics based on the first 1000 samples:

 | question | answer | labels |
---|---|---|---|
type | string | list | list |
details | min: 18 characters, mean: 43.5 characters, max: 101 characters | size: 6 elements | size: 6 elements |
- Samples:
  - question: can u get ip banned from discord?
    answer: ['Yes you very much can, infact its already done. When you ban a person its an IP ban (also an account ban) There are no ways to bypass it without a new account.', 'Yes, your account is banned if you see the “Your account has been suspended/terminated for violating the Terms of Service” message when logging in to Pokémon GO.', 'This means that Snap is identifying devices and not users. So if a user, after getting banned, tries to access Snapchat from a different account but the same device, then that account also gets banned automatically. “The jailbreaking ban is apparently actually a device ban.', "When you block someone on Discord, they won't be able to send you private messages, and will servers you share will hide their messages. If the person you blocked was on your Friends list, they'll be removed immediately.", "You will for sure get an e-mail telling you that you were banned. That error happens quite often to me. Just login again from the title screen and game on. It's a commo...
    labels: [1, 0, 0, 0, 0, ...]
  - question: what is the difference between methylphenidate cd and er?
    answer: ['Metadate CD is a once-a-day capsule with biphasic release; initially there is a rapid release of methylphenidate, then a continuous-release phase. Metadate ER, on the other hand, is a tablet given two to three times per day.', 'Irregular Heartbeat Risk Associated with Common ADHD Med. Children who take a common drug to treat attention-deficit/hyperactivity disorder may be at an increased risk for developing an irregular heartbeat. The drug, methylphenidate, is the active ingredient in Concerta, Daytrana and Ritalin.', "Vyvanse contains the drug lisdexamfetamine dimesylate, while Ritalin contains the drug methylphenidate. Both Vyvanse and Ritalin are used to treat ADHD symptoms such as poor focus, reduced impulse control, and hyperactivity. However, they're also prescribed to treat other conditions.", 'Tolerance develops to the side effects of Adderall IR and XR in five to seven days. Side effects that persist longer than one week can be quickly managed by lowering the dose or changin...
    labels: [1, 0, 0, 0, 0, ...]
  - question: who has the most championships in hockey?
    answer: ['Having lifted the trophy a total of 24 times, the Montreal Canadiens are the team with more Stanley Cup titles than any other franchise.', "['Ivy League – 46 National Championships.', 'Big Ten – 39 National Championships. ... ', 'SEC – 29 National Championships. ... ', 'ACC – 18 National Championships. ... ', 'Independents – 17 National Championships. ... ', 'Pac-12 – 15 National Championships. ... ', 'Big 12 – 11 National Championships. ... ']", 'Boston Celtics center Bill Russell holds the record for the most NBA championships won with 11 titles during his 13-year playing career.', 'Alabama can claim the most NCAA titles in the poll era, with only three of its 15 coming prior. With the 15th title — a win in the College Football Playoff in 2017, coach Nick Saban tied the legendary Bear Bryant with five championships recognized by the NCAA.', 'American football is the most popular sport to watch in the United States, followed by baseball, basketball, and ice hockey, which makes up th...
    labels: [1, 0, 0, 0, 0, ...]
- Loss: LambdaLoss (sketched below) with these parameters: { "weighting_scheme": "sentence_transformers.cross_encoder.losses.LambdaLoss.NDCGLoss2PPScheme", "k": null, "sigma": 1.0, "eps": 1e-10, "reduction_log": "binary", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": 16 }
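The listwise structure above (one question, a list of candidate answers, and binary labels) is what LambdaLoss consumes. The following is a minimal sketch with a tiny hypothetical stand-in for the GooAQ data, mirroring the loss parameters listed above; it is not the exact training script.

```python
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses.LambdaLoss import LambdaLoss, NDCGLoss2PPScheme

# Tiny hypothetical stand-in for the 95,939 GooAQ samples: each row pairs a
# question with a list of candidate answers and a matching list of 0/1 labels.
train_dataset = Dataset.from_dict({
    "question": ["how many calories in an egg"],
    "answer": [[
        "There are on average between 55 and 80 calories in an egg depending on its size.",
        "Egg whites are very low in calories and loaded with protein.",
        "Most of the calories in an egg come from the yolk.",
    ]],
    "labels": [[1, 0, 0]],
})

model = CrossEncoder("answerdotai/ModernBERT-base", num_labels=1)

# LambdaLoss with the NDCGLoss2++ weighting scheme and the parameters above;
# activation_fct is left at its default (torch.nn.Identity).
loss = LambdaLoss(
    model=model,
    weighting_scheme=NDCGLoss2PPScheme(),
    k=None,
    sigma=1.0,
    eps=1e-10,
    reduction_log="binary",
    mini_batch_size=16,
)
```

In the actual run, a dataset and loss like these would be passed to the trainer together with the hyperparameters listed below.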
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- dataloader_num_workers: 4
- load_best_model_at_end: True
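As a rough mapping, the non-default values above correspond to a CrossEncoderTrainingArguments configuration like the following. This is a sketch assuming the Cross Encoder trainer API of recent sentence-transformers releases; the output directory is a placeholder.

```python
from sentence_transformers.cross_encoder import CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="models/reranker-ModernBERT-base-gooaq-lambda",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,  # requires bf16-capable hardware
    dataloader_num_workers=4,
    load_best_model_at_end=True,
)
```

These arguments, together with the model, training dataset, loss, and evaluators, would then be passed to a CrossEncoderTrainer.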
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 64
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 4
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | gooaq-dev_ndcg@10 | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | 0.1318 (-0.4594) | 0.0314 (-0.5091) | 0.3145 (-0.0105) | 0.0444 (-0.4562) | 0.1301 (-0.3253) |
0.0007 | 1 | 2.1483 | - | - | - | - | - |
0.0667 | 100 | 2.0302 | - | - | - | - | - |
0.1333 | 200 | 1.0684 | - | - | - | - | - |
0.1667 | 250 | - | 0.7116 (+0.1204) | 0.4469 (-0.0935) | 0.3483 (+0.0233) | 0.6251 (+0.1244) | 0.4734 (+0.0181) |
0.2 | 300 | 0.6541 | - | - | - | - | - |
0.2667 | 400 | 0.5459 | - | - | - | - | - |
0.3333 | 500 | 0.5159 | 0.7425 (+0.1513) | 0.5219 (-0.0186) | 0.3722 (+0.0471) | 0.6300 (+0.1294) | 0.5080 (+0.0526) |
0.4 | 600 | 0.4852 | - | - | - | - | - |
0.4667 | 700 | 0.4655 | - | - | - | - | - |
0.5 | 750 | - | 0.7545 (+0.1633) | 0.5572 (+0.0167) | 0.3726 (+0.0476) | 0.6188 (+0.1182) | 0.5162 (+0.0608) |
0.5333 | 800 | 0.448 | - | - | - | - | - |
0.6 | 900 | 0.4283 | - | - | - | - | - |
0.6667 | 1000 | 0.4296 | 0.7582 (+0.1670) | 0.5540 (+0.0136) | 0.3723 (+0.0473) | 0.6142 (+0.1136) | 0.5135 (+0.0581) |
0.7333 | 1100 | 0.4237 | - | - | - | - | - |
0.8 | 1200 | 0.4165 | - | - | - | - | - |
0.8333 | 1250 | - | 0.7600 (+0.1687) | 0.5574 (+0.0169) | 0.3676 (+0.0426) | 0.5671 (+0.0665) | 0.4974 (+0.0420) |
0.8667 | 1300 | 0.4258 | - | - | - | - | - |
0.9333 | 1400 | 0.4192 | - | - | - | - | - |
1.0 | 1500 | 0.425 | 0.7601 (+0.1689) | 0.5514 (+0.0110) | 0.3714 (+0.0464) | 0.5941 (+0.0934) | 0.5056 (+0.0503) |
-1 | -1 | - | 0.7601 (+0.1689) | 0.5514 (+0.0110) | 0.3714 (+0.0464) | 0.5941 (+0.0934) | 0.5056 (+0.0503) |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.0
- Datasets: 2.21.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
LambdaLoss
@inproceedings{wang2018lambdaloss,
title={The lambdaloss framework for ranking metric optimization},
author={Wang, Xuanhui and Li, Cheng and Golbandi, Nadav and Bendersky, Michael and Najork, Marc},
booktitle={Proceedings of the 27th ACM international conference on information and knowledge management},
pages={1313--1322},
year={2018}
}