arXiv:2212.03533

Text Embeddings by Weakly-Supervised Contrastive Pre-training

Published on Dec 7, 2022
Abstract

This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks. The model is trained in a contrastive manner with weak supervision signals from our curated large-scale text pair dataset (called CCPairs). E5 can be readily used as a general-purpose embedding model for any task requiring a single-vector representation of texts, such as retrieval, clustering, and classification, achieving strong performance in both zero-shot and fine-tuned settings. We conduct extensive evaluations on 56 datasets from the BEIR and MTEB benchmarks. For zero-shot settings, E5 is the first model that outperforms the strong BM25 baseline on the BEIR retrieval benchmark without using any labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark, beating existing embedding models with 40x more parameters.
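The abstract describes E5 as a single-vector embedding model that can be used off the shelf for retrieval, clustering, and classification. The sketch below illustrates the typical embed-and-score workflow; it assumes the publicly released `intfloat/e5-base` checkpoint on Hugging Face and the `query:`/`passage:` prefix convention from that release, and is an illustration rather than the authors' evaluation code.

```python
# Minimal sketch: embed texts with an E5 checkpoint and score them by cosine similarity.
# Assumes the intfloat/e5-base checkpoint; swap in another E5 model name as needed.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "intfloat/e5-base"  # assumed released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

def embed(texts):
    # Tokenize, encode, average-pool token states with the attention mask,
    # then L2-normalize so dot products equal cosine similarities.
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()      # (B, T, 1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # mean pooling
    return F.normalize(pooled, p=2, dim=1)

# Queries and passages carry the prefixes used during pre-training.
queries = embed(["query: how do text embeddings work"])
passages = embed(["passage: E5 maps each text to a single dense vector.",
                  "passage: BM25 is a sparse lexical retrieval baseline."])
scores = queries @ passages.T   # (1, 2) similarity matrix
print(scores)
```

The returned vectors can be dropped into any dense retrieval, clustering, or classification pipeline; the contrastive pre-training on CCPairs is what makes a single pooled vector per text work well across these settings.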


Models citing this paper: 61


Datasets citing this paper: 0


Spaces citing this paper: 79

Collections including this paper: 1