---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
extra_gated_prompt: Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manually review subscriptions.
extra_gated_fields:
  Name: text
  Country: country
  Institution: text
  Institution Email: text
  Please specify your academic use case: text
---

# xlm-roberta-large-pooled-cap-minor

## Model description

An `xlm-roberta-large` model fine-tuned on multilingual (English, Danish) training data labelled with [minor topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model

```python
from transformers import AutoTokenizer, pipeline

# The model uses the slow (sentencepiece) tokenizer of the base xlm-roberta-large model.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-minor",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="",  # placeholder: supply your Hugging Face access token
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```

### Gated access

Because access to the model is gated, you must pass the `token` parameter when loading it. In earlier versions of the `transformers` package, you may need to use the `use_auth_token` parameter instead.
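A minimal sketch of handling both parameter names, in case your code must run on older `transformers` releases as well (the 4.27.0 cutoff below is an assumption; check the changelog of your installed version, and substitute your own access token for the placeholder):

```python
from packaging import version

import transformers
from transformers import pipeline

hf_token = "<your-access-token>"  # placeholder: substitute your own token

# Newer transformers releases accept `token`; older ones expect
# `use_auth_token`. The exact cutoff version is an assumption.
if version.parse(transformers.__version__) >= version.parse("4.27.0"):
    auth_kwarg = {"token": hf_token}
else:
    auth_kwarg = {"use_auth_token": hf_token}

pipe = pipeline(
    model="poltextlab/xlm-roberta-large-pooled-cap-minor",
    task="text-classification",
    use_fast=False,
    **auth_kwarg,
)
```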
## Model performance

The model was evaluated on a held-out test set of 15,349 English examples (20% of the English data).

* Accuracy: **0.65**
* Weighted average F1-score: **0.64**

## Inference platform

This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool designed to simplify and speed up projects for comparative research.

## Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or through the [CAP Babel Machine](https://babel.poltextlab.com).

## Debugging and issues

This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.

If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should resolve the issue.
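If you load the model directly rather than through a pipeline, a minimal sketch of that workaround (the class name and keyword arguments follow the standard `transformers` API; the empty token is a placeholder):

```python
from transformers import AutoModelForSequenceClassification

# If from_pretrained() raises a RuntimeError about mismatched tensor sizes,
# ignore_mismatched_sizes=True lets loading proceed anyway.
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-pooled-cap-minor",
    ignore_mismatched_sizes=True,
    token="",  # placeholder: supply your Hugging Face access token
)
```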