---
dataset_info:
- config_name: en2de
  features:
  - name: path
    dtype: string
  - name: sentence
    dtype: float64
  - name: split
    dtype: string
  - name: lang
    dtype: string
  - name: task
    dtype: string
  - name: inst
    dtype: string
  - name: suffix
    dtype: string
  - name: st_system
    dtype: string
  - name: metric_score_xcomet-xl
    dtype: float64
  - name: metric_score_metricx-23-xl
    dtype: float64
  splits:
  - name: test
    num_bytes: 1150690
    num_examples: 3500
  - name: test_seamlv2
    num_bytes: 161689
    num_examples: 500
  - name: test_seamlar
    num_bytes: 161599
    num_examples: 500
  - name: test_seammid
    num_bytes: 161887
    num_examples: 500
  - name: test_tfw2vlg
    num_bytes: 162851
    num_examples: 500
  - name: test_tfmidmc
    num_bytes: 173183
    num_examples: 500
  - name: test_tfsmlmc
    num_bytes: 165835
    num_examples: 500
  - name: test_tfsmlcv
    num_bytes: 163646
    num_examples: 500
  download_size: 569246
  dataset_size: 2301380
- config_name: es2en
  features:
  - name: path
    dtype: string
  - name: sentence
    dtype: float64
  - name: split
    dtype: string
  - name: lang
    dtype: string
  - name: task
    dtype: string
  - name: inst
    dtype: string
  - name: suffix
    dtype: string
  - name: st_system
    dtype: string
  - name: metric_score_xcomet-xl
    dtype: float64
  - name: metric_score_metricx-23-xl
    dtype: float64
  splits:
  - name: test
    num_bytes: 1128742
    num_examples: 3500
  - name: test_whsplv3
    num_bytes: 160913
    num_examples: 500
  - name: test_whsplv2
    num_bytes: 159492
    num_examples: 500
  - name: test_whsplar
    num_bytes: 157929
    num_examples: 500
  - name: test_whspmid
    num_bytes: 158335
    num_examples: 500
  - name: test_whspsml
    num_bytes: 158008
    num_examples: 500
  - name: test_whspbas
    num_bytes: 163261
    num_examples: 500
  - name: test_whsptny
    num_bytes: 170804
    num_examples: 500
  download_size: 547013
  dataset_size: 2257484
configs:
- config_name: en2de
  data_files:
  - split: test
    path: en2de/test-*
  - split: test_seamlv2
    path: en2de/test_seamlv2-*
  - split: test_seamlar
    path: en2de/test_seamlar-*
  - split: test_seammid
    path: en2de/test_seammid-*
  - split: test_tfw2vlg
    path: en2de/test_tfw2vlg-*
  - split: test_tfmidmc
    path: en2de/test_tfmidmc-*
  - split: test_tfsmlmc
    path: en2de/test_tfsmlmc-*
  - split: test_tfsmlcv
    path: en2de/test_tfsmlcv-*
- config_name: es2en
  data_files:
  - split: test
    path: es2en/test-*
  - split: test_whsplv3
    path: es2en/test_whsplv3-*
  - split: test_whsplv2
    path: es2en/test_whsplv2-*
  - split: test_whsplar
    path: es2en/test_whsplar-*
  - split: test_whspmid
    path: es2en/test_whspmid-*
  - split: test_whspsml
    path: es2en/test_whspsml-*
  - split: test_whspbas
    path: es2en/test_whspbas-*
  - split: test_whsptny
    path: es2en/test_whsptny-*
license: mit
language:
- de
- es
- en
---


# [SpeechQE: Estimating the Quality of Direct Speech Translation](https://aclanthology.org/2024.emnlp-main.1218)
This is a benchmark and training corpus for the task of quality estimation for speech translation (SpeechQE).

We subsample about 80k segments from the CoVoST2 training set and 500 segments each from its dev and test sets, then run seven different direct ST models to generate the ST hypotheses.
The `test` split therefore consists of 3,500 instances (500 × 7). We also provide a separate split for each translation model (see the loading sketch below).
*(We provide the `test` split first; the training corpus will be released later. If you want it sooner, please do not hesitate to ping me at hjhan@umd.edu!)*
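
For reference, here is a minimal sketch of loading the benchmark with 🤗 Datasets (split names come from the dataset config above; judging by its name, `test_seamlv2` is presumably the subset produced by a SeamlessM4T-v2 system):
```python
import datasets

# Full en2de test split: 3,500 rows (500 per direct ST system).
bench = datasets.load_dataset("h-j-han/SpeechQE-CoVoST2", "en2de", split="test")

# Subset produced by a single ST system, e.g. test_seamlv2.
subset = datasets.load_dataset(
    "h-j-han/SpeechQE-CoVoST2", "en2de", split="test_seamlv2"
)
print(bench)  # features include path, sentence, inst, st_system, and metric scores
```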

## E2E Model Trained with SpeechQE-CoVoST2

| Task | E2E Model | Trained Domain |
|---|---|---|
| SpeechQE for English-to-German Speech Translation | [h-j-han/SpeechQE-TowerInstruct-7B-en2de](https://huggingface.co/h-j-han/SpeechQE-TowerInstruct-7B-en2de) | CoVoST2 |
| SpeechQE for Spanish-to-English Speech Translation | [h-j-han/SpeechQE-TowerInstruct-7B-es2en](https://huggingface.co/h-j-han/SpeechQE-TowerInstruct-7B-es2en) | CoVoST2 |


## Setup
We provide code in our GitHub repo: https://github.com/h-j-han/SpeechQE
```bash
$ git clone https://github.com/h-j-han/SpeechQE.git
$ cd SpeechQE
```
```bash
$ conda create -n speechqe python=3.11 pytorch=2.0.1 pytorch-cuda=11.7 torchvision torchaudio -c pytorch -c nvidia
$ conda activate speechqe
$ pip install -r requirements.txt
```

## Download Audio Data
Download the audio data from Common Voice. Here, we use `mozilla-foundation/common_voice_4_0`; the `es` config provides the Spanish source audio for the es2en benchmark.
```python
import datasets

# Spanish source audio for the es2en benchmark.
cv4es = datasets.load_dataset(
    "mozilla-foundation/common_voice_4_0", "es", cache_dir="path/to/cv4/download",
)
```
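For the en2de benchmark the source speech is English, so download the `en` config instead (a sketch assuming the same Common Voice version is used for both directions):
```python
# English source audio for the en2de benchmark (same dataset version assumed).
cv4en = datasets.load_dataset(
    "mozilla-foundation/common_voice_4_0", "en", cache_dir="path/to/cv4/download",
)
```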
## Evaluation with SpeechQE-CoVoST2
We provide the SpeechQE benchmark: [h-j-han/SpeechQE-CoVoST2](https://huggingface.co/datasets/h-j-han/SpeechQE-CoVoST2).
`BASE_AUDIO_PATH` is the path to the downloaded Common Voice dataset.
```bash
$ python speechqe/score_speechqe.py \
    --speechqe_model=h-j-han/SpeechQE-TowerInstruct-7B-es2en \
    --dataset_name=h-j-han/SpeechQE-CoVoST2 \
    --base_audio_path=$BASE_AUDIO_PATH \
    --dataset_config_name=es2en \
    --test_split_name=test
```
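
The benchmark also carries gold quality labels in the `metric_score_xcomet-xl` and `metric_score_metricx-23-xl` columns, so SpeechQE outputs can be evaluated by how well they correlate with those scores. A minimal, hypothetical sketch (assumes `scipy` is installed; the random `predictions` are placeholders for real model outputs, e.g. those produced by `speechqe/score_speechqe.py`):
```python
import random

import datasets
from scipy.stats import spearmanr

bench = datasets.load_dataset("h-j-han/SpeechQE-CoVoST2", "es2en", split="test")
gold = bench["metric_score_xcomet-xl"]

# Placeholder predictions; substitute one SpeechQE score per benchmark row.
predictions = [random.random() for _ in gold]

rho, _ = spearmanr(predictions, gold)
print(f"Spearman rho vs. xCOMET-XL: {rho:.3f}")
```
Note that MetricX-23 is an error metric (lower is better), so a good QE system should correlate negatively with `metric_score_metricx-23-xl`.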


## Reference
Please find details in [this EMNLP 2024 paper](https://aclanthology.org/2024.emnlp-main.1218):
```bibtex
@misc{han2024speechqe,
    title={SpeechQE: Estimating the Quality of Direct Speech Translation},
    author={HyoJung Han and Kevin Duh and Marine Carpuat},
    year={2024},
    eprint={2410.21485},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```