Reproducing result on VoxCeleb1-O

#6 · opened by 607HF

When evaluating this model on VoxCeleb1-O, I get an EER of 0.83% at a threshold of 0.33. This isn't far off the 0.66% reported on HuggingFace or the 0.68% reported in the paper, but I still wonder where the deviation comes from. I am scoring trials with cosine similarity as implemented in PyTorch.
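For reference, this is roughly how I compute the scores and the EER. It's a minimal sketch, not the repo's code: trial-list loading and embedding extraction are omitted, `cosine_scores` and `compute_eer` are my own helper names, and I'm assuming one embedding tensor per trial side plus sklearn's `roc_curve` for the threshold sweep:

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.metrics import roc_curve

def cosine_scores(emb_a: torch.Tensor, emb_b: torch.Tensor) -> np.ndarray:
    # Cosine similarity for matched trial pairs; both inputs have shape (N, D).
    return F.cosine_similarity(emb_a, emb_b, dim=-1).cpu().numpy()

def compute_eer(scores: np.ndarray, labels: np.ndarray) -> tuple[float, float]:
    # labels: 1 = same-speaker trial, 0 = different-speaker trial.
    fpr, tpr, thresholds = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))  # operating point where FAR ~= FRR
    eer = (fpr[idx] + fnr[idx]) / 2.0
    return eer, thresholds[idx]

# Usage: eer, thr = compute_eer(cosine_scores(emb_a, emb_b), labels)
#        print(f"EER: {eer * 100:.2f}% at threshold {thr:.2f}")
```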

I switched to using the score calculation from verify_speakers rather than doing it myself (this required a code change, since that method normally returns only True or False, not the underlying score, which is needed to compute the EER). I now get a threshold of 0.67, quite close to the default of 0.70, but the EER is still 0.83%, so the deviation does not appear to come from my score calculation.
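For clarity, the change I mean looks roughly like the sketch below. The names are hypothetical (I don't know the repo's internals, so `model.get_embedding` and the cosine scoring are assumptions); the point is just that the decision and the raw score are both returned:

```python
import torch.nn.functional as F

# Hypothetical sketch of the patch: the actual verify_speakers returns only
# the thresholded True/False decision, so this variant also exposes the score.
# model.get_embedding is an assumed name for the repo's embedding call.
def verify_speakers_with_score(model, audio_a, audio_b, threshold: float = 0.70):
    emb_a = model.get_embedding(audio_a)
    emb_b = model.get_embedding(audio_b)
    score = F.cosine_similarity(emb_a, emb_b, dim=-1).item()
    return score >= threshold, score  # decision plus the score needed for EER
```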
