Commit 69b60da (parent: a7e9884): Update README.md

README.md CHANGED
```diff
@@ -24,6 +24,11 @@ Two versions of pretrained W2V2 models are available:
 - **LB1100/checkpoint_best.pt** pretrained using 1,100 hours of LB home recordings collected from 110 families of children under 5 years old
 - **LL4300/checkpoint_best.pt** pretrained using 1,100 hours of LB home recordings collected from 110 families plus 3,200 hours of LENA home recordings from 275 families of children under 5 years old
 
+Two pretrained ECAPA-TDNN speaker embedding models are available:
+- **ECAPA_TDNN_LB/embedding_model.ckpt** pretrained using 12 hours of labeled LB home recordings collected from 22 families of infants under 14 months old
+- **ECAPA_TDNN_LB_LENA/embedding_model.ckpt** pretrained using 12 hours of labeled LB home recordings collected from 22 families plus 18 hours of labeled LENA home recordings from 30 families of infants under 14 months old
+
+
 ## Model Sources
 For more information regarding this model, please check out our paper
 - **Paper [optional]:** [More Information Needed]

@@ -37,7 +42,7 @@ We develop our complete fine-tuning recipe using SpeechBrain toolkit available a
 ## Quick Start
 
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-If you wish to use the fairseq framework, the following code snippet provides two functions: loading our pretrained model and extracting
+If you wish to use the fairseq framework, the following code snippet provides two functions: loading our pretrained W2V2 model and extracting features.
 
 <pre><code>
 import torch
```
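The snippet in the diff is truncated after `import torch`. A minimal sketch of the two functions the Quick Start text describes, assuming a fairseq wav2vec 2.0 checkpoint and fairseq's `checkpoint_utils` API (the checkpoint path is a placeholder, not taken from the commit):

```python
import torch

def load_model(ckpt_path):
    # fairseq is imported lazily so the sketch only needs it when loading.
    from fairseq import checkpoint_utils
    # load_model_ensemble_and_task returns ([models], saved_cfg, task);
    # we keep the first (only) model and switch it to eval mode.
    models, cfg, task = checkpoint_utils.load_model_ensemble_and_task([ckpt_path])
    model = models[0]
    model.eval()
    return model

def extract_features(model, wav):
    # wav: (batch, num_samples) float tensor of 16 kHz mono audio.
    with torch.no_grad():
        # features_only=True skips the quantizer/contrastive head;
        # mask=False disables time/channel masking at inference.
        out = model(wav, features_only=True, mask=False)
    return out["x"]  # (batch, frames, hidden_dim) transformer features
```

This mirrors the usual fairseq wav2vec 2.0 feature-extraction pattern; exact keys and flags may differ across fairseq versions, so treat it as a starting point rather than the authors' exact recipe.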
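The ECAPA-TDNN embedding checkpoints added in this commit are SpeechBrain artifacts, so they could plausibly be loaded with SpeechBrain's `EncoderClassifier`. A hedged sketch, assuming the checkpoint directory also contains the `hyperparams.yaml` SpeechBrain expects (the directory name and function names are placeholders, not from the commit):

```python
import torch

def load_embedding_model(model_dir):
    # SpeechBrain resolves `source` to a local directory or a HF repo id;
    # the directory must hold embedding_model.ckpt plus hyperparams.yaml.
    from speechbrain.pretrained import EncoderClassifier
    return EncoderClassifier.from_hparams(source=model_dir)

def extract_embedding(classifier, wav):
    # wav: (batch, num_samples) tensor of 16 kHz audio.
    with torch.no_grad():
        emb = classifier.encode_batch(wav)  # (batch, 1, emb_dim)
    return emb.squeeze(1)
```

`encode_batch` is the standard SpeechBrain entry point for speaker embeddings; if the released directory lacks a hyperparams file, the checkpoint would instead need to be loaded into an ECAPA-TDNN module manually.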