Commit 4989262
Parent(s): 29579af
Update README.md

README.md CHANGED
@@ -26,9 +26,7 @@ Two versions of pretrained W2V2 models are available:
 - **LB1100/checkpoint_best.pt** pretrained on 1,100 hours of LB home recordings collected from 110 families of children under 5 years old
 - **LL4300/checkpoint_best.pt** pretrained on 1,100 hours of LB home recordings collected from 110 families + 3,200 hours of LENA home recordings from 275 families of children under 5 years old
 
-## Model Sources
-
-<!-- Provide the basic links for the model. -->
+## Model Sources
 For more information regarding this model, please check out our paper
 - **Paper [optional]:** [More Information Needed]
 
@@ -41,9 +39,8 @@ We develop the fine-tuning recipe using the SpeechBrain toolkit available at
 ## Quick Start [optional]
 
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-If you wish to use the fairseq framework, the following code snippet can be used to load
+If you wish to use the fairseq framework, the following code snippet can be used to load our pretrained model
 
-[More Information Needed]
 
 # Evaluation
 
@@ -56,7 +53,7 @@ We test 4 unlabeled datasets on unsupervised pretrained W2V2-base models:
 We then fine-tune the pretrained models on 11.7 hours of LB labeled home recordings; the F1 scores across the three tasks are
 
 [figure: F1 scores across the three tasks]
-For more details, please refer to our paper.
+For more details of experiments and results, please refer to our paper.
 
 # Citation
 
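The README leaves the fairseq loading snippet itself as [More Information Needed]. As a hedged sketch only (not the authors' actual code), loading a pretrained wav2vec 2.0 checkpoint via fairseq's `checkpoint_utils` might look like the following; the helper name `load_w2v2` is hypothetical, and the checkpoint path is the one named in the README:

```python
# Hypothetical sketch -- the repository marks this snippet as
# [More Information Needed]. Assumes `pip install fairseq` and that
# LL4300/checkpoint_best.pt has been downloaded locally.
def load_w2v2(checkpoint_path):
    """Load a pretrained wav2vec 2.0 model from a fairseq checkpoint."""
    # fairseq's checkpoint_utils returns (models, saved_cfg, task)
    from fairseq import checkpoint_utils

    models, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
        [checkpoint_path]
    )
    model = models[0]
    model.eval()  # inference mode for downstream feature extraction
    return model, task


# Usage (requires the downloaded checkpoint):
# model, task = load_w2v2("LL4300/checkpoint_best.pt")
```

`load_model_ensemble_and_task` is fairseq's standard entry point for restoring a checkpoint together with its task configuration; the returned model can then be used for feature extraction or fine-tuning.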