---
license: openrail
language:
- en
metrics:
- f1
library_name: fairseq
pipeline_tag: audio-classification
---
# Model Card for wav2vec 2.0 (W2V2) Pretrained on LittleBeats and LENA Home Recordings


We explore the benefits of unsupervised pretraining of wav2vec 2.0 (W2V2) on large-scale unlabeled home recordings collected with LittleBeats (LB) and LENA (Language Environment Analysis) devices.
LittleBeats is a new infant wearable multi-modal device we developed that simultaneously records audio, infant movement, and heart-rate variability.
We use W2V2 to advance the LB audio pipeline so that it automatically provides reliable speaker diarization and vocalization classification labels for family members, including infants, parents, and siblings, at home.
We show that W2V2 pretrained on thousands of hours of unlabeled home audio outperforms the oracle W2V2 released by Facebook/Meta, which was pretrained on 960 hours of Librispeech, on automatic family audio analysis tasks.

For more details about LittleBeats, check out **https://littlebeats.hdfs.illinois.edu/**

## Model Sources
For more information about this model, please check out our paper
- [Towards Robust Family-Infant Audio Analysis Based on Unsupervised Pretraining of Wav2vec 2.0 on Large-Scale Unlabeled Family Audio](https://arxiv.org/abs/2305.12530)

  
## Model Description

Two pretrained W2V2 models, trained **using fairseq**, are available:

- **LB_1100/checkpoint_best.pt**: pretrained on 1,100 hours of LB home recordings collected from 110 families with children under 5 years old
- **LL_4300/checkpoint_best.pt**: pretrained on 1,100 hours of LB home recordings from 110 families plus 3,200 hours of LENA home recordings from 275 families, all with children under 5 years old

One W2V2 model fine-tuned on labeled LB and LENA data **using SpeechBrain** is available:
- **LL_4300_fine_tuned**: initialized from the LL_4300 checkpoint and fine-tuned on labeled LB and LENA home recordings plus labeled lab recordings, with data augmentation

Two pretrained ECAPA-TDNN speaker embedding models are available (a loading sketch follows this list):
- **ECAPA_TDNN_LB/embedding_model.ckpt**: pretrained on 12 hours of labeled LB home recordings collected from 22 families with infants under 14 months old
- **ECAPA_TDNN_LB_LENA/embedding_model.ckpt**: pretrained on 12 hours of labeled LB home recordings from 22 families plus 18 hours of labeled LENA home recordings from 30 families, all with infants under 14 months old
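
For reference, below is a minimal sketch of loading one of these embedding checkpoints with SpeechBrain. The 80-dimensional filterbank input and 192-dimensional embedding size are assumptions based on the standard SpeechBrain ECAPA-TDNN configuration, not values confirmed here; please consult the recipe in our GitHub repo for the exact hyperparameters.

<pre><code>
# Hedged sketch: assumes the checkpoint is a plain state_dict saved by the
# SpeechBrain checkpointer and that the model uses the standard SpeechBrain
# ECAPA-TDNN configuration (80-dim filterbanks, 192-dim embeddings).
import torch
from speechbrain.lobes.features import Fbank
from speechbrain.lobes.models.ECAPA_TDNN import ECAPA_TDNN

embedding_model = ECAPA_TDNN(input_size=80, lin_neurons=192)
ckpt = torch.load("your/path/to/ECAPA_TDNN_LB_LENA/embedding_model.ckpt", map_location="cpu")
embedding_model.load_state_dict(ckpt)
embedding_model.eval()

compute_fbank = Fbank(n_mels=80)
wav = torch.rand(1, 16000)              # 1 second of 16 kHz audio (placeholder)
with torch.no_grad():
    feats = compute_fbank(wav)          # B x T x 80 filterbank features
    emb = embedding_model(feats)        # B x 1 x 192 speaker embedding
print(emb.shape)
</code></pre>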


## Uses
**Our complete fine-tuning recipe, built with the SpeechBrain toolkit, is available at**

- **https://github.com/jialuli3/wav2vec_LittleBeats_LENA**


## Quick Start

To extract features from a pretrained W2V2 model, first install the fairseq and SpeechBrain frameworks:
  <pre><code>
    pip install fairseq
    pip install speechbrain
  </code></pre>
Next, download [fairseq_wav2vec.py](https://huggingface.co/lijialudew/wav2vec_LittleBeats_LENA/blob/main/fairseq_wav2vec.py) from this repo, along with the pretrained or fine-tuned model weights.
Then run the following sample code, which imports the **FairseqWav2Vec2** class.

  <pre><code>
  from fairseq_wav2vec import FairseqWav2Vec2
  import torch 
  inputs = torch.rand([10, 6000]) # input wav B x T
  save_path = "your/path/to/LL_4300/checkpoint_best.pt"
  # extract features from all transformer layers
  model = FairseqWav2Vec2(save_path) # Output all features from 12 transformer layers with shapes of 12 x B x T' x D
  # To extract features from a certain transformer layer
  # model = FairseqWav2Vec2(save_path, output_all_hiddens = False, tgt_layer = [1]) # Output features from the first transformer layer
  # To load the W2V2 model fine-tuned on LENA and LB audio data
  fine_tuned_path = "your/path/to/LL_4300_fine_tuned/save/CKPT+2022-11-26+14-06-17+00/wav2vec2.ckpt"
  model._load_sb_pretrained_w2v2_parameters(fine_tuned_path)
  # To extract wav2vec features
  outputs = model(inputs)
  print(outputs.shape)
  </code></pre>
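
The example above uses a random tensor as input; the sketch below shows the same feature extraction on an actual recording. The file name is a placeholder, and the resampling and downmixing steps reflect the fact that W2V2-base expects 16 kHz mono input.

<pre><code>
# Hedged sketch: "sample.wav" is a placeholder. W2V2-base expects 16 kHz mono
# audio, so we resample and downmix before feature extraction.
import torch
import torchaudio
from fairseq_wav2vec import FairseqWav2Vec2

model = FairseqWav2Vec2("your/path/to/LL_4300/checkpoint_best.pt")

audio, fs = torchaudio.load("sample.wav")                      # C x T
if fs != 16000:
    audio = torchaudio.functional.resample(audio, fs, 16000)
audio = audio.mean(dim=0, keepdim=True)                        # downmix to mono: 1 x T

with torch.no_grad():
    feats = model(audio)          # 12 x 1 x T' x D when all layers are returned
print(feats.shape)
</code></pre>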


<!-- previous comments If you wish to use fairseq framework, the following code snippet provides two functions of loading our pretrained W2V2 model and extracting features. 

  <pre><code>
  import torch
  import torch.nn.functional as F
  from torch import nn
  import fairseq
  import torchaudio
  
  def load_model(model_path, freeze=True):
      '''
      This function loads pretrained model using fairseq framework.
      Arguments
      ---------
      model_path : str
          Path and filename of the pretrained model
      freeze : bool (default: True)
          If True, the model is frozen with no parameter updates through training. 
      '''
      
      model,_,_ = fairseq.checkpoint_utils.load_model_ensemble_and_task([model_path])
      model = model[0]
  
      if freeze:
          model.eval()
          # Freeze parameters
          for param in model.parameters():
              param.requires_grad = False
      else:
          model.train()
          for param in model.parameters():
              param.requires_grad = True
  
      #remove unnecessary components
      model.quantizer = None
      model.project_q = None
      model.target_glu = None
      model.final_proj = None
  
      return model
    
  def extract_features(model, wav, input_norm=None, output_norm=True, tgt_layer=None, output_all_hiddens=False):
      '''
      This function extracts features from w2v2 model. The function extracts the last transformer layer 
      feature by default. It allows for extracting features from certain layer, or features from all layers
      Arguments
      ---------
      model : fairseq wav2vec
      wav : tensor
          audio wav for feature extraction
      input_norm : bool (default: None)
          If True, a layer_norm (affine) will be applied to the input waveform.
      output_norm : bool (default: True)
          If True, a layer_norm (affine) will be applied to the output obtained
          from the wav2vec model.
      tgt_layer : int (default: None)
          Target transformer layer features, 0-indexed.
      output_all_hiddens : bool (default: False)
          Whether to extract features from all layers. Need to set tgt_layer as None
      '''
    
      if input_norm:
          wav = F.layer_norm(wav, wav.shape)
  
      # Extract wav2vec output; keep the full result dict so that
      # 'layer_results' is still available when output_all_hiddens is True
      result = model.extract_features(wav, padding_mask=None, mask=False)
      out = result['x']
      if isinstance(tgt_layer, int):
          out = model.extract_features(wav, padding_mask=None, mask=False, layer=tgt_layer)['x']
      elif output_all_hiddens:
          model.layerdrop = 0
          features = []
          for layer_result in result['layer_results']:
              # layer_result[0] has shape T' x B x D; transpose to B x T' x D
              features.append(layer_result[0].transpose(0, 1))
          out = torch.stack(features)
  
      if output_norm:
          out = F.layer_norm(out, out.shape)
      return out
  
  model=load_model("your/path/to/LL_4300/checkpoint_best.pt")
  audio, fs = torchaudio.load("sample.wav")
  audio = audio.transpose(0,1).squeeze(1)
  features = extract_features(model, audio)
  </code></pre>
  -->
# Evaluation

We compare four W2V2-base models that differ in their pretraining or fine-tuning data:
- **base (oracle version):** the originally released model, pretrained on ~960 hours of unlabeled Librispeech audio
- **Libri960h:** the oracle version fine-tuned on 960 hours of Librispeech
- **LB1100h:** W2V2 pretrained on 1,100 hours of LB home recordings
- **LL4300h:** W2V2 pretrained on 4,300 hours of LB+LENA home recordings

We then fine-tune each pretrained model on 11.7 hours of labeled LB home recordings; the F1 scores across the three tasks are shown below.

![results](results.png)

Additionally, we improve model performance by adding relevant labeled home recordings and applying data augmentation (SpecAug and noise/reverberation corruption).
For more details on the experiments and results, please refer to our paper.
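
As a rough illustration only (not the exact recipe from the paper or the SpeechBrain repo linked above), the sketch below shows one common way to turn the multi-layer W2V2 features into utterance-level predictions: learned layer weights, temporal mean pooling, and a linear classifier. The feature dimension of 768 is standard for W2V2-base; the number of classes is a placeholder.

<pre><code>
# Illustration only: a lightweight classification head over features shaped
# num_layers x B x T' x D, as returned by FairseqWav2Vec2 above.
import torch
import torch.nn as nn

class MultiLayerClassifier(nn.Module):
    def __init__(self, feat_dim=768, num_layers=12, num_classes=4):  # num_classes is a placeholder
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))   # learned layer weights
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, feats):                                # feats: layers x B x T' x D
        w = torch.softmax(self.layer_weights, dim=0)
        x = (w[:, None, None, None] * feats).sum(dim=0)      # weighted layer average: B x T' x D
        x = x.mean(dim=1)                                    # mean pooling over time: B x D
        return self.classifier(x)                            # B x num_classes

head = MultiLayerClassifier()
logits = head(torch.rand(12, 2, 299, 768))   # random features with the expected shape
print(logits.shape)                          # torch.Size([2, 4])
</code></pre>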

# Paper/BibTeX Citation

If you find this model helpful, please cite our work as:

<pre><code>
@inproceedings{li23e_interspeech,
  author={Jialu Li and Mark Hasegawa-Johnson and Nancy L. McElwain},
  title={{Towards Robust Family-Infant Audio Analysis Based on Unsupervised Pretraining of Wav2vec 2.0 on Large-Scale Unlabeled Family Audio}},
  year=2023,
  booktitle={Proc. INTERSPEECH 2023},
  pages={1035--1039},
  doi={10.21437/Interspeech.2023-460}
}
</code></pre>

# Model Card Contact
Jialu Li (she, her, hers)

Ph.D. candidate, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign

E-mail: [email protected]

Homepage: https://sites.google.com/view/jialuli/

Our team: https://littlebeats.hdfs.illinois.edu/team/