This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/facebookresearch/audiobox-aesthetics
- Docs: [More Information Needed]

--- README below copied from https://github.com/facebookresearch/audiobox-aesthetics

# audiobox-aesthetics

[![PyPI - Version](https://img.shields.io/pypi/v/audiobox-aesthetics)](https://pypi.org/project/audiobox-aesthetics/) [![Hugging Face Model](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue)](https://huggingface.co/facebook/audiobox-aesthetics)

Unified automatic quality assessment for speech, music, and sound.

* Paper: [arXiv](https://arxiv.org/abs/2502.05139) / [MetaAI](https://ai.meta.com/research/publications/meta-audiobox-aesthetics-unified-automatic-quality-assessment-for-speech-music-and-sound/)
* Blogpost: [ai.meta.com](https://ai.meta.com/blog/machine-intelligence-research-new-models/)

<img src="assets/aes_model.png" alt="Model" height="400px">

## Installation

1. Install via pip
```
pip install audiobox_aesthetics
```

2. Install directly from source

This repository requires Python 3.9 and PyTorch 2.2 or greater. To install, clone this repo and run:
```
pip install -e .
```

## Pre-trained Models

| Model | S3 | HuggingFace |
|---|---|---|
| All axes | [checkpoint.pt](https://dl.fbaipublicfiles.com/audiobox-aesthetics/checkpoint.pt) | [HF Repo](https://huggingface.co/facebook/audiobox-aesthetics) |
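
If you prefer to fetch the checkpoint up front (the CLI below can also download it automatically), here is a minimal sketch using only the Python standard library and the S3 URL from the table above:

```
import urllib.request

# Checkpoint URL from the Pre-trained Models table; saved to the working directory.
CKPT_URL = "https://dl.fbaipublicfiles.com/audiobox-aesthetics/checkpoint.pt"
urllib.request.urlretrieve(CKPT_URL, "checkpoint.pt")
```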

## Usage

### How to run prediction using CLI

1. Create a JSONL file with the following format
```
{"path":"/path/to/a.wav"}
{"path":"/path/to/b.flac"}
...
{"path":"/path/to/z.wav"}
```
or, if you only want to predict aesthetic scores for a certain time window:
```
{"path":"/path/to/a.wav", "start_time":0, "end_time": 5}
{"path":"/path/to/b.flac", "start_time":3, "end_time": 10}
```
and save it as `input.jsonl`.
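
For example, a minimal sketch that builds `input.jsonl` from a directory of audio files (the `audio/` directory name is a placeholder, not part of the CLI):

```
import json
from pathlib import Path

# Collect .wav/.flac files under a hypothetical ./audio directory and
# write one {"path": ...} record per line, as expected by the CLI.
with open("input.jsonl", "w") as f:
    for p in sorted(Path("audio").rglob("*")):
        if p.suffix.lower() in {".wav", ".flac"}:
            f.write(json.dumps({"path": str(p)}) + "\n")
```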

2. Run the following command
```
audio-aes input.jsonl --batch-size 100 > output.jsonl
```
If you haven't downloaded the checkpoint, the script will try to download it automatically. Otherwise, you can provide the path with `--ckpt /path/to/checkpoint.pt`.

If you have SLURM, run the following command
```
audio-aes input.jsonl --batch-size 100 --remote --array 5 --job-dir $HOME/slurm_logs/ --chunk 1000 > output.jsonl
```
Please adjust CPU and GPU settings with `--slurm-gpu` and `--slurm-cpu` depending on your nodes.

3. The output file will contain the same number of rows as `input.jsonl`. Each row is a JSON-formatted dictionary with predictions for the 4 axes. Check the following table for more info:

| Axes name | Full name |
|---|---|
| CE | Content Enjoyment |
| CU | Content Usefulness |
| PC | Production Complexity |
| PQ | Production Quality |

Output line example:
```
{"CE": 5.146, "CU": 5.779, "PC": 2.148, "PQ": 7.220}
```

4. (Extra) If you want to extract only one axis (e.g. CE), post-process the output file with the following command using the `jq` utility:
```
jq '.CE' output.jsonl > output-aes_ce.txt
```
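
If `jq` is unavailable, a small Python equivalent that reads the same `output.jsonl` and also reports a mean per axis:

```
import json

# Read the CLI output and compute the mean score per axis.
with open("output.jsonl") as f:
    rows = [json.loads(line) for line in f]

for axis in ("CE", "CU", "PC", "PQ"):
    scores = [r[axis] for r in rows]
    print(f"{axis}: mean={sum(scores) / len(scores):.3f}")
```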

### How to run prediction from Python script or interpreter

1. Infer from file path
```
from audiobox_aesthetics.infer import initialize_predictor
predictor = initialize_predictor()
predictor.forward([{"path":"/path/to/a.wav"}, {"path":"/path/to/b.flac"}])
```

2. Infer from torch tensor
```
import torchaudio

from audiobox_aesthetics.infer import initialize_predictor
predictor = initialize_predictor()
wav, sr = torchaudio.load("/path/to/a.wav")
predictor.forward([{"path":wav, "sample_rate": sr}])
```
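
The tensor example above scores the whole waveform. If you want the CLI's `start_time`/`end_time` behavior from Python, one option is to slice the tensor yourself before calling `forward`; this is a sketch, not a documented API, and reuses only the call signature shown above:

```
import torchaudio

from audiobox_aesthetics.infer import initialize_predictor

predictor = initialize_predictor()
wav, sr = torchaudio.load("/path/to/a.wav")

# Emulate start_time=0, end_time=5 (seconds) by slicing the sample axis.
start_time, end_time = 0, 5
segment = wav[:, int(start_time * sr):int(end_time * sr)]
predictor.forward([{"path": segment, "sample_rate": sr}])
```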

## Evaluation dataset
We released our evaluation dataset consisting of 4 axes of aesthetic annotation scores.

Here, we show an example of how to read and re-map each annotation to the actual audio file.
```
{
  "data_path": "/your_path/LibriTTS/train-clean-100/1363/139304/1363_139304_000011_000000.wav",
  "Production_Quality": [8.0, 8.0, 8.0, 8.0, 8.0, 9.0, 8.0, 5.0, 8.0, 8.0],
  "Production_Complexity": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
  "Content_Enjoyment": [8.0, 6.0, 8.0, 5.0, 8.0, 8.0, 8.0, 6.0, 8.0, 6.0],
  "Content_Usefulness": [8.0, 6.0, 8.0, 7.0, 8.0, 9.0, 8.0, 6.0, 10.0, 7.0]
}
```
1. Recognize the dataset name from `data_path`. In the example, it is LibriTTS.
2. Replace "/your_path/" with the path to your downloaded LibriTTS directory.
3. Each axis contains 10 scores annotated by 10 different human annotators.
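
A minimal sketch of steps 1-3 (the annotation filename and local root below are placeholders, not names from the release):

```
import json

ANNOTATIONS = "annotations.jsonl"  # hypothetical name; substitute the released file
LOCAL_ROOT = "/data/"              # where you downloaded the source datasets

with open(ANNOTATIONS) as f:
    for line in f:
        row = json.loads(line)
        # Re-map the placeholder prefix to your local dataset root.
        path = row["data_path"].replace("/your_path/", LOCAL_ROOT, 1)
        # Average the 10 annotator scores for one axis as an example.
        pq = sum(row["Production_Quality"]) / len(row["Production_Quality"])
        print(path, round(pq, 2))
```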

| data_path | URL |
|---|---|
| LibriTTS | https://openslr.org/60/ |
| cv-corpus-13.0-2023-03-09 | https://commonvoice.mozilla.org/en/datasets |
| EARS | https://sp-uhh.github.io/ears_dataset/ |
| MUSDB18 | https://sigsep.github.io/datasets/musdb.html |
| musiccaps | https://www.kaggle.com/datasets/googleai/musiccaps |
| (audioset) unbalanced_train_segments | https://research.google.com/audioset/dataset/index.html |
| PAM | https://zenodo.org/records/10737388 |

## License
The majority of audiobox-aesthetics is licensed under CC-BY 4.0, as found in the LICENSE file.
However, portions of the project are available under separate license terms: [https://github.com/microsoft/unilm](https://github.com/microsoft/unilm) is licensed under the MIT license.

## Citation
If you found this repository useful, please cite the following BibTeX entry.

```
@article{tjandra2025aes,
  title={Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound},
  author={Andros Tjandra and Yi-Chiao Wu and Baishan Guo and John Hoffman and Brian Ellis and Apoorv Vyas and Bowen Shi and Sanyuan Chen and Matt Le and Nick Zacharov and Carleigh Wood and Ann Lee and Wei-Ning Hsu},
  year={2025},
  url={https://arxiv.org/abs/2502.05139}
}
```

## Acknowledgements
Part of the model code is copied from [WavLM](https://github.com/microsoft/unilm/tree/master/wavlm).