MartsoBodziu1994 committed on
Commit 47578a3 · verified · 1 Parent(s): 53fb177

Upload 6 files

Files changed (6)
  1. .gitignore +2 -0
  2. LICENSE +21 -0
  3. README.md +327 -14
  4. model-card.md +40 -0
  5. pyproject.toml +59 -0
  6. setup.py +3 -0
.gitignore ADDED
@@ -0,0 +1,2 @@
1
+ __pycache__/
2
+ suno_bark.egg-info/
LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) Suno, Inc
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
README.md CHANGED
@@ -1,14 +1,327 @@
1
- ---
2
- title: Suno Bark
3
- emoji: 🏃
4
- colorFrom: purple
5
- colorTo: pink
6
- sdk: gradio
7
- sdk_version: 5.0.2
8
- app_file: app.py
9
- pinned: false
10
- license: openrail
11
- short_description: 'це модель fine_2.pt '
12
- ---
13
-
14
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
+ > Notice: Bark is Suno's open-source text-to-speech+ model. If you are looking for our text-to-music models, please visit us on our [web page](https://suno.ai) and join our community on [Discord](https://suno.ai/discord).
2
+
3
+
4
+ # 🐶 Bark
5
+
6
+ [![](https://dcbadge.vercel.app/api/server/J2B2vsjKuE?style=flat&compact=True)](https://suno.ai/discord)
7
+ [![Twitter](https://img.shields.io/twitter/url/https/twitter.com/FM.svg?style=social&label=@suno_ai_)](https://twitter.com/suno_ai_)
8
+
9
+ > 🔗 [Examples](https://suno.ai/examples/bark-v0) • [Suno Studio Waitlist](https://suno-ai.typeform.com/suno-studio) • [Updates](#-updates) • [How to Use](#-usage-in-python) • [Installation](#-installation) • [FAQ](#-faq)
10
+
11
+ [//]: <br> (vertical spaces around image)
12
+ <br>
13
+ <p align="center">
14
+ <img src="https://user-images.githubusercontent.com/5068315/235310676-a4b3b511-90ec-4edf-8153-7ccf14905d73.png" width="500"></img>
15
+ </p>
16
+ <br>
17
+
18
+ Bark is a transformer-based text-to-audio model created by [Suno](https://suno.ai). Bark can generate highly realistic, multilingual speech as well as other audio - including music, background noise and simple sound effects. The model can also produce nonverbal communications like laughing, sighing and crying. To support the research community, we are providing access to pretrained model checkpoints, which are ready for inference and available for commercial use.
19
+
20
+ ## ⚠ Disclaimer
21
+ Bark was developed for research purposes. It is not a conventional text-to-speech model but instead a fully generative text-to-audio model, which can deviate in unexpected ways from provided prompts. Suno does not take responsibility for any output generated. Use at your own risk, and please act responsibly.
22
+
23
+ ## 📖 Quick Index
24
+ * [🚀 Updates](#-updates)
25
+ * [💻 Installation](#-installation)
26
+ * [🐍 Usage](#-usage-in-python)
27
+ * [🌀 Live Examples](https://suno.ai/examples/bark-v0)
28
+ * [❓ FAQ](#-faq)
29
+
30
+ ## 🎧 Demos
31
+
32
+ [![Open in Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue.svg)](https://huggingface.co/spaces/suno/bark)
33
+ [![Open on Replicate](https://img.shields.io/badge/®️-Open%20on%20Replicate-blue.svg)](https://replicate.com/suno-ai/bark)
34
+ [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing)
35
+
36
+ ## 🚀 Updates
37
+
38
+ **2023.05.01**
39
+ - ©️ Bark is now licensed under the MIT License, meaning it's now available for commercial use!
40
+ - ⚡ 2x speed-up on GPU. 10x speed-up on CPU. We also added an option for a smaller version of Bark, which offers additional speed-up with the trade-off of slightly lower quality.
41
+ - 📕 [Long-form generation](notebooks/long_form_generation.ipynb), voice consistency enhancements and other examples are now documented in a new [notebooks](./notebooks) section.
42
+ - 👥 We created a [voice prompt library](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c). We hope this resource helps you find useful prompts for your use cases! You can also join us on [Discord](https://suno.ai/discord), where the community actively shares useful prompts in the **#audio-prompts** channel.
43
+ - 💬 Growing community support and access to new features here:
44
+
45
+ [![](https://dcbadge.vercel.app/api/server/J2B2vsjKuE)](https://suno.ai/discord)
46
+
47
+ - 💾 You can now use Bark with GPUs that have low VRAM (<4GB).
48
+
49
+ **2023.04.20**
50
+ - 🐶 Bark release!
51
+
52
+ ## 🐍 Usage in Python
53
+
54
+ <details open>
55
+ <summary><h3>🪑 Basics</h3></summary>
56
+
57
+ ```python
58
+ from bark import SAMPLE_RATE, generate_audio, preload_models
59
+ from scipy.io.wavfile import write as write_wav
60
+ from IPython.display import Audio
61
+
62
+ # download and load all models
63
+ preload_models()
64
+
65
+ # generate audio from text
66
+ text_prompt = """
67
+ Hello, my name is Suno. And, uh — and I like pizza. [laughs]
68
+ But I also have other interests such as playing tic tac toe.
69
+ """
70
+ audio_array = generate_audio(text_prompt)
71
+
72
+ # save audio to disk
73
+ write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)
74
+
75
+ # play text in notebook
76
+ Audio(audio_array, rate=SAMPLE_RATE)
77
+ ```
78
+
79
+ [pizza1.webm](https://user-images.githubusercontent.com/34592747/cfa98e54-721c-4b9c-b962-688e09db684f.webm)
80
+
81
+ </details>
82
+
83
+ <details open>
84
+ <summary><h3>🌎 Foreign Language</h3></summary>
85
+ <br>
86
+ Bark supports various languages out-of-the-box and automatically determines language from input text. When prompted with code-switched text, Bark will attempt to employ the native accent for the respective languages. English quality is best for the time being, and we expect other languages to further improve with scaling.
87
+ <br>
88
+ <br>
89
+
90
+ ```python
91
+
92
+ text_prompt = """
93
+ 추석은 내가 가장 좋아하는 명절이다. 나는 며칠 동안 휴식을 취하고 친구 및 가족과 시간을 보낼 수 있습니다.
94
+ """
95
+ audio_array = generate_audio(text_prompt)
96
+ ```
97
+ [suno_korean.webm](https://user-images.githubusercontent.com/32879321/235313033-dc4477b9-2da0-4b94-9c8b-a8c2d8f5bb5e.webm)
98
+
99
+ *Note: since Bark recognizes languages automatically from input text, it is possible to use, for example, a German history prompt with English text. This usually leads to English audio with a German accent.*
100
+ ```python
101
+ text_prompt = """
102
+ Der Dreißigjährige Krieg (1618-1648) war ein verheerender Konflikt, der Europa stark geprägt hat.
103
+ This is a beginning of the history. If you want to hear more, please continue.
104
+ """
105
+ audio_array = generate_audio(text_prompt)
106
+ ```
107
+ [suno_german_accent.webm](https://user-images.githubusercontent.com/34592747/3f96ab3e-02ec-49cb-97a6-cf5af0b3524a.webm)
108
+
109
+
110
+
111
+
112
+ </details>
113
+
114
+ <details open>
115
+ <summary><h3>🎶 Music</h3></summary>
116
+ Bark can generate all types of audio, and, in principle, doesn't see a difference between speech and music. Sometimes Bark chooses to generate text as music, but you can help it out by adding music notes around your lyrics.
117
+ <br>
118
+ <br>
119
+
120
+ ```python
121
+ text_prompt = """
122
+ ♪ In the jungle, the mighty jungle, the lion barks tonight ♪
123
+ """
124
+ audio_array = generate_audio(text_prompt)
125
+ ```
126
+ [lion.webm](https://user-images.githubusercontent.com/5068315/230684766-97f5ea23-ad99-473c-924b-66b6fab24289.webm)
127
+ </details>
128
+
129
+ <details open>
130
+ <summary><h3>🎤 Voice Presets</h3></summary>
131
+
132
+ Bark supports 100+ speaker presets across [supported languages](#supported-languages). You can browse the library of supported voice presets [HERE](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c), or in the [code](bark/assets/prompts). The community also often shares presets in [Discord](https://discord.gg/J2B2vsjKuE).
133
+
134
+ > Bark tries to match the tone, pitch, emotion and prosody of a given preset, but does not currently support custom voice cloning. The model also attempts to preserve music, ambient noise, etc.
135
+
136
+ ```python
137
+ text_prompt = """
138
+ I have a silky smooth voice, and today I will tell you about
139
+ the exercise regimen of the common sloth.
140
+ """
141
+ audio_array = generate_audio(text_prompt, history_prompt="v2/en_speaker_1")
142
+ ```
143
+
144
+ [sloth.webm](https://user-images.githubusercontent.com/5068315/230684883-a344c619-a560-4ff5-8b99-b4463a34487b.webm)
145
+ </details>
146
+
147
+ ### 📃 Generating Longer Audio
148
+
149
+ By default, `generate_audio` works well with around 13 seconds of spoken text. For an example of how to do long-form generation, see 👉 **[Notebook](notebooks/long_form_generation.ipynb)** 👈
150
+
151
+ <details>
152
+ <summary>Click to toggle example long-form generations (from the example notebook)</summary>
153
+
154
+ [dialog.webm](https://user-images.githubusercontent.com/2565833/235463539-f57608da-e4cb-4062-8771-148e29512b01.webm)
155
+
156
+ [longform_advanced.webm](https://user-images.githubusercontent.com/2565833/235463547-1c0d8744-269b-43fe-9630-897ea5731652.webm)
157
+
158
+ [longform_basic.webm](https://user-images.githubusercontent.com/2565833/235463559-87efe9f8-a2db-4d59-b764-57db83f95270.webm)
159
+
160
+ </details>
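+
+ As a rough illustration, here is a minimal long-form sketch (not the exact method from the notebook): split the script into sentences, generate each one with the same `history_prompt` so the voice stays consistent, and concatenate the pieces. The quarter-second pause and the `v2/en_speaker_1` preset are arbitrary choices for this example.
+
+ ```python
+ import numpy as np
+ from bark import SAMPLE_RATE, generate_audio, preload_models
+ from scipy.io.wavfile import write as write_wav
+
+ preload_models()
+
+ sentences = [
+     "Bark can speak for much longer than thirteen seconds if you split the text into sentences.",
+     "Each sentence is generated separately, reusing the same voice preset.",
+     "The pieces are then stitched together with a short pause in between.",
+ ]
+
+ pieces = []
+ silence = np.zeros(int(0.25 * SAMPLE_RATE), dtype=np.float32)  # quarter-second pause between sentences
+ for sentence in sentences:
+     audio_array = generate_audio(sentence, history_prompt="v2/en_speaker_1")
+     pieces += [audio_array, silence.copy()]
+
+ write_wav("bark_long_form.wav", SAMPLE_RATE, np.concatenate(pieces))
+ ```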
161
+
162
+
163
+ ## Command line
164
+ ```commandline
165
+ python -m bark --text "Hello, my name is Suno." --output_filename "example.wav"
166
+ ```
167
+
168
+ ## 💻 Installation
169
+ *‼️ CAUTION ‼️ Do NOT use `pip install bark`. It installs a different package, which is not managed by Suno.*
170
+ ```bash
171
+ pip install git+https://github.com/suno-ai/bark.git
172
+ ```
173
+
174
+ or
175
+
176
+ ```bash
177
+ git clone https://github.com/suno-ai/bark
178
+ cd bark && pip install .
179
+ ```
180
+
181
+
182
+ ## 🤗 Transformers Usage
183
+
184
+ Bark is available in the 🤗 Transformers library from version 4.31.0 onwards, requiring minimal dependencies
185
+ and only a few additional packages. Steps to get started:
186
+
187
+ 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:
188
+
189
+ ```
190
+ pip install git+https://github.com/huggingface/transformers.git
191
+ ```
192
+
193
+ 2. Run the following Python code to generate speech samples:
194
+
195
+ ```py
196
+ from transformers import AutoProcessor, BarkModel
197
+
198
+ processor = AutoProcessor.from_pretrained("suno/bark")
199
+ model = BarkModel.from_pretrained("suno/bark")
200
+
201
+ voice_preset = "v2/en_speaker_6"
202
+
203
+ inputs = processor("Hello, my dog is cute", voice_preset=voice_preset)
204
+
205
+ audio_array = model.generate(**inputs)
206
+ audio_array = audio_array.cpu().numpy().squeeze()
207
+ ```
208
+
209
+ 3. Listen to the audio samples either in an ipynb notebook:
210
+
211
+ ```py
212
+ from IPython.display import Audio
213
+
214
+ sample_rate = model.generation_config.sample_rate
215
+ Audio(audio_array, rate=sample_rate)
216
+ ```
217
+
218
+ Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
219
+
220
+ ```py
221
+ import scipy
222
+
223
+ sample_rate = model.generation_config.sample_rate
224
+ scipy.io.wavfile.write("bark_out.wav", rate=sample_rate, data=audio_array)
225
+ ```
226
+
227
+ For more details on using the Bark model for inference using the 🤗 Transformers library, refer to the
228
+ [Bark docs](https://huggingface.co/docs/transformers/main/en/model_doc/bark) or the hands-on
229
+ [Google Colab](https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing).
230
+
231
+
232
+ ## 🛠️ Hardware and Inference Speed
233
+
234
+ Bark has been tested and works on both CPU and GPU (`pytorch 2.0+`, CUDA 11.7 and CUDA 12.0).
235
+
236
+ On enterprise GPUs and PyTorch nightly, Bark can generate audio in roughly real-time. On older GPUs, the default Colab runtime, or CPU, inference time might be significantly slower. For older GPUs or CPU you might want to consider using smaller models. Details can be found in our tutorial sections here.
237
+
238
+ The full version of Bark requires around 12GB of VRAM to hold everything on GPU at the same time.
239
+ To use a smaller version of the models, which should fit into 8GB VRAM, set the environment flag `SUNO_USE_SMALL_MODELS=True`.
240
+
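+
+ For example (a minimal sketch; to be safe, set the flag before importing `bark` so it is picked up when the models are loaded):
+
+ ```python
+ import os
+ os.environ["SUNO_USE_SMALL_MODELS"] = "True"  # set before importing bark
+
+ from bark import generate_audio, preload_models
+
+ preload_models()  # loads the smaller checkpoints
+ audio_array = generate_audio("Hello from the smaller Bark models.")
+ ```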
241
+ If you don't have hardware available or if you want to play with bigger versions of our models, you can also sign up for early access to our model playground [here](https://suno-ai.typeform.com/suno-studio).
242
+
243
+ ## ⚙️ Details
244
+
245
+ Bark is a fully generative text-to-audio model developed for research and demo purposes. It follows a GPT-style architecture similar to [AudioLM](https://arxiv.org/abs/2209.03143) and [Vall-E](https://arxiv.org/abs/2301.02111) and uses a quantized audio representation from [EnCodec](https://github.com/facebookresearch/encodec). It is not a conventional TTS model, but instead a fully generative text-to-audio model capable of deviating in unexpected ways from any given script. Unlike previous approaches, the input text prompt is converted directly to audio without the intermediate use of phonemes. It can therefore generalize to arbitrary instructions beyond speech, such as music lyrics, sound effects or other non-speech sounds.
246
+
247
+ Below is a list of some known non-speech sounds, but we are finding more every day. Please let us know if you find patterns that work particularly well on [Discord](https://suno.ai/discord)!
248
+
249
+ - `[laughter]`
250
+ - `[laughs]`
251
+ - `[sighs]`
252
+ - `[music]`
253
+ - `[gasps]`
254
+ - `[clears throat]`
255
+ - `—` or `...` for hesitations
256
+ - `♪` for song lyrics
257
+ - CAPITALIZATION for emphasis of a word
258
+ - `[MAN]` and `[WOMAN]` to bias Bark toward male and female speakers, respectively
259
+
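+
+ These cues can be mixed freely within a prompt. A small illustration (the wording is just an example):
+
+ ```python
+ text_prompt = """
+ [clears throat] WOW, I did NOT expect that... [laughs]
+ ♪ so now I might as well sing a little tune ♪ [sighs]
+ """
+ audio_array = generate_audio(text_prompt)
+ ```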
260
+ ### Supported Languages
261
+
262
+ | Language | Status |
263
+ | --- | :---: |
264
+ | English (en) | ✅ |
265
+ | German (de) | ✅ |
266
+ | Spanish (es) | ✅ |
267
+ | French (fr) | ✅ |
268
+ | Hindi (hi) | ✅ |
269
+ | Italian (it) | ✅ |
270
+ | Japanese (ja) | ✅ |
271
+ | Korean (ko) | ✅ |
272
+ | Polish (pl) | ✅ |
273
+ | Portuguese (pt) | ✅ |
274
+ | Russian (ru) | ✅ |
275
+ | Turkish (tr) | ✅ |
276
+ | Chinese, simplified (zh) | ✅ |
277
+
278
+ Requests for future language support [here](https://github.com/suno-ai/bark/discussions/111) or in the **#forums** channel on [Discord](https://suno.ai/discord).
279
+
280
+ ## 🙏 Appreciation
281
+
282
+ - [nanoGPT](https://github.com/karpathy/nanoGPT) for a dead-simple and blazing fast implementation of GPT-style models
283
+ - [EnCodec](https://github.com/facebookresearch/encodec) for a state-of-the-art implementation of a fantastic audio codec
284
+ - [AudioLM](https://github.com/lucidrains/audiolm-pytorch) for related training and inference code
285
+ - [Vall-E](https://arxiv.org/abs/2301.02111), [AudioLM](https://arxiv.org/abs/2209.03143) and many other ground-breaking papers that enabled the development of Bark
286
+
287
+ ## © License
288
+
289
+ Bark is licensed under the MIT License.
290
+
291
+ ## 📱 Community
292
+
293
+ - [Twitter](https://twitter.com/suno_ai_)
294
+ - [Discord](https://suno.ai/discord)
295
+
296
+ ## 🎧 Suno Studio (Early Access)
297
+
298
+ We’re developing a playground for our models, including Bark.
299
+
300
+ If you are interested, you can sign up for early access [here](https://suno-ai.typeform.com/suno-studio).
301
+
302
+ ## ❓ FAQ
303
+
304
+ #### How do I specify where models are downloaded and cached?
305
+ * Bark uses Hugging Face to download and store models. You can find more info [here](https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables#hfhome).
306
+
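+
+ For example (a sketch assuming the checkpoints are fetched through `huggingface_hub`, so `HF_HOME` controls the cache location; the path below is a placeholder):
+
+ ```python
+ import os
+ os.environ["HF_HOME"] = "/path/to/your/model/cache"  # placeholder path; set before importing bark
+
+ from bark import preload_models
+ preload_models()  # checkpoints are downloaded and cached under the directory above
+ ```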
307
+
308
+ #### Bark's generations sometimes differ from my prompts. What's happening?
309
+ * Bark is a GPT-style model. As such, it may take some creative liberties in its generations, resulting in higher-variance model outputs than traditional text-to-speech approaches.
310
+
311
+ #### What voices are supported by Bark?
312
+ * Bark supports 100+ speaker presets across [supported languages](#supported-languages). You can browse the library of speaker presets [here](https://suno-ai.notion.site/8b8e8749ed514b0cbf3f699013548683?v=bc67cff786b04b50b3ceb756fd05f68c). The community also shares presets in [Discord](https://suno.ai/discord). Bark also supports generating unique random voices that fit the input text. Bark does not currently support custom voice cloning.
313
+
314
+ #### Why is the output limited to ~13-14 seconds?
315
+ * Bark is a GPT-style model, and its architecture/context window is optimized to output generations of roughly this length.
316
+
317
+ #### How much VRAM do I need?
318
+ * The full version of Bark requires around 12 GB of memory to hold everything on GPU at the same time. However, even smaller cards down to ~2 GB work with some additional settings. Simply add the following code snippet before your generation:
319
+
320
+ ```python
321
+ import os
322
+ os.environ["SUNO_OFFLOAD_CPU"] = "True"
323
+ os.environ["SUNO_USE_SMALL_MODELS"] = "True"
324
+ ```
325
+
326
+ #### My generated audio sounds like a 1980s phone call. What's happening?
327
+ * Bark generates audio from scratch. It is not meant to create only high-fidelity, studio-quality speech. Rather, outputs could be anything from perfect speech to multiple people arguing at a baseball game recorded with bad microphones.
model-card.md ADDED
@@ -0,0 +1,40 @@
1
+ # Model Card: Bark
2
+
3
+ This is the official codebase for running Bark, the text-to-audio model from Suno.ai.
4
+
5
+ The following is additional information about the models released here.
6
+
7
+ ## Model Details
8
+
9
+ Bark is a series of three transformer models that turn text into audio.
10
+ ### Text to semantic tokens
11
+ - Input: text, tokenized with [BERT tokenizer from Hugging Face](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer)
12
+ - Output: semantic tokens that encode the audio to be generated
13
+
14
+ ### Semantic to coarse tokens
15
+ - Input: semantic tokens
16
+ - Output: tokens from the first two codebooks of the [EnCodec Codec](https://github.com/facebookresearch/encodec) from facebook
17
+
18
+ ### Coarse to fine tokens
19
+ - Input: the first two codebooks from EnCodec
20
+ - Output: 8 codebooks from EnCodec
21
+
22
+ ### Architecture
23
+ | Model | Parameters | Attention | Output Vocab size |
24
+ |:-------------------------:|:----------:|------------|:-----------------:|
25
+ | Text to semantic tokens | 80 M | Causal | 10,000 |
26
+ | Semantic to coarse tokens | 80 M | Causal | 2x 1,024 |
27
+ | Coarse to fine tokens | 80 M | Non-causal | 6x 1,024 |
28
+
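+
+ As a rough sketch of how the three stages chain together (the function names below come from the repository's `bark.generation` module; treat the exact names and signatures as an assumption, and note that `generate_audio` is the supported high-level entry point):
+
+ ```python
+ from bark.generation import (
+     generate_text_semantic,  # stage 1: text -> semantic tokens
+     generate_coarse,         # stage 2: semantic tokens -> first two EnCodec codebooks
+     generate_fine,           # stage 3: coarse tokens -> all eight EnCodec codebooks
+     codec_decode,            # EnCodec decoder: fine tokens -> waveform
+     preload_models,
+ )
+
+ preload_models()
+
+ semantic_tokens = generate_text_semantic("Hello from the three-stage Bark pipeline.")
+ coarse_tokens = generate_coarse(semantic_tokens)
+ fine_tokens = generate_fine(coarse_tokens)
+ audio_array = codec_decode(fine_tokens)
+ ```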
29
+
30
+ ### Release date
31
+ April 2023
32
+
33
+ ## Broader Implications
34
+ We anticipate that this model's text to audio capabilities can be used to improve accessbility tools in a variety of languages.
35
+ Straightforward improvements will allow models to run faster than realtime, rendering them useful for applications such as virtual assistants.
36
+
37
+ While we hope that this release will enable users to express their creativity and build applications that are a force
38
+ for good, we acknowledge that any text-to-audio model has the potential for dual use. While it is not straightforward
39
+ to voice clone known people with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark,
40
+ we also release a simple classifier to detect Bark-generated audio with high accuracy (see notebooks section of the main repository).
pyproject.toml ADDED
@@ -0,0 +1,59 @@
1
+ [build-system]
2
+ requires = ["setuptools"]
3
+ build-backend = "setuptools.build_meta"
4
+
5
+ [project]
6
+ name = "suno-bark"
7
+ version = "0.0.1a"
8
+ description = "Bark text to audio model"
9
+ readme = "README.md"
10
+ requires-python = ">=3.8"
11
+ authors = [
12
+ {name = "Suno Inc", email = "[email protected]"},
13
+ ]
14
+ # Apache 2.0
15
+ license = {file = "LICENSE"}
16
+
17
+ dependencies = [
18
+ "boto3",
19
+ "encodec",
20
+ "funcy",
21
+ "huggingface-hub>=0.14.1",
22
+ "numpy",
23
+ "scipy",
24
+ "tokenizers",
25
+ "torch",
26
+ "tqdm",
27
+ "transformers",
28
+ ]
29
+
30
+ [project.urls]
31
+ source = "https://github.com/suno-ai/bark"
32
+
33
+ [project.optional-dependencies]
34
+ dev = [
35
+ "bandit",
36
+ "black",
37
+ "codecov",
38
+ "flake8",
39
+ "hypothesis>=6.14,<7",
40
+ "isort>=5.0.0,<6",
41
+ "jupyter",
42
+ "mypy",
43
+ "nbconvert",
44
+ "nbformat",
45
+ "pydocstyle",
46
+ "pylint",
47
+ "pytest",
48
+ "pytest-cov",
49
+ ]
50
+
51
+ [tool.setuptools]
52
+ packages = ["bark"]
53
+
54
+ [tool.setuptools.package-data]
55
+ bark = ["assets/prompts/*.npz", "assets/prompts/v2/*.npz"]
56
+
57
+
58
+ [tool.black]
59
+ line-length = 100
setup.py ADDED
@@ -0,0 +1,3 @@
1
+ from setuptools import setup
2
+
3
+ setup()