---
license: mit
language:
- en
tags:
- text-to-speech
- speech generation
- voice-cloning
pipeline_tag: text-to-speech
library_name: chatterbox
---

# Chatterbox TTS

Made with ❤️ by [Resemble AI](https://resemble.ai)

We're excited to introduce Chatterbox, [Resemble AI's](https://resemble.ai) first production-grade open source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations. Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support **emotion exaggeration control**, a powerful feature that makes your voices stand out. Try it now on our [Hugging Face Gradio app](https://huggingface.co/spaces/ResembleAI/Chatterbox).

If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (link). It delivers reliable performance with ultra-low, sub-200 ms latency, ideal for production use in agents, applications, or interactive media.

# Key Details
- SoTA zero-shot TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script
- [Outperforms ElevenLabs](https://podonos.com/resembleai/chatterbox)

# Tips
- **General Use (TTS and Voice Agents):**
  - The default settings (`exaggeration=0.5`, `cfg=0.5`) work well for most prompts.
  - If the reference speaker has a fast speaking style, lowering `cfg` to around `0.3` can improve pacing.
- **Expressive or Dramatic Speech:**
  - Try lower `cfg` values (e.g. `~0.3`) and increase `exaggeration` to around `0.7` or higher (see the expressiveness sketch at the end of this card).
  - Higher `exaggeration` tends to speed up speech; reducing `cfg` helps compensate with slower, more deliberate pacing.

# Installation
```
pip install chatterbox-tts
```

# Usage
```python
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
ta.save("test-1.wav", wav, model.sr)

# If you want to synthesize with a different voice, specify the audio prompt
AUDIO_PROMPT_PATH = "YOUR_FILE.wav"
wav = model.generate(text, audio_prompt_path=AUDIO_PROMPT_PATH)
ta.save("test-2.wav", wav, model.sr)
```
See `example_tts.py` for more examples.

# Acknowledgements
- [CosyVoice](https://github.com/FunAudioLLM/CosyVoice)
- [HiFT-GAN](https://github.com/yl4579/HiFTNet)
- [Llama 3](https://github.com/meta-llama/llama3)

# Built-in PerTh Watermarking for Responsible AI
Every audio file generated by Chatterbox includes [Resemble AI's Perth (Perceptual Threshold) Watermarker](https://github.com/resemble-ai/perth): imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy. A sketch for checking the watermark appears at the end of this card.

# Disclaimer
Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.
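
# Expressiveness Example
To make the Tips concrete, here is a minimal sketch of passing the exaggeration and pacing controls to `generate()`. It assumes the CFG knob is exposed as a `cfg_weight` keyword argument (only `exaggeration` and `cfg` are named in the Tips); check the `ChatterboxTTS.generate` signature in your installed version.

```python
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

text = "You have no idea how long I've waited for this moment!"

# Expressive delivery per the Tips above: raise exaggeration, lower CFG.
# NOTE: `cfg_weight` is an assumed name for the `cfg` knob mentioned in the Tips;
# verify it against ChatterboxTTS.generate in your installed version.
wav = model.generate(
    text,
    audio_prompt_path="YOUR_FILE.wav",  # optional reference voice, as in Usage above
    exaggeration=0.7,
    cfg_weight=0.3,
)
ta.save("expressive.wav", wav, model.sr)
```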
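
# Watermark Extraction
The Perth watermark is embedded at generation time; the sketch below shows one way to check for it afterwards using the standalone `perth` package linked above (install it per that project's README). The `PerthImplicitWatermarker` class, its `get_watermark` method, and the 0.0/1.0 output convention are assumptions based on that project's documentation; verify them against the Perth README before relying on this.

```python
import librosa
import perth  # the watermarker package linked above; install per its README

# Load a Chatterbox output file (any generated wav, e.g. from the Usage section)
watermarked_audio, sr = librosa.load("test-1.wav", sr=None)

# Assumed API: initialize the same watermarker used at generation time
watermarker = perth.PerthImplicitWatermarker()

# Assumed API: extract the imperceptible watermark from the audio
watermark = watermarker.get_watermark(watermarked_audio, sample_rate=sr)
print(f"Extracted watermark: {watermark}")  # assumed convention: 0.0 = absent, 1.0 = present
```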