Read in Chinese

FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis

Home Page | arXiv | HF Paper

🔥 Latest News!!

  • April 28, 2025: We released the inference code and the audio condition model weights.

Quickstart

🛠️Installation

Clone the repo:

git clone https://github.com/Fantasy-AMAP/fantasy-talking.git
cd fantasy-talking

Install dependencies:

# Ensure torch >= 2.0.0
pip install -r requirements.txt
# Optionally, install flash_attn to accelerate attention computation
pip install flash_attn
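
To confirm the environment before downloading the large models, a quick sanity check like the one below can help (this script is only a convenience, not part of the official setup):

# check_env.py -- optional sanity check for torch and flash_attn
import torch

major, minor = (int(x) for x in torch.__version__.split("+")[0].split(".")[:2])
assert (major, minor) >= (2, 0), f"torch >= 2.0.0 required, found {torch.__version__}"
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

try:
    import flash_attn  # optional accelerated attention
    print("flash_attn detected")
except ImportError:
    print("flash_attn not installed; attention falls back to the default implementation")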

🧱Model Download

Models | Download Link | Notes
Wan2.1-I2V-14B-720P | 🤗 Huggingface 🤖 ModelScope | Base model
Wav2Vec | 🤗 Huggingface 🤖 ModelScope | Audio encoder
FantasyTalking model | 🤗 Huggingface 🤖 ModelScope | Our audio condition weights

Download models using huggingface-cli:

pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-720P --local-dir ./models/Wan2.1-I2V-14B-720P
huggingface-cli download facebook/wav2vec2-base-960h --local-dir ./models/wav2vec2-base-960h
huggingface-cli download acvlab/FantasyTalking fantasytalking_model.ckpt --local-dir ./models

Download models using modelscope-cli:

pip install modelscope
modelscope download Wan-AI/Wan2.1-I2V-14B-720P --local_dir ./models/Wan2.1-I2V-14B-720P
modelscope download AI-ModelScope/wav2vec2-base-960h --local_dir ./models/wav2vec2-base-960h
modelscope download amap_cvlab/FantasyTalking fantasytalking_model.ckpt --local_dir ./models
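
If you prefer to download from Python rather than the CLI, the same files can be fetched with the huggingface_hub API (the paths mirror the commands above; adjust local_dir as you like):

# download_models.py -- programmatic alternative to the CLI commands above
from huggingface_hub import snapshot_download, hf_hub_download

# Base model and audio encoder: full repository snapshots
snapshot_download("Wan-AI/Wan2.1-I2V-14B-720P", local_dir="./models/Wan2.1-I2V-14B-720P")
snapshot_download("facebook/wav2vec2-base-960h", local_dir="./models/wav2vec2-base-960h")

# FantasyTalking audio condition weights: a single checkpoint file
hf_hub_download("acvlab/FantasyTalking", "fantasytalking_model.ckpt", local_dir="./models")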

🔑 Inference

python infer.py  --image_path ./assets/images/woman.png --audio_path ./assets/audios/woman.wav 

You can control the character's behavior through the prompt. The recommended range for the prompt and audio CFG scales is 3 to 7.

python infer.py  --image_path ./assets/images/woman.png --audio_path ./assets/audios/woman.wav --prompt "The person is speaking enthusiastically, with their hands continuously waving." --prompt_cfg_scale 5.0 --audio_cfg_scale 5.0
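
To compare several settings within the recommended range, a small driver script can loop over the same flags shown above (the paths and prompt are just the example values from this README; output handling is whatever infer.py does by default):

# cfg_sweep.py -- run inference at a few CFG scales in the recommended range
import subprocess

for cfg in (3.0, 5.0, 7.0):
    subprocess.run(
        [
            "python", "infer.py",
            "--image_path", "./assets/images/woman.png",
            "--audio_path", "./assets/audios/woman.wav",
            "--prompt", "The person is speaking enthusiastically, with their hands continuously waving.",
            "--prompt_cfg_scale", str(cfg),
            "--audio_cfg_scale", str(cfg),
        ],
        check=True,
    )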

The table below reports inference performance measured on a single A100 (512x512 resolution, 81 frames).

torch_dtype | num_persistent_param_in_dit | Speed | Required VRAM
torch.bfloat16 | None (unlimited) | 15.5 s/it | 40 GB
torch.bfloat16 | 7*10**9 (7B) | 32.8 s/it | 20 GB
torch.bfloat16 | 0 | 42.6 s/it | 5 GB
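
num_persistent_param_in_dit controls how many DiT parameters stay resident in GPU memory; the rest are offloaded and streamed in on demand, which is why lowering it trades speed for VRAM. The sketch below shows how this knob is typically set through DiffSynth-Studio's WanVideoPipeline. It is only an assumption about how infer.py wires things up (check infer.py or its --help for the exact option), and the file names assume the Wan2.1-I2V-14B-720P repository layout.

# vram_sketch.py -- illustrative sketch only, assuming DiffSynth-Studio's WanVideoPipeline API
from glob import glob
import torch
from diffsynth import ModelManager, WanVideoPipeline

model_manager = ModelManager(device="cpu")
model_manager.load_models([
    sorted(glob("./models/Wan2.1-I2V-14B-720P/diffusion_pytorch_model*.safetensors")),  # sharded DiT weights
    "./models/Wan2.1-I2V-14B-720P/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth",
    "./models/Wan2.1-I2V-14B-720P/models_t5_umt5-xxl-enc-bf16.pth",
    "./models/Wan2.1-I2V-14B-720P/Wan2.1_VAE.pth",
])
pipe = WanVideoPipeline.from_model_manager(model_manager, torch_dtype=torch.bfloat16, device="cuda")

# None keeps all DiT parameters on the GPU (fastest, ~40 GB VRAM);
# 7*10**9 keeps roughly 7B parameters resident (~20 GB); 0 offloads everything (~5 GB).
pipe.enable_vram_management(num_persistent_param_in_dit=7 * 10**9)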

Gradio Demo

We host an online demo on Hugging Face. For the local Gradio demo, run:

pip install gradio spaces
python app.py

🧩 Community Works

We ❤️ contributions from the open-source community! If your work has improved FantasyTalking, please let us know.

🔗Citation

If you find this repository useful, please consider giving it a star ⭐ and citing it:

@article{wang2025fantasytalking,
   title={FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis},
   author={Wang, Mengchao and Wang, Qiang and Jiang, Fan and Fan, Yaqi and Zhang, Yunpeng and Qi, Yonggang and Zhao, Kun and Xu, Mu},
   journal={arXiv preprint arXiv:2504.04842},
   year={2025}
 }

Acknowledgments

Thanks to Wan2.1, HunyuanVideo, and DiffSynth-Studio for open-sourcing their models and code, which provided valuable references and support for this project. Their contributions to the open-source community are truly appreciated.
