RealisDance-DiT: Simple yet Strong Baseline towards Controllable Character Animation in the Wild
Abstract
Controllable character animation remains a challenging problem, particularly in handling rare poses, stylized characters, character-object interactions, complex illumination, and dynamic scenes. To tackle these issues, prior work has largely focused on injecting pose and appearance guidance via elaborate bypass networks, but such approaches often struggle to generalize to open-world scenarios. In this paper, we propose a new perspective: as long as the foundation model is powerful enough, straightforward model modifications combined with flexible fine-tuning strategies can largely address the above challenges, taking a step towards controllable character animation in the wild. Specifically, we introduce RealisDance-DiT, built upon the Wan-2.1 video foundation model. Our analysis reveals that the widely adopted Reference Net design is suboptimal for large-scale DiT models. Instead, we demonstrate that minimal modifications to the foundation model architecture yield a surprisingly strong baseline. We further propose the low-noise warmup and "large batches and small iterations" strategies to accelerate model convergence during fine-tuning while maximally preserving the priors of the foundation model. In addition, we introduce a new test dataset that captures diverse real-world challenges, complementing existing benchmarks such as the TikTok dataset and the UBC fashion video dataset, to comprehensively evaluate the proposed method. Extensive experiments show that RealisDance-DiT outperforms existing methods by a large margin.
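The abstract names the low-noise warmup strategy without implementation details. Below is a minimal sketch of one plausible reading, assuming "low-noise warmup" means restricting sampled diffusion timesteps to the low-noise end of the schedule early in fine-tuning and gradually expanding to the full range; the function name `sample_timesteps`, the `warmup_frac` parameter, and the convention that timestep 0 is the least noisy are all assumptions, not the authors' released code.

```python
import torch

def sample_timesteps(step, total_steps, batch_size,
                     num_train_timesteps=1000, warmup_frac=0.1):
    """Sample diffusion timesteps with a (hypothetical) low-noise warmup.

    Assumption: smaller timestep indices correspond to less noise. During
    the first `warmup_frac` of fine-tuning, timesteps are drawn only from
    a low-noise band whose upper bound grows linearly until it covers the
    full [0, num_train_timesteps) range.
    """
    warmup_steps = int(total_steps * warmup_frac)
    if warmup_steps > 0 and step < warmup_steps:
        # Linearly expand the admissible band from near-zero noise
        # to the full schedule over the warmup phase.
        frac = (step + 1) / warmup_steps
        upper = max(1, int(num_train_timesteps * frac))
    else:
        upper = num_train_timesteps
    return torch.randint(0, upper, (batch_size,))
```

In a fine-tuning loop, a call such as `sample_timesteps(step, total_steps, latents.shape[0])` would replace uniform timestep sampling, so early updates stay close to the foundation model's low-noise behavior.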
Community
The code and checkpoints of RealisDance-DiT will be released later, as we have not yet received company approval.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DynamiCtrl: Rethinking the Basic Structure and the Role of Text for High-quality Human Image Animation (2025)
- UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer (2025)
- Beyond Static Scenes: Camera-controllable Background Generation for Human Motion (2025)
- Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation (2025)
- DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance (2025)
- ObjectMover: Generative Object Movement with Video Prior (2025)
- SkyReels-A2: Compose Anything in Video Diffusion Transformers (2025)