arxiv:2504.17816

Subject-driven Video Generation via Disentangled Identity and Motion

Published on Apr 23 · Submitted by carpedkm on Apr 28
Authors: …, Qi Dai, …
Abstract

We propose to train a subject-driven customized video generation model by decoupling subject-specific learning from temporal dynamics, enabling zero-shot generation without additional tuning. Traditional tuning-free approaches to video customization often rely on large annotated video datasets, which are computationally expensive to build and require extensive annotation. In contrast, we train the video customization model directly on an image customization dataset, factorizing video customization into two parts: (1) identity injection through the image customization dataset and (2) preservation of temporal modeling with a small set of unannotated videos via image-to-video training. Additionally, we employ random image token dropping with randomized image initialization during image-to-video fine-tuning to mitigate the copy-and-paste issue. To further enhance learning, we introduce stochastic switching during the joint optimization of subject-specific and temporal features, which mitigates catastrophic forgetting. Our method achieves strong subject consistency and scalability, outperforming existing video customization models in zero-shot settings and demonstrating the effectiveness of our framework.
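
The abstract describes two training-time mechanisms: random image token dropping during image-to-video fine-tuning, and stochastic switching between the identity-injection and temporal-preservation objectives. Below is a minimal Python/PyTorch sketch of how such a loop could be structured, based only on the abstract; the module interface (`model.identity_loss`, `model.i2v_loss`), batch keys, and hyperparameters are hypothetical placeholders, not the authors' actual implementation.

```python
import random
import torch


def drop_image_tokens(image_tokens: torch.Tensor, drop_prob: float = 0.3) -> torch.Tensor:
    """Randomly mask reference-image conditioning tokens to discourage
    copy-and-paste behavior.

    image_tokens: (batch, num_tokens, dim) tokens from the reference image.
    drop_prob is an assumed hyperparameter, not taken from the paper.
    """
    keep_mask = torch.rand(image_tokens.shape[:2], device=image_tokens.device) > drop_prob
    return image_tokens * keep_mask.unsqueeze(-1)


def train_step(model, optimizer, identity_batch, video_batch, p_identity: float = 0.5):
    """One optimization step with stochastic switching between the two objectives:
    (1) identity injection on image-customization data, and
    (2) temporal preservation via image-to-video training on unannotated videos.
    """
    optimizer.zero_grad()
    if random.random() < p_identity:
        # Identity objective: condition on (randomly dropped) reference-image tokens.
        tokens = drop_image_tokens(identity_batch["ref_tokens"])
        loss = model.identity_loss(identity_batch["target"], tokens)  # hypothetical API
    else:
        # Temporal objective: image-to-video training on a small unannotated video set.
        loss = model.i2v_loss(video_batch["frames"], video_batch["first_frame"])  # hypothetical API
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, alternating batches stochastically (rather than always optimizing both losses jointly) is what the abstract credits with mitigating catastrophic forgetting between the subject-specific and temporal features.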

Community

Paper author · Paper submitter

We present Subject-to-Video, a tuning-free framework that turns a single reference image into identity-faithful, motion-smooth videos, trained without any custom video dataset!
It disentangles identity and motion, and beats prior personalized T2V models in zero-shot scenarios.

Paper: https://arxiv.org/html/2504.17816v1
Code: https://github.com/carpedkm/disentangled-subject-to-vid
Project page: https://carpedkm.github.io/projects/disentangled_sub/

