Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion
Abstract
Animatable head avatar generation typically requires extensive data for training. To reduce the data requirements, a natural solution is to leverage existing data-free static avatar generation methods, such as pre-trained diffusion models with score distillation sampling (SDS), which align avatars with pseudo ground-truth outputs from the diffusion model. However, directly distilling 4D avatars from video diffusion often leads to over-smoothed results due to spatial and temporal inconsistencies in the generated video. To address this issue, we propose Zero-1-to-A, a robust method that synthesizes a spatially and temporally consistent dataset for 4D avatar reconstruction using a video diffusion model. Specifically, Zero-1-to-A iteratively constructs video datasets and optimizes animatable avatars in a progressive manner, ensuring that avatar quality improves smoothly and consistently throughout the learning process. This progressive learning involves two stages: (1) Spatial Consistency Learning fixes expressions and learns from front-to-side views, and (2) Temporal Consistency Learning fixes views and learns from relaxed to exaggerated expressions, generating 4D avatars in a simple-to-complex manner. Extensive experiments demonstrate that Zero-1-to-A improves fidelity, animation quality, and rendering speed compared to existing diffusion-based methods, providing a solution for lifelike avatar creation. Code is publicly available at: https://github.com/ZhenglinZhou/Zero-1-to-A.
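To make the two-stage procedure concrete, below is a minimal sketch of the iterative dataset-construction and reconstruction loop described in the abstract. All helper names (`render`, `refine_with_video_diffusion`, `fit_avatar`) are hypothetical placeholders stubbed in so the file runs; they stand in for the paper's components and are not the released Zero-1-to-A API:

```python
"""Minimal sketch of Zero-1-to-A's progressive learning, as described in the
abstract. Every helper below is a hypothetical placeholder, not the paper's
released code."""

def render(avatar, view, expression):
    # Placeholder: render the current avatar at a given camera view
    # and facial expression.
    return {"view": view, "expression": expression, "pixels": None}

def refine_with_video_diffusion(frame):
    # Placeholder: use a pretrained video diffusion model to turn the
    # render into a pseudo-ground-truth frame.
    return frame

def fit_avatar(avatar, dataset):
    # Placeholder: optimize the animatable avatar to reconstruct the
    # synthesized dataset collected so far.
    return avatar

def progressive_learning(avatar, views, expressions):
    dataset = []

    # Stage 1: Spatial Consistency Learning.
    # Fix a relaxed expression and sweep camera views from front to side.
    neutral = expressions[0]
    for view in views:  # ordered front -> side
        pseudo_gt = refine_with_video_diffusion(render(avatar, view, neutral))
        dataset.append(pseudo_gt)
        avatar = fit_avatar(avatar, dataset)  # update after each new view

    # Stage 2: Temporal Consistency Learning.
    # Fix the views and sweep expressions from relaxed to exaggerated.
    for expr in expressions:  # ordered relaxed -> exaggerated
        for view in views:
            pseudo_gt = refine_with_video_diffusion(render(avatar, view, expr))
            dataset.append(pseudo_gt)
        avatar = fit_avatar(avatar, dataset)  # update after each expression

    return avatar
```

The interleaving is the key design choice: because the avatar is re-optimized after each newly refined batch, every subsequent pseudo-ground-truth frame is generated from a render of the current, partially converged avatar, which is what keeps the synthesized dataset consistent and lets quality grow in a simple-to-complex manner.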
Community
Zero-1-to-A is an image-to-4D avatar generation method. It synthesizes a spatially and temporally consistent dataset for 4D avatar reconstruction using a video diffusion model.
Project page: https://zhenglinzhou.github.io/Zero-1-to-A/
Librarian Bot: the following similar papers were recommended by the Semantic Scholar API.
- GaussianMotion: End-to-End Learning of Animatable Gaussian Avatars with Pose Guidance from Text (2025)
- Avat3r: Large Animatable Gaussian Reconstruction Model for High-fidelity 3D Head Avatars (2025)
- LAM: Large Avatar Model for One-shot Animatable Gaussian Head (2025)
- HRAvatar: High-Quality and Relightable Gaussian Head Avatar (2025)
- GaussianIP: Identity-Preserving Realistic 3D Human Generation via Human-Centric Diffusion Prior (2025)
- Snapmoji: Instant Generation of Animatable Dual-Stylized Avatars (2025)
- LHM: Large Animatable Human Reconstruction Model from a Single Image in Seconds (2025)