Scaling Properties of Diffusion Models for Perceptual Tasks

CVPR 2025

Rahul Ravishankar*, Zeeshan Patel*, Jathushan Rajasegaran, Jitendra Malik

[Paper] · [Project Page]

In this paper, we argue that iterative computation with diffusion models offers a powerful paradigm not only for generation but also for visual perception tasks. We unify tasks such as depth estimation, optical flow, and amodal segmentation under the framework of image-to-image translation, and show how diffusion models benefit from scaling training and test-time compute on these perceptual tasks. Through a careful analysis of these scaling properties, we formulate compute-optimal training and inference recipes to scale diffusion models for visual perception. Our models achieve performance competitive with state-of-the-art methods while using significantly less data and compute.
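To make the test-time compute axis concrete, here is a minimal conceptual sketch (not the paper's code): in a diffusion sampler, every denoising step is one additional forward pass through the model, so the step count directly controls inference compute. The `denoiser` stub, the variance-exploding interpolation, and all shapes below are illustrative assumptions.

```python
import torch

# Toy epsilon-prediction "denoiser": stands in for a trained DiT.
# In the perception setting, the model is conditioned on an input image
# (e.g. an RGB frame) and denoises the target map (e.g. a depth map).
def denoiser(x_t, cond, t):
    # Hypothetical stand-in; a real model would be a DiT forward pass.
    return torch.zeros_like(x_t)

@torch.no_grad()
def sample(cond, shape, num_steps):
    """Deterministic sampler under x_t = x0 + t * eps (variance-exploding
    style interpolation, chosen for illustration). Increasing `num_steps`
    is one axis of test-time compute scaling: each extra step is one more
    forward pass through the denoiser."""
    x = torch.randn(shape)
    timesteps = torch.linspace(1.0, 0.0, num_steps + 1)
    for i in range(num_steps):
        t, t_next = timesteps[i], timesteps[i + 1]
        eps = denoiser(x, cond, t)
        x0_pred = x - t * eps        # predicted clean target map
        x = x0_pred + t_next * eps   # re-noise to the next noise level
    return x

cond = torch.randn(1, 3, 64, 64)                            # conditioning image
depth_cheap = sample(cond, (1, 1, 64, 64), num_steps=8)     # less compute
depth_costly = sample(cond, (1, 1, 64, 64), num_steps=128)  # more compute
```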

Getting started

You can download our DiT-MoE Generalist model here. Please see the GitHub README for instructions on how to use the model.
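For example, the model files can be fetched with the `huggingface_hub` library. The `repo_id` below is a placeholder; substitute the repository linked above:

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id: replace with the actual model repository.
local_path = snapshot_download(
    repo_id="<user>/<dit-moe-generalist>",
    local_dir="./dit-moe-generalist",
)
print(f"Model files downloaded to: {local_path}")
```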
