LightLab: Controlling Light Sources in Images with Diffusion Models
Abstract
We present a simple, yet effective diffusion-based method for fine-grained, parametric control over light sources in an image. Existing relighting methods either rely on multiple input views to perform inverse rendering at inference time, or fail to provide explicit control over light changes. Our method fine-tunes a diffusion model on a small set of real raw photograph pairs, supplemented by synthetically rendered images at scale, to elicit its photorealistic prior for relighting. We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination. Using this data and an appropriate fine-tuning scheme, we train a model for precise illumination changes with explicit control over light intensity and color. Lastly, we show how our method can achieve compelling light editing results, and outperforms existing methods based on user preference.
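For intuition, the linearity argument can be written compactly (this is our own notation; the paper may parameterize it differently). Given linear-space captures of the same scene with the target light on and off, any intensity and color change of that light can be synthesized as a linear combination:

```latex
I(\alpha, \beta, c) = \beta \, I_{\text{off}} + \alpha \, c \odot \left( I_{\text{on}} - I_{\text{off}} \right)
```

where $I_{\text{off}}$ captures the ambient illumination alone, $I_{\text{on}} - I_{\text{off}}$ isolates the target light's contribution, $\alpha$ and $\beta$ scale the target and ambient intensities, and $c$ is an RGB tint applied channel-wise ($\odot$ is element-wise multiplication).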
Community
We recently published a research paper, "LightLab: Controlling Light Sources in Images with Diffusion Models" (accepted to SIGGRAPH 2025).
In the paper, we demonstrate how to achieve fine-grained control over visible (and ambient) light sources from a single image.
The premise of the paper is that physically accurate training data can be generated with classic computational photography: from image pairs that depict a change in a visible light source.
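A minimal sketch of that data-synthesis idea (function and parameter names are ours, not from the paper's code; we assume linear-space float images in [0, 1]):

```python
import numpy as np

def relight(img_on: np.ndarray, img_off: np.ndarray,
            intensity: float = 1.0,
            color: tuple = (1.0, 1.0, 1.0),
            ambient: float = 1.0) -> np.ndarray:
    """Synthesize a relit image from one paired capture.

    img_on / img_off are HxWx3 images in linear (RAW) space with the
    target light switched on and off, respectively.
    """
    # By the linearity of light transport, subtracting the two
    # captures isolates the target light's contribution.
    light = np.clip(img_on - img_off, 0.0, None)

    # img_off alone is the ambient illumination; rescale the ambient
    # and target intensities independently, and tint the target light
    # with a per-channel RGB multiplier.
    relit = ambient * img_off + intensity * np.asarray(color) * light
    return np.clip(relit, 0.0, 1.0)

# Example: dim the target light to 30% and give it a warm tint.
# relit = relight(img_on, img_off, intensity=0.3, color=(1.0, 0.8, 0.6))
```

Sweeping the intensity, tint, and ambient parameters over one captured pair yields many training pairs depicting controlled light changes, which is what makes a small set of real captures go a long way.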
We also study how supplementing this small set with synthetic renders affects the trained model's results.
Personally, I found the quality of the results surprising given the simplicity of the method and the relatively low-diversity dataset, which is used in a clever way.
Project page: https://nadmag.github.io/LightLab/
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- GaSLight: Gaussian Splats for Spatially-Varying Lighting in HDR (2025)
- PRISM: A Unified Framework for Photorealistic Reconstruction and Intrinsic Scene Modeling (2025)
- MirrorVerse: Pushing Diffusion Models to Realistically Reflect the World (2025)
- Diffusion-based G-buffer generation and rendering (2025)
- IntrinsiX: High-Quality PBR Generation using Image Priors (2025)
- Beyond Reconstruction: A Physics Based Neural Deferred Shader for Photo-realistic Rendering (2025)
- Controllable Weather Synthesis and Removal with Video Diffusion Models (2025)