Realistic shadow generation is a critical component of high-quality image compositing and visual effects, yet existing methods suffer from notable limitations: physics-based approaches require 3D scene geometry, which is often unavailable, while learning-based techniques struggle with controllability and visual artifacts. We introduce a novel method for fast, controllable, and background-free shadow generation for 2D object images. Using a 3D rendering engine, we create a large synthetic dataset of shadow maps under diverse light source parameters and use it to train a diffusion model for controllable shadow generation. Through extensive ablation studies, we find that the rectified flow objective achieves high-quality results with just a single sampling step, enabling real-time applications. Furthermore, our experiments demonstrate that the model generalizes well to real-world images. To facilitate further research on evaluating quality and controllability in shadow generation, we release a new public benchmark containing a diverse set of object images and shadow maps in various settings.
We utilize synthetic data to train our novel single-step, background-free, and controllable shadow generation diffusion model. When creating our synthetic dataset, we use a spherical coordinate system to strategically position the camera, the 3D model, and the light source. To ensure shadow controllability, we integrate the light parameters S=(θ, φ, s) into the denoising network: θ and φ are the polar and azimuthal angles of the light source, respectively, while s is the size of the light source, which controls the shadow softness. Our model is trained with rectified flow to enable a single sampling step at inference time. The image on the right illustrates the complete inference pipeline.
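To make the two ingredients above concrete, here is a minimal sketch of (a) converting the light parameters (θ, φ) to a Cartesian light position for the renderer, and (b) single-step rectified-flow sampling, where one Euler step from noise suffices because the learned velocity field is near-straight. All function names, signatures, and the placeholder `velocity_model` are illustrative assumptions, not the authors' code.

```python
import math
import numpy as np

def light_position(theta_deg, phi_deg, radius=1.0):
    """Convert the light's spherical coordinates (polar angle theta,
    azimuthal angle phi, in degrees) to a Cartesian position, as one
    would when placing the light source in a synthetic renderer."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    return np.array([
        radius * math.sin(theta) * math.cos(phi),
        radius * math.sin(theta) * math.sin(phi),
        radius * math.cos(theta),
    ])

def sample_shadow_map(velocity_model, object_image, light_params, shape):
    """One-step rectified-flow sampling: the model predicts a velocity
    v(x_t, t), and a single Euler step of size 1 carries Gaussian noise
    (t=0) to the shadow map (t=1). `velocity_model` is a stand-in for
    the trained denoising network conditioned on S=(theta, phi, s)."""
    x0 = np.random.randn(*shape)                              # noise at t=0
    v = velocity_model(x0, object_image, light_params, t=0.0)  # predicted velocity
    return x0 + v                                              # x1 = x0 + 1 * v
```

The single Euler step is what makes the method real-time: no iterative denoising loop is needed at inference.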
Qualitative results on real images are shown here.
Some example renders from our new public benchmark are shown here.
We fix θ, s, I, and vary φ.
We fix φ, s, I, and vary θ.
We fix θ and φ and vary s and I.
We vary θ, φ, s, and fix I.
With no existing dataset available to evaluate our pipeline's performance, we create a new benchmark specifically for this task and make it publicly accessible. Our test set comprises three tracks, each carefully designed to assess the model's performance in controlling shadow softness, horizontal direction, and vertical direction, respectively. We create the samples for each track as follows:
Renders for two 3D meshes from the softness control track. θ=30° and φ=0°.
Renders for one 3D mesh from the horizontal shadow direction control track. θ=30° and s=2.
Renders for two 3D meshes from the vertical shadow direction control track. φ=0° and s=2.
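Each track can be viewed as a parameter sweep in which all light parameters except one are held fixed. The sketch below encodes that structure; the fixed values (θ=30, φ=0, s=2) come from the captions above, while the swept grids are placeholder assumptions, not the benchmark's actual values.

```python
# Illustrative parameter sweeps for the three benchmark tracks.
# Fixed values (theta=30, phi=0, s=2) follow the captions above;
# the swept grids below are assumed for illustration only.
softness_track = [
    {"theta": 30, "phi": 0, "s": s} for s in (1, 2, 4, 8)      # vary softness
]
horizontal_track = [
    {"theta": 30, "phi": phi, "s": 2} for phi in range(0, 360, 45)  # vary azimuth
]
vertical_track = [
    {"theta": theta, "phi": 0, "s": 2} for theta in (15, 30, 45, 60)  # vary polar angle
]
```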
@misc{tasar2024controllable,
  title={Controllable Shadow Generation with Single-Step Diffusion Models from Synthetic Data},
  author={Tasar, Onur and Chadebec, Clement and Aubin, Benjamin},
  year={2024},
  eprint={2412.11972},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}