
CoShadow: Multi-Object Shadow Generation for Image Compositing via Diffusion Model

Authors

Waqas Ahmed, Dean Diepeveen, Ferdous Sohel

Abstract

Realistic shadow generation is crucial for achieving seamless image compositing, yet existing methods primarily focus on single-object insertion and often fail to generalize when multiple foreground objects are composited into a background scene. In practice, however, modern compositing pipelines and real-world applications often insert multiple objects simultaneously, necessitating shadows that are jointly consistent in terms of geometry, attachment, and location. In this paper, we address the under-explored problem of multi-object shadow generation, aiming to synthesize physically plausible shadows for multiple inserted objects. Our approach exploits the multimodal capabilities of a pre-trained text-to-image diffusion model. An image pathway injects dense, multi-scale features to provide fine-grained spatial guidance, while a text-based pathway encodes per-object shadow bounding boxes as learned positional tokens and fuses them via cross-attention. An attention-alignment loss further grounds these tokens to their corresponding shadow regions. To support this task, we augment the DESOBAv2 dataset by constructing composite scenes with multiple inserted objects and automatically derive prompts combining object category and shadow positioning information. Experimental results demonstrate that our method achieves state-of-the-art performance in both single and multi-object shadow generation settings.
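The abstract does not give implementation details, but the two mechanisms it names can be illustrated with a toy sketch: per-object shadow bounding boxes projected into token embeddings, cross-attention from spatial queries to those tokens, and an alignment loss that grounds each token's attention map to its own box. Everything below (function names, shapes, the linear projection standing in for learned weights) is an assumption for illustration, not the paper's actual architecture.

```python
import numpy as np

def bbox_to_tokens(bboxes, W_proj):
    """Map normalized (x1, y1, x2, y2) shadow boxes to embedding-space tokens.
    W_proj is a (4, d) matrix standing in for a learned projection; in the
    paper these positional tokens would be trained with the diffusion model."""
    return np.asarray(bboxes, dtype=float) @ W_proj          # (n_obj, d)

def cross_attention(queries, tokens):
    """Toy scaled dot-product cross-attention: each spatial query attends to
    the per-object positional tokens; returns (n_q, n_obj) weights."""
    d = tokens.shape[1]
    logits = queries @ tokens.T / np.sqrt(d)
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

def attention_alignment_loss(attn_maps, bbox_masks):
    """attn_maps: (n_obj, H, W), each map summing to 1 over the spatial grid.
    bbox_masks: (n_obj, H, W) binary masks of the shadow bounding boxes.
    Penalizes attention mass falling outside each token's own box."""
    inside = (attn_maps * bbox_masks).sum(axis=(1, 2))
    return float(np.mean(1.0 - inside))

# Usage: one object whose shadow box covers the top half of a 4x4 grid.
H = W = 4
mask = np.zeros((1, H, W))
mask[0, :2, :] = 1.0
aligned = mask / mask.sum()          # all attention inside the box -> loss 0
print(attention_alignment_loss(aligned, mask))
```

The loss is bounded in [0, 1]: it is 0 when a token's attention mass lies entirely inside its shadow box and 1 when none of it does, which is one simple way such a grounding term could behave.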

Metadata

arXiv ID: 2603.02743
Provider: ARXIV
Primary Category: cs.CV
Published: 2026-03-03
Fetched: 2026-03-04 03:41
