Explore In-Context Segmentation via Latent Diffusion Models

¹Peking University  ²Skywork AI  ³Nanyang Technological University  ⁴Fudan University  ⁵Wuhan University  ⁶Zhejiang University

We propose LDIS, a minimalist latent diffusion model (LDM) framework for in-context segmentation.

Abstract

In-context segmentation has drawn increasing attention with the advent of vision foundation models. Its goal is to segment objects by following given reference images. Most existing approaches adopt metric learning or masked image modeling to build the correlation between visual prompts and input image queries. This work approaches the problem from a fresh perspective: unlocking the capability of the latent diffusion model (LDM) for in-context segmentation and investigating different design choices. Specifically, we examine the problem from three angles: instruction extraction, output alignment, and meta-architectures. We design a two-stage masking strategy to prevent interfering information from leaking into the instructions. In addition, we propose an augmented pseudo-masking target to ensure that the model predicts without forgetting the original images. Moreover, we build a new and fair in-context segmentation benchmark that covers both image and video datasets. Experiments validate the effectiveness of our approach, with results comparable to or even stronger than those of previous specialist models and vision foundation models. We hope our work inspires others to rethink the unification of segmentation and generation.
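To make the two-stage masking strategy concrete, the sketch below shows one way interfering content could be kept out of the instructions: the reference image is masked once in pixel space before encoding, and the resulting latent is masked again with a downsampled mask. This is a minimal illustration under assumed names (two_stage_masking, vae_encoder) and an assumed 8x latent downscale, not the released implementation.

import torch
import torch.nn.functional as F

def two_stage_masking(ref_image, ref_mask, vae_encoder, downscale=8):
    # Hypothetical sketch of a two-stage masking strategy.
    # Stage 1: pixel-space masking, so background content never
    # reaches the encoder.
    masked_image = ref_image * ref_mask          # (B, 3, H, W) * (B, 1, H, W)
    latent = vae_encoder(masked_image)           # (B, C, H/8, W/8), assumed encoder

    # Stage 2: latent-space masking with a downsampled mask, to
    # suppress features that leak across the mask boundary.
    latent_mask = F.interpolate(ref_mask, scale_factor=1.0 / downscale,
                                mode="nearest")
    instruction = latent * latent_mask           # in-context instruction
    return instruction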

LDIS Framework

LDIS operates as a minimalist framework and generates segmentation masks under the guidance of in-context instructions. We instantiate it in two variants, which differ in input formulation, denoising time steps, and optimization target.
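As a rough illustration of how an LDM could generate a mask under in-context guidance, here is a hedged sketch of the iterative-denoising variant, written against a diffusers-style UNet and scheduler interface. The channel-wise concatenation of the noisy latent with the query latent, the final thresholding into a binary mask, and every name below are assumptions for illustration, not the actual LDIS code.

import torch

@torch.no_grad()
def generate_mask(unet, vae_decode, query_latent, instruction,
                  scheduler, num_steps):
    # The two variants would differ here in (i) how query_latent and
    # instruction enter the UNet, (ii) the number of denoising steps,
    # and (iii) whether the decoded target is a plain mask or an
    # augmented pseudo mask.
    scheduler.set_timesteps(num_steps)
    latent = torch.randn_like(query_latent)      # start from pure noise
    for t in scheduler.timesteps:
        # Input formulation: concatenate along channels (assumes the
        # UNet was built with doubled input channels) and condition on
        # the instruction tokens.
        model_in = torch.cat([latent, query_latent], dim=1)
        noise_pred = unet(model_in, t,
                          encoder_hidden_states=instruction).sample
        latent = scheduler.step(noise_pred, t, latent).prev_sample
    # Decode the final latent and threshold it into a binary mask.
    mask_logits = vae_decode(latent)
    return (mask_logits.mean(dim=1, keepdim=True) > 0).float()

With num_steps=1 this collapses toward a one-shot prediction, reflecting the time-step axis along which the two variants differ.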

Visual Results

Visual results on several challenging cases.

Visualizations at different time steps.

BibTeX

@article{wang2024explore,
  title={Explore In-Context Segmentation via Latent Diffusion Models},
  author={Wang, Chaoyang and Li, Xiangtai and Ding, Henghui and Qi, Lu and Zhang, Jiangning and Tong, Yunhai and Loy, Chen Change and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2403.09616},
  year={2024}
}