Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting

Weili Zeng1, Yichao Yan1, Qi Zhu1, Zhuo Chen1, Pengzhi Chu1, Weiming Zhao1, Xiaokang Yang1
1Shanghai Jiao Tong University

Infusion accurately assimilates the customized concepts and adeptly generates imaginative compositions guided by textual descriptions.

Abstract

Text-to-image (T2I) customization aims to create images that embody specific visual concepts delineated in textual descriptions.

However, existing works still face a main challenge: concept overfitting. To tackle this challenge, we first analyze overfitting and categorize it into concept-agnostic overfitting, which undermines non-customized concept knowledge, and concept-specific overfitting, which confines customization to limited modalities, i.e., backgrounds, layouts, and styles. To evaluate the degree of overfitting, we further introduce two metrics, i.e., the Latent Fisher divergence and the Wasserstein metric, to measure the distribution changes of non-customized and customized concepts, respectively.

Drawing from this analysis, we propose Infusion, a T2I customization method that learns target concepts without being constrained by the limited modalities of the training images, while preserving non-customized knowledge. Notably, Infusion achieves this with remarkable efficiency, requiring a mere 11 KB of trained parameters. Extensive experiments also demonstrate that our approach outperforms state-of-the-art methods in both single- and multi-concept customized generation.
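
To make the two overfitting measures more concrete, the sketch below is a minimal illustration, not the authors' implementation. It assumes latent features are summarized by Gaussian statistics (mean and covariance) and that the pretrained and customized models expose score functions as callables; the function names gaussian_w2 and fisher_divergence are hypothetical.

import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2(mu1, cov1, mu2, cov2):
    # Closed-form 2-Wasserstein distance between Gaussians N(mu1, cov1) and N(mu2, cov2):
    # W2^2 = ||mu1 - mu2||^2 + Tr(cov1 + cov2 - 2 (cov1^{1/2} cov2 cov1^{1/2})^{1/2})
    mean_term = float(np.sum((mu1 - mu2) ** 2))
    root1 = sqrtm(cov1)
    cross = sqrtm(root1 @ cov2 @ root1)
    cov_term = float(np.trace(cov1 + cov2 - 2.0 * np.real(cross)))
    return float(np.sqrt(max(mean_term + cov_term, 0.0)))

def fisher_divergence(score_p, score_q, samples):
    # Monte-Carlo estimate of E_x || s_p(x) - s_q(x) ||^2, where s_p and s_q are
    # score functions (gradients of log-density), e.g. the pretrained vs. the
    # fine-tuned diffusion model evaluated on the same latents (assumption).
    diffs = [np.sum((score_p(x) - score_q(x)) ** 2) for x in samples]
    return float(np.mean(diffs))

Under these assumptions, comparing the two models' scores on non-customized prompts would gauge concept-agnostic overfitting, while comparing Gaussian statistics of customized-concept generations against those of the training images would gauge concept-specific overfitting; the paper's exact estimators may differ.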

Video


BibTeX

@misc{zeng2024infusion,
      title={Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting}, 
      author={Weili Zeng and Yichao Yan and Qi Zhu and Zhuo Chen and Pengzhi Chu and Weiming Zhao and Xiaokang Yang},
      year={2024},
      eprint={2404.14007},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}