Abstract
Large text-to-image models enable high-quality, diverse image synthesis from a given text prompt. However, many scenarios require that content creation be controllable. Recent methods add image-level controls, e.g., edge and depth maps, alongside text prompts to steer the generation process toward desired images. In this work, we propose decoupling control to disentangle one or multiple objects, and each object’s shape and appearance, in a given reference set, while synthesizing novel renditions of those objects and rearranging them in different contexts. Given a set of images as input, we establish mappings between each target’s appearance and a distinct “circle” by fine-tuning a pretrained text-to-image model. We then control the local positions of the different “circles”, and thereby decouple multiple targets, through a novel local feature loss. Extensive experiments demonstrate that our model can disentangle individual objects and translate them within a scene, and that it supports arbitrary combinations of multiple targets while keeping each target’s appearance consistent.
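The abstract describes the method only at a high level; the paper’s actual loss is not reproduced here. Purely as an illustrative sketch, the snippet below shows one plausible shape of a “local feature loss” that ties each target’s features to its own “circle” region during fine-tuning. All names (`local_feature_loss`, `feat_maps`, `circle_masks`) and the inside/outside contrastive formulation are assumptions for illustration, not the authors’ definition.

```python
import torch
import torch.nn.functional as F

def local_feature_loss(feat_maps: torch.Tensor, circle_masks: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a local feature loss (not the paper's exact formula).

    feat_maps:    (B, C, H, W) intermediate features from the denoising network.
    circle_masks: (B, K, H, W) binary masks, one per target "circle".
    Encourages each target's feature energy to concentrate inside its own
    circle region and stay out of the rest of the image.
    """
    B, K, H, W = circle_masks.shape
    loss = feat_maps.new_zeros(())
    for k in range(K):
        mask = circle_masks[:, k:k + 1]  # (B, 1, H, W), broadcast over channels
        # Mean feature magnitude inside vs. outside the k-th circle.
        inside = (feat_maps * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1.0)
        outside = (feat_maps * (1 - mask)).sum(dim=(2, 3)) / (1 - mask).sum(dim=(2, 3)).clamp(min=1.0)
        # Penalize feature energy leaking outside the assigned circle.
        loss = loss + F.relu(outside.mean() - inside.mean())
    return loss / K

# Toy usage with random tensors standing in for U-Net features and masks:
feats = torch.randn(2, 64, 32, 32)
masks = (torch.rand(2, 3, 32, 32) > 0.5).float()  # three hypothetical "circles"
print(local_feature_loss(feats, masks))
```

In practice, a term of this kind would be added with a small weight to the standard diffusion denoising objective during fine-tuning, so that appearance learning and spatial binding are optimized jointly.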
Cite this paper
Cao, S., Zhang, X., Wang, J., Zhou, X. (2024). Decoupling Control in Text-to-Image Diffusion Models. In: Huang, D.-S., Zhang, C., Zhang, Q. (eds.) Advanced Intelligent Computing Technology and Applications. ICIC 2024. Lecture Notes in Computer Science, vol. 14868. Springer, Singapore. https://doi.org/10.1007/978-981-97-5600-1_27