Research on Generative Design of Car Side Colour Rendering Based on Generative Adversarial Networks

  • Conference paper
  • First Online:
HCI International 2022 – Late Breaking Papers: Ergonomics and Product Design (HCII 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13522)

Abstract

The traditional manual colour rendering process often relies on a designer's limited personal experience, so the design results are often uncertain. To help designers find inspiration for colour rendering, a multimodal generation method for car side colour rendering schemes was proposed, built on MUNIT, a multimodal unsupervised image-to-image translation framework based on generative adversarial networks. First, a colour rendering inspiration dataset of hand-drawn colour pictures of the car side was constructed using image crawling and batch image collection tools. Next, these hand-drawn images were processed with image stylisation techniques and pre-trained deep learning models to build the design object dataset of car side line drawings. The colour rendering generation experiment was then conducted with the MUNIT framework, and the generated images were evaluated quantitatively and qualitatively to select the best iteration of the model. Finally, integrating the experimental results, Analogist, an intelligent generative design system for car colour rendering, was designed. The results show that the proposed method can generate multiple colour rendering schemes from line drawings of the car side through an image-to-image translation approach, and that Analogist can assist designers in stimulating design inspiration and improving design efficiency in the colour rendering of car styling.
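
The abstract describes turning the hand-drawn colour renderings into line drawings to form the design object dataset. The exact pipeline (stylisation processing plus pre-trained models) is not detailed in the abstract, so the sketch below only illustrates the general preprocessing idea using plain OpenCV edge detection as a stand-in; the file names are placeholders.

```python
# Minimal sketch: derive a line drawing from a colour rendering with OpenCV.
# The authors report using image stylisation plus a pre-trained deep model;
# Canny edge detection is used here only as a simple stand-in.
import cv2


def colour_to_line_drawing(src_path: str, dst_path: str) -> None:
    """Convert a colour rendering into a black-on-white line drawing."""
    image = cv2.imread(src_path)                      # BGR colour image
    grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # drop colour information
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)       # suppress shading noise
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    line_drawing = cv2.bitwise_not(edges)             # black strokes on white
    cv2.imwrite(dst_path, line_drawing)


if __name__ == "__main__":
    colour_to_line_drawing("car_side_render.png", "car_side_lines.png")
```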
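MUNIT factorises an image into a domain-invariant content code and a domain-specific style code, so one line drawing can be recoloured in many ways by resampling the style code. The PyTorch sketch below shows only that multimodal sampling step; `content_encoder` and `decoder` are hypothetical handles standing in for the corresponding modules of a trained MUNIT model, and the style dimension of 8 follows the MUNIT paper's default rather than any value reported here.

```python
# Sketch of MUNIT-style multimodal inference, assuming the content encoder and
# decoder of a trained model are available as function arguments (hypothetical
# handles; the official MUNIT code wraps these modules differently).
import torch


@torch.no_grad()
def sample_colourings(line_drawing: torch.Tensor,
                      content_encoder: torch.nn.Module,
                      decoder: torch.nn.Module,
                      style_dim: int = 8,
                      n_samples: int = 5) -> list[torch.Tensor]:
    """Return several candidate colour renderings for one line drawing.

    line_drawing: 1 x C x H x W tensor in the value range the model expects.
    """
    content = content_encoder(line_drawing)       # domain-invariant content code
    outputs = []
    for _ in range(n_samples):
        style = torch.randn(1, style_dim, 1, 1)   # style code sampled from N(0, I)
        outputs.append(decoder(content, style))   # one colouring per style code
    return outputs
```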
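The abstract states that generated images were evaluated quantitatively to select the best iteration of the model, without naming the metric. A common quantitative choice for GAN outputs is the Fréchet Inception Distance; the sketch below computes it from pre-extracted Inception feature matrices (the feature extraction step is assumed and not shown, and the metric itself is an illustrative assumption).

```python
# Sketch of the Fréchet Inception Distance between real and generated images,
# given feature matrices of shape (n_samples, n_features).
import numpy as np
from scipy.linalg import sqrtm


def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    cov_mean = sqrtm(cov_r @ cov_f)               # matrix square root of the product
    if np.iscomplexobj(cov_mean):                 # discard small imaginary noise
        cov_mean = cov_mean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_mean))
```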

Acknowledgements

This study was partly supported by the National Natural Science Foundation of China (No. 51905175), the second Batch of 2020 MOE of PRC Industry-University Collaborative Education Program (Program No. 202101042012, Kingfar-CES “Human Factors and Ergonomics” Program), Shanghai Pujiang Talent Program (No. 2019PJC021), the Shanghai Soft Science Key Project (No. 21692196800) and the Smart Travel Art Design Innovation Laboratory (No. 20212679).

Author information

Corresponding author

Correspondence to Yumiao Chen.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Ji, Y., Chen, Y. (2022). Research on Generative Design of Car Side Colour Rendering Based on Generative Adversarial Networks. In: Duffy, V.G., Rau, PL.P. (eds) HCI International 2022 – Late Breaking Papers: Ergonomics and Product Design. HCII 2022. Lecture Notes in Computer Science, vol 13522. Springer, Cham. https://doi.org/10.1007/978-3-031-21704-3_28

  • DOI: https://doi.org/10.1007/978-3-031-21704-3_28

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21703-6

  • Online ISBN: 978-3-031-21704-3

  • eBook Packages: Computer Science, Computer Science (R0)
