
Lightweight Attention-CycleGAN for Nighttime-Daytime Image Transformation

  • Conference paper
  • Artificial Intelligence Security and Privacy (AIS&P 2024)

Abstract

With the rapid development of deep learning in computer vision, the performance of core vision tasks such as image recognition has improved significantly. In nighttime environments, where low-light conditions reduce visibility, cross-domain transformation of nighttime images using a Generative Adversarial Network (GAN) can effectively improve the accuracy of nighttime recognition models. However, existing GAN models are difficult to deploy effectively on resource-constrained devices because of their high storage and computational requirements. To this end, this paper proposes a shared attention network that integrates an attention mechanism into the CycleGAN structure, and designs an online knowledge distillation method to compress and optimize the model, yielding a lightweight model for nighttime-daytime cross-domain image transformation. Experimental results demonstrate that the proposed model achieves state-of-the-art performance on the nighttime-daytime image transformation task, which is of great significance for enabling edge devices to perform recognition at night.
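The two training objectives the abstract names can be illustrated with a minimal sketch: CycleGAN's cycle-consistency loss (night → day → night should reconstruct the original) plus an online-distillation term in which the lightweight student mimics the teacher's output during joint training. The generators `G`, `F` and the flat pixel-list "images" below are purely illustrative assumptions, not the paper's actual attention-augmented networks.

```python
# Illustrative sketch of the two loss terms combined in the paper,
# assuming generators are plain functions on flat pixel lists.

def l1(a, b):
    """Mean absolute error between two flat 'images'."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, night, day, lam=10.0):
    """CycleGAN cycle loss: F(G(night)) should match night,
    and G(F(day)) should match day."""
    return lam * (l1(F(G(night)), night) + l1(G(F(day)), day))

def online_distillation_loss(student_out, teacher_out):
    """Online KD term: the compressed student is trained to match the
    teacher's translated output while both train together
    (hypothetical formulation for illustration)."""
    return l1(student_out, teacher_out)

# Toy usage: with identity "generators", the cycle loss vanishes.
G = F = lambda img: img
night = [0.1, 0.2, 0.3]
day = [0.8, 0.9, 1.0]
print(cycle_consistency_loss(G, F, night, day))  # 0.0
```

In practice both terms would be weighted and added to the adversarial losses of the two discriminators; the sketch only shows the structure of the objectives.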



Author information

Corresponding author

Correspondence to Zhili Zhou.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Huang, J., Xiao, X., Zhou, H., Yasin, A., Zhou, Z. (2025). Lightweight Attention-CycleGAN for Nighttime-Daytime Image Transformation. In: Zhang, F., Lin, W., Yan, H. (eds) Artificial Intelligence Security and Privacy. AIS&P 2024. Lecture Notes in Computer Science, vol 15399. Springer, Singapore. https://doi.org/10.1007/978-981-96-1148-5_13

  • DOI: https://doi.org/10.1007/978-981-96-1148-5_13

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-96-1147-8

  • Online ISBN: 978-981-96-1148-5

  • eBook Packages: Computer Science, Computer Science (R0)
