
Exploiting Exif Data to Improve Image Classification Using Convolutional Neural Networks

  • Conference paper
  • First Online:
Image Analysis and Processing – ICIAP 2023 (ICIAP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14233)


Abstract

In addition to photo data, many digital cameras and smartphones capture Exif metadata, which contains information about the camera parameters in effect when a photo was taken. While most semantic image recognition approaches use only pixel data for classification decisions, this work examines whether Exif data can improve image classification performed by Convolutional Neural Networks (CNNs). We compare the classification performance and training time of fusion models that use both image data and Exif metadata with those of models that use image data alone. The most promising fusion model increased the classification accuracy for the selected target concepts by 7.5% compared to the baseline, while the average total training time across all fusion models was reduced by 7.9%.
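
The fusion approach described above can be pictured as two input branches, a CNN branch over the image pixels and a small dense branch over numeric Exif features, whose representations are concatenated before a shared classification head. As an illustration only, the following is a minimal sketch in Python/Keras (assuming TensorFlow is installed); the backbone choice (MobileNetV2 here), the number of Exif features, the number of target concepts, and all layer sizes are placeholder assumptions, not the configuration evaluated in the paper.

    # Illustrative sketch of an image/Exif fusion classifier (not the paper's exact model).
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    NUM_EXIF_FEATURES = 8   # e.g. exposure time, ISO, aperture, focal length (assumed count)
    NUM_CLASSES = 10        # number of target concepts (assumed)

    # Image branch: a CNN backbone producing a pooled feature vector.
    image_in = layers.Input(shape=(224, 224, 3), name="image")
    backbone = tf.keras.applications.MobileNetV2(
        include_top=False, weights=None, input_shape=(224, 224, 3))
    x = layers.GlobalAveragePooling2D()(backbone(image_in))

    # Exif branch: a small MLP over normalized numeric Exif values.
    exif_in = layers.Input(shape=(NUM_EXIF_FEATURES,), name="exif")
    e = layers.Dense(64, activation="relu")(exif_in)
    e = layers.Dense(32, activation="relu")(e)

    # Fusion: concatenate both representations and classify.
    merged = layers.Concatenate()([x, e])
    merged = layers.Dense(128, activation="relu")(merged)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

    fusion_model = Model(inputs=[image_in, exif_in], outputs=out)
    fusion_model.compile(optimizer="adam",
                         loss="categorical_crossentropy",
                         metrics=["accuracy"])

Training such a model requires paired inputs, e.g. fusion_model.fit({"image": images, "exif": exif_features}, labels, ...), so each photo must be accompanied by its (normalized) Exif feature vector; how missing Exif tags are handled is a separate design choice.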


Notes

  1. https://www.flickr.com/.

  2. The dataset containing references to the Flickr images has been published with the DOI 10.48564/unibafd-vmm4f-m4y33. The source code associated with this publication has also been published with the DOI 10.48564/unibafd-q73v9-wz721. The corresponding GitHub repository can be accessed through the provided references.


Author information

Corresponding author

Correspondence to Martin Bullin.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lederer, R., Bullin, M., Henrich, A. (2023). Exploiting Exif Data to Improve Image Classification Using Convolutional Neural Networks. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14233. Springer, Cham. https://doi.org/10.1007/978-3-031-43148-7_40

  • DOI: https://doi.org/10.1007/978-3-031-43148-7_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43147-0

  • Online ISBN: 978-3-031-43148-7

  • eBook Packages: Computer Science, Computer Science (R0)
