
Human authentication based on fusion of thermal and visible face images

  • Published in: Multimedia Tools and Applications

Abstract

In the recent past, a considerable amount of research has been devoted to improving the performance of face authentication systems in uncontrolled environments, such as under varying illumination. However, performance has not improved significantly, since visible face images are dependent on illumination. To overcome this limitation of visible face images, researchers have turned to infrared (IR) face images; however, these are not completely independent of illumination either. Fusion of visible and thermal face images is therefore an active alternative in the research community. In this work, a fusion method is introduced to fuse visible and IR images for face authentication. The proposed fusion method relies on the translation-invariant à-trous wavelet transform and the fractal dimension computed using the differential box-counting method. Five popular fusion metrics, namely the ratio of spatial frequency error, normalized mutual information, edge information, universal image quality index, and extended frequency comparison index, are used to quantitatively measure the effectiveness of the proposed fusion algorithm against four state-of-the-art methods. A new similarity measure is also proposed to measure how close a fused face image is to the others. All experiments are performed on three databases: the IRIS benchmark face database, the UGC-JU face database, and the SCface database. The results show that the proposed fusion method, together with the proposed similarity measure, outperforms all four state-of-the-art methods for face authentication in terms of accuracy, precision, and recall.
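The fractal-dimension component mentioned above can be illustrated with the classic differential box-counting (DBC) estimator. The sketch below is not the authors' exact variant (which uses an approximated box height); it is a minimal implementation of standard DBC, where the function name, the set of box sizes, and the assumption that each box size divides the image side are choices of this sketch:

```python
import numpy as np

def fractal_dimension_dbc(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a grayscale image with the
    classic differential box-counting (DBC) method.

    `img` is a square 2-D array of intensities in [0, 255] whose side
    M is divisible by every box size in `sizes` (a simplifying
    assumption of this sketch).
    """
    img = np.asarray(img, dtype=np.float64)
    M = img.shape[0]
    G = 256.0                        # number of gray levels
    log_inv_r, log_N = [], []
    for s in sizes:
        r = s / M                    # scale ratio of this box size
        h = G * s / M                # box height along the intensity axis
        # partition the image into an (M//s, M//s) grid of s-by-s blocks
        blocks = img.reshape(M // s, s, M // s, s)
        mx = blocks.max(axis=(1, 3))
        mn = blocks.min(axis=(1, 3))
        # boxes needed to cover the intensity surface over each block
        n_r = np.ceil((mx + 1) / h) - np.ceil((mn + 1) / h) + 1
        log_inv_r.append(np.log(1.0 / r))
        log_N.append(np.log(n_r.sum()))
    # the fractal dimension is the slope of log N_r versus log (1/r)
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return slope
```

For a constant image the intensity surface is flat, so the estimate is 2; for a heavily textured or noisy image the estimate moves toward 3, which is what makes the measure useful as a texture descriptor.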



Acknowledgments

Ayan Seal thanks Media Lab Asia, Ministry of Electronics and Information Technology, Government of India for providing a young faculty research fellowship. Portions of the research in this paper use the SCface database of facial images. Credit is hereby given to the University of Zagreb, Faculty of Electrical Engineering and Computing for providing the database of facial images. We thank the anonymous reviewers for their many insightful comments and suggestions.

Author information

Correspondence to Ayan Seal.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Seal, A., Panigrahy, C. Human authentication based on fusion of thermal and visible face images. Multimed Tools Appl 78, 30373–30395 (2019). https://doi.org/10.1007/s11042-019-7701-6
