Elsevier

Expert Systems with Applications

Volume 64, 1 December 2016, Pages 618-632

Finger-vein recognition based on dual-sliding window localization and pseudo-elliptical transformer

https://doi.org/10.1016/j.eswa.2016.08.031

Highlights

  • A new dual-sliding window model is designed to locate the phalangeal joint.

  • A new pseudo-elliptical sampling model is proposed to transform the finger-vein image.

  • The first model is robust to light illumination and guarantees stable ROI extraction.

  • The second model reduces the image differences of the same finger across different captures.

  • These models were tested on three databases: SDUMLA-HMT, THU-FVFDT3, and FV_USM.

Abstract

Uneven illumination occurs during finger imaging because of several factors, including the position and posture of the finger, the uniformity of near-infrared light, and the influence of ambient light. Existing phalangeal joint locating methods are sensitive to light illumination and cannot locate the phalangeal joint stably. In this study, we propose a dual-sliding window model to accurately detect the position of the phalangeal joint in the finger-vein image; the model is robust to light illumination and extracts a more stable region of interest. Planar imaging generates different finger-vein images of the same finger at different acquisitions because of spatial rotation of the finger. Thus, a pseudo-elliptical sampling model is proposed to retain the spatial distribution of vein patterns, reduce redundant information in finger images, and reduce these differences. Finally, two-dimensional principal component analysis is used to project the transformed image for feature extraction, and the Euclidean distance is calculated to measure the similarity between test and training samples. Experiments on three different databases show that the proposed method is effective and reliable and improves the performance of a finger-vein identification system.

Introduction

Personal identification technology using vein patterns, such as palm-vein and finger-vein patterns, has been the focus of increasing attention (Khellat-Kihel et al., 2016; Kumar & Zhou, 2011; Matsuda et al., 2016; Sierro et al., 2015; Xie et al., 2015). Compared with traditional biometric technologies, vein patterns offer contactless acquisition, live-body identification, and high security. The finger-vein capture device can also be miniaturized and convenient, which enables broad application of this biometric technology.

A typical finger-vein identification system consists of four steps, including image capturing, preprocessing (region of interest (ROI) and image enhancement), feature extracting, and matching. Among these steps, ROI extraction plays a critical role in an automatic finger-vein identification system, which directly affects the accuracy of finger-vein recognition (Kumar & Zhou, 2011; Yang et al., 2015, Yu et al., 2013).

However, many factors influence the quality and consistency of the finger-vein image, such as the position and posture of the finger, the uniformity of illumination under near-infrared (NIR) light, and the influence of ambient light in the natural environment. Based on our experience with palm-vein image acquisition devices (Liu et al., 2015; Zhou et al., 2014), these factors may lead to uneven illumination. The ROI method should be robust and insensitive to these factors. For convenience, uneven illumination caused by these factors is collectively referred to as light interference in this study.

Several finger-vein ROI extraction methods have been proposed in the literature (Asaari et al., 2014; Raghavendra & Busch, 2015; Yang & Shi, 2012; Yang, Yang, Yin, & Xiao, 2013; Yang et al., 2015; Yu et al., 2013; Zuo et al., 2013). The existing methods can be classified into two types. The first is device positioning, which constrains the placement of the user's fingertip and finger-root through the capture device; the middle region of the finger is then cropped and used as the ROI image (Vlachos & Dermatas, 2015; Yang et al., 2015). This type is not considered in this study because it relies heavily on the finger-vein capture device and limits the practicality of the recognition system. By contrast, the ROI method in this paper (introduced in the next paragraph) does not rely heavily on the capture device and provides more flexibility to the user.

The other type is algorithm positioning, which obtains the ROI area algorithmically. The more effective methods are fingertip or finger-root location (Asaari et al., 2014; Yang et al., 2015; Zuo et al., 2013) and phalangeal joint location (Yang & Shi, 2012; Yang et al., 2013; Yu et al., 2013). Fingertip or finger-root location requires that the entire finger, including the fingertip and finger-root, be acquired, which limits its use. Thus, phalangeal joint location is more common and effective.

Most methods are sensitive to finger position variation (finger plane shift and rotation, as shown in Figs. 1(b) and (c)) in practice. Yang and Shi (2012) proposed an ROI localization method based on the phalangeal joint of human fingers to overcome this problem. A predefined window with a fixed height and width is used to locate a sub-region in the finger-vein imaging plane. Then, the gray values of each image column are accumulated within the sub-region. Finally, the column with the maximum gray-value sum is taken to approximate the position of the distal inter-phalangeal joint (hereinafter referred to as the phalangeal joint). However, because of the effect of light illumination, the phalangeal joint may not always lie in a column with higher gray values (Yang et al., 2013). Yang et al. (2013) improved the detection method of Yang and Shi (2012) by sliding a window of fixed size column by column from left to right across the key area of the finger-vein image and computing the sum of gray values within the window. The position of the window with the maximum gray-value sum is taken as the phalangeal joint. Summing the gray values in a sliding window is equivalent to smoothing the gray cumulative curve and is therefore more effective than the method of Yang and Shi (2012). However, both methods are susceptible to light illumination (gray-value difference, as shown in Fig. 1(a)) and cannot obtain a stable position of the phalangeal joint under light interference. This study proposes a novel phalangeal joint localization method based on a dual-sliding window, which effectively resists light interference and therefore yields a stable ROI extraction area.
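The single sliding-window localization of Yang et al. (2013) described above can be sketched as follows. This is an illustrative reimplementation under our own assumptions (the window width `win_w` and full-height window are placeholders; the paper's exact parameters are not reproduced here):

```python
import numpy as np

def locate_joint_sliding_window(img, win_w=10):
    """Estimate the phalangeal joint column by sliding a fixed-width
    window across the image and picking the window with the maximum
    sum of gray values (sketch of the Yang et al., 2013 idea)."""
    h, w = img.shape
    # Column sums plus a cumulative sum make each window sum an O(1) lookup.
    col_sums = img.sum(axis=0).astype(np.float64)
    csum = np.concatenate(([0.0], np.cumsum(col_sums)))
    best_x, best_val = 0, -np.inf
    for x in range(0, w - win_w + 1):
        val = csum[x + win_w] - csum[x]  # total gray value in window [x, x+win_w)
        if val > best_val:
            best_val, best_x = val, x
    # The joint is taken as the center column of the brightest window.
    return best_x + win_w // 2
```

Because the joint region transmits more NIR light, the brightest window tends to straddle it; as the text notes, a single global gray-value criterion remains vulnerable to uneven illumination, which is what the dual-sliding window is designed to address.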

The vein patterns in human fingers are distributed in three-dimensional (3D) space but are imaged onto a plane by a camera under NIR illumination. The finger-vein image is therefore a two-dimensional (2D) plane image, in which the distribution of the finger-vein pattern is flattened. If the finger undergoes spatial rotation while being captured, different plane finger-vein images of the same finger are generated at different acquisitions, as shown in Fig. 1(d). The larger the rotation angle, the larger the difference, and the lower the recognition accuracy. Several methods transform the 2D plane image to preserve the spatial distribution of the finger-vein pattern and to reduce the difference caused by spatial rotation of the finger. Huang, Dai, Li, Tang, and Li (2010) proposed a pattern normalization model to correct the distortion caused by the position of the finger. However, the nearest-neighbor sampling in Huang et al. (2010) may sample the same pixel repeatedly, increasing redundant information and decreasing feature extraction speed. Huang, Liu, and Li (2012) proposed a method to reconstruct a 3D normalized finger model from 2D images, which maps the 2D finger-vein images into a new 2D coordinate system; the similarity between probe and gallery images is then determined by calculating the Hamming distance. However, the reconstruction method in Huang et al. (2012) is complex, requires several hardware parameters of the capture device (such as the view of the camera or the distance from the lens focus to the image plane), and produces considerable redundant information during normalization of the vein image.

In addition, other biometric systems, such as fingerprint and face recognition, have used elliptical or sampling models to overcome 2D distortion (An & Chung, 2008; Zhao et al., 2011). An and Chung (2008) used an elliptical model to estimate the current pose of a face for registration with the canonical frontal-view image. However, this approach is difficult to apply to finger-vein recognition because the rotation angle of the finger is hard to determine. For fingerprint recognition, Zhao et al. (2011) proposed 3D sampling techniques, including direct sampling, a cylinder model, and a tube model: each slice of the fingerprint is mapped to a 2D plane, yielding a 2D equivalent fingerprint image. This method incorporates distortion into the unrolling process and improves system performance compared with traditional contact-based 2D fingerprints.

Thus, we propose a pseudo-elliptical sampling model to transform the enhanced finger-vein image. Transformation can restore the space distribution of the finger-vein pattern and benefit the feature extraction process. Consequently, the recognition accuracy is improved.

The flowchart of the finger-vein identification approach is shown in Fig. 2. First, we segment the finger region from the background using finger contour extraction (Fig. 3(b)), correct the image by estimating the rotational angle of the finger (Fig. 3(c)) following Yu et al. (2013), and resize it to a normalized size of 128 × 320 using bicubic interpolation (Gonzalez, Woods, & Eddins, 2007), as shown in Fig. 3(d). Then, a dual-sliding window is used to detect the position of the phalangeal joint and to extract a stable ROI. After enhancement, a pseudo-elliptical sampling model transforms the enhanced image to restore the spatial information of the finger-vein. Finally, a feature matrix is obtained by projecting the transformed image with two-dimensional principal component analysis (2DPCA), and matching is performed by calculating the Euclidean distance between the feature matrices of the test and training samples.
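The final feature extraction and matching steps (2DPCA projection followed by a Euclidean distance between feature matrices) can be sketched as below. This is a generic 2DPCA implementation, not the paper's code; the number of projection axes `d` and the column-wise distance aggregation are our own assumptions:

```python
import numpy as np

def train_2dpca(images, d=5):
    """Fit 2DPCA: eigenvectors of the image scatter matrix.
    `images` has shape (n, h, w); returns a (w, d) projection matrix."""
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean
        G += D.T @ D          # accumulate the image covariance (scatter) matrix
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)   # eigh returns ascending eigenvalues
    return vecs[:, ::-1][:, :d]      # keep the top-d projection axes

def project(A, X):
    """Project an (h, w) image onto the 2DPCA axes -> (h, d) feature matrix."""
    return A @ X

def feature_distance(Y1, Y2):
    """Sum of Euclidean distances between corresponding feature columns."""
    return np.linalg.norm(Y1 - Y2, axis=0).sum()
```

A test sample would then be assigned to the training identity whose feature matrix minimizes `feature_distance`.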

The main contributions of this study are as follows:

  • (1)

    A new dual-sliding window model is designed to detect the position of the phalangeal joint correctly and stably. This model is robust to light illumination, exhibits better performance compared with previous phalangeal joint location methods presented in the literature, and guarantees the stability of ROI extraction.

  • (2)

    A new pseudo-elliptical sampling model is proposed to transform the finger-vein image and, to a certain extent, reduce the image differences of the same finger at different acquisitions caused by the space rotation of the finger.

The rest of this paper is organized as follows: Section 2 discusses in detail the dual-sliding window used to locate the phalangeal joint. Section 3 introduces the enhancement method. Section 4 describes in detail the pseudo-elliptical sampling model. Section 5 presents and discusses the experimental results. Section 6 summarizes this study.

Section snippets

Dual-sliding window model

In the finger-vein image, the phalangeal joint of the finger provides useful information because it allows more light to penetrate when an NIR LED array is placed over the finger, so a brighter region may exist in the image plane (Yang & Shi, 2012). As such, we can employ the high-brightness peak value of the gray image to detect the position of the phalangeal joint.

Enhancement

Because finger-vein images are often blurred and of low contrast due to light scattering in biological tissues (Gupta & Gupta, 2015; Syarif et al., 2016), effectively enhancing the venous regions is favorable for subsequent feature extraction. The enhancement method used in this study is briefly introduced as follows.

First, a hyperbolic tangent function is used instead of a sigmoidal function in Sinthanayothin, Boyce, Cook, and Williamson (1999) to enhance the contrast of the ROI. We
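A hyperbolic-tangent contrast stretch of the kind mentioned above can be sketched as follows. The `center` and `gain` parameters are illustrative placeholders, not values from the paper, and the mapping is a generic tanh stretch rather than the authors' exact formula:

```python
import numpy as np

def tanh_enhance(img, center=None, gain=0.02):
    """Contrast enhancement with a hyperbolic tangent transfer curve.
    Pixels near `center` are stretched apart; extremes saturate softly."""
    img = img.astype(np.float64)
    if center is None:
        center = img.mean()  # center the curve on the mean gray level
    # tanh maps (-inf, inf) -> (-1, 1); rescale to [0, 1], then to 8-bit range.
    out = 0.5 * (np.tanh(gain * (img - center)) + 1.0)
    return (out * 255).astype(np.uint8)
```

Compared with a sigmoidal curve, the tanh curve is symmetric about its center, which keeps mid-gray vein structures from being pushed toward one end of the range.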

Pseudo-elliptical sampling model

The finger-vein image acquisition and feature extraction processes are shown in Fig. 7. We assume that the finger is an ellipsoid whose cross-section is approximately an ellipse. When the image is collected by the camera under NIR light, a 2D plane vein image is obtained. Thus, the distribution of the finger-vein pattern is flattened, which not only loses part of the spatial information of the finger-vein but also contains useless redundant information. If the finger undergoes space rotation while
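One way to realize the elliptical-cross-section assumption above is to resample each image row at equal arc-length steps along the ellipse, so that pixels foreshortened near the finger silhouette are spread back out. The sketch below is our own illustration of that idea under stated assumptions (semi-axes `a`, `b` and the dense parameterization are placeholders); it is not the paper's exact pseudo-elliptical sampling model:

```python
import numpy as np

def elliptical_resample_row(row, a, b, n_out):
    """Resample one image row, treating its pixels as the planar projection
    of an elliptical cross-section with semi-axes a (width) and b (depth).
    Sampling at equal arc length compensates edge foreshortening."""
    w = len(row)
    # Parameterize the upper half-ellipse; column x relates to angle t by x = a*cos(t).
    t = np.linspace(np.pi, 0, 2048)
    x, y = a * np.cos(t), b * np.sin(t)
    ds = np.hypot(np.diff(x), np.diff(y))       # arc-length increments
    s = np.concatenate(([0.0], np.cumsum(ds)))  # cumulative arc length
    # Choose angles at equally spaced arc lengths, then map back to source columns.
    t_eq = np.interp(np.linspace(0, s[-1], n_out), s, t)
    cols = (a * np.cos(t_eq) + a) / (2 * a) * (w - 1)
    return np.interp(cols, np.arange(w), row)
```

Applying this row by row "unrolls" the flattened projection, so the same surface patch occupies roughly the same output columns even when the finger has rotated slightly about its long axis.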

Experimental results and discussion

Evaluation experiments are conducted on three different databases to verify the performance of the dual-sliding window model, the pseudo-elliptical sampling model, and the proposed identification method.

The first database is the SDUMLA-HMT Finger Vein Database (Yin, Liu, & Sun, 2011), in which 3816 images were acquired from 106 subjects, including 61 males and 45 females. The finger-vein images were repeatedly collected six times from each of the six fingers (index, middle, and ring fingers of

Conclusion

In this study, the dual-sliding window model for locating the phalangeal joint is proposed to extract a more stable ROI, which can overcome the influence of uneven illumination, to solve the problem of light sensitivity in previous phalangeal joint location methods presented in the literature. Furthermore, the pseudo-elliptical sampling model is proposed to transform the enhanced image and to retain the spatial properties and distribution characteristics of vein patterns in the finger

Author contributions

Conceived and designed the experiments: SRQ YQL YJZ. Performed the experiments: SRQ YQL. Analyzed the data: YQL SRQ YJZ JH. Contributed reagents/materials/analysis tools: YQL SRQ YJZ JH. Contributed to the writing of the manuscript: YQL SRQ YXN.

Conflicts of interest

The authors declare no conflict of interest.

Acknowledgments

The authors would like to sincerely thank Shandong University for providing the Finger Vein Database (SDUMLA-HMT), the Finger Vein and Finger Dorsal Texture Database 3 collected by Tsinghua University (THU-FVFDT3) used in this work, and Finger Vein USM Database (FV_USM) by Universiti Sains Malaysia.

This work was supported in part by the research project of Guangdong Province under Grant No. 2013B090500104 and the major scientific and technological projects of Guangdong Province under Grant No.

References (33)

  • B. Huang et al.

    Finger-vein authentication based on wide line detector and pattern normalization

  • B. Huang et al.

    A finger posture change correction method for finger-vein recognition

  • A. Kumar et al.

    Human identification using finger images

    IEEE Transactions on Image Processing

    (2011)
  • Y. Liu et al.

    Real-time locating method for palmvein image acquisition

    Image and graphics

    (2015)
  • Y. Matsuda et al.

    Finger-vein authentication based on deformation-tolerant feature-point matching

    Machine Vision and Applications

    (2016)
  • M. Peters et al.

    Finger length and distal finger extent patterns in humans

    American Journal of Physical Anthropology

    (2002)
    1 Co-first authors
