Palmprint authentication using a symbolic representation of images
Introduction
Automated personal authentication using biometric features has been widely studied during the last two decades. Previous research efforts have made it possible to apply biometric systems to practical applications for security or commercial purposes. Biometric systems based on fingerprint recognition [1], face recognition [2], and iris recognition [3] have already matured to the point that they can be applied to critical security applications such as immigration control and crime investigation.
Recently, a novel hand-based biometric feature, the palmprint, has attracted increasing attention. Like other biometric identifiers, palmprints are believed to have the critical properties of universality, uniqueness, permanence and collectability for personal authentication [4]. Palmprints have several advantages over other hand-based biometrics, such as fingerprints and hand geometry. Compared to fingertips, palms are larger in size and therefore more robust to injuries and dirt. Also, low-resolution imaging can be employed for palmprint recognition based on creases and palm lines, making real-time preprocessing and feature extraction possible, and the cost of the capture device can be well controlled. Palmprint authentication is believed to be able to achieve accuracy comparable to that of other hand-based biometric authentication technologies, including fingerprint [1] and hand geometry [5], [6].
Texture and palm lines are the most clearly observable palmprint features in low resolution (such as 100 dpi) images [7], and have thus attracted most research efforts. Texture based palmprint authentication approaches [7], [8], [9], [10], [11], [12], [13], [14] usually adopt signal processing based texture analysis methods [15]. Typically, texture features are extracted by filtering the palmprint images with filters such as the Gabor filter [7], [11], [12], [13], the ordinal filter [10], or wavelets [8]. The image filtering may be performed in either the spatial domain or the frequency domain [14].
Recently, many automated palmprint authentication methods [16], [17], [18], [19] have focused on palm line features, since they are more visually salient than texture. In the offline method proposed in [16], the geometric shapes of the palm lines are extracted and approximated by straight-line segments. The slope, intercept and inclination of each segment are used as features for palmprint matching. C.C. Han et al. investigate the magnitude of palm lines in palmprint matching [17]. The latest related research reveals that the orientations of palm lines also carry strong discriminative power. Based on palm line orientations, a Competitive Code is designed for palmprint representation in [18], and Y. Han et al. use local orientation histograms to describe palmprints [19]. Similar to the texture based methods, the palm line based methods usually employ image filtering for line feature extraction, leading to a high computational complexity. For example, in the Competitive Code method [18], six Gabor filters are applied to each palmprint ROI (Region Of Interest) to generate the corresponding orientation map. Supposing that the palmprint ROI is 128 × 128 pixels and the Gabor filters are 35 × 35 in size, the total number of MADD (Multiplication + Addition) operations required for one palmprint is around 120 million, leading to a very long processing time, especially on slow mobile platforms. Experiments show that extracting the Competitive Code for one palmprint takes more than eight seconds on a state-of-the-art PDA [20]. This is far too slow for a real-time biometric system. Besides the computational complexity, selecting appropriate filter parameters is also nontrivial in filtering based palmprint authentication methods.
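As a sanity check, the operation count quoted above can be reproduced with a short back-of-the-envelope script (assuming direct spatial convolution and ignoring boundary effects):

```python
# Rough cost of Competitive Code feature extraction, using the figures
# quoted above: a 128 x 128 ROI filtered by six 35 x 35 Gabor filters.
# Each output pixel of one filtered image costs 35*35 multiply-add
# (MADD) operations under direct spatial convolution.

roi_pixels = 128 * 128        # pixels in the palmprint ROI
filter_taps = 35 * 35         # taps per Gabor filter
num_filters = 6               # one filter per candidate orientation

madd_ops = roi_pixels * filter_taps * num_filters
print(madd_ops)               # 120,422,400 -> roughly 120 million MADDs
```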
It has been demonstrated in [4], [7] that the authentication accuracy varies considerably with the Gabor filter parameters, which need to be tuned by trial and error; the authentication performance therefore depends heavily on the training set used for parameter selection. This may account for the significant performance variations of different filtering based palmprint authentication methods across different databases [19].
In this paper, we propose a texture based approach for palmprint authentication, in which palmprint grayscale information is directly adopted as the feature. The computational complexity of the feature extraction process is much lower than that of previous filtering based approaches, so it can be implemented efficiently even on slow mobile and embedded platforms. By extending the idea of SAX (Symbolic Aggregate approXimation) [21] from time series research to 2D images for palmprint representation and matching, the proposed method achieves authentication performance, in terms of EER (Equal Error Rate), comparable to state-of-the-art palmprint authentication methods.
The rest of this paper is organized as follows. Section 2 explains the 2D extension of SAX for images. Section 3 describes the feature extraction and matching processes of the proposed approach. Experiments and results are elaborated in Section 4. The last section concludes our work.
Section snippets
2D SAX
As a symbolic representation of sequential data, SAX (Symbolic Aggregate approXimation) [21] has been verified as a simple but effective tool for solving most time series data mining problems, such as clustering, classification, indexing, anomaly detection, and motif finding [22]. For a real-valued data sequence, its SAX representation is obtained by first transforming it into the PAA (Piecewise Aggregate Approximation) representation; then predetermined breakpoints are used to discretize the
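A minimal 1D SAX sketch along these lines (NumPy; alphabet size 4, whose breakpoints are the quartiles of the standard normal distribution; assumes the series length divides evenly into the segment count):

```python
import numpy as np

# Breakpoints dividing the standard normal N(0,1) into four equiprobable
# regions, i.e. its quartiles, as used in SAX for an alphabet of size 4.
BREAKPOINTS_4 = np.array([-0.6745, 0.0, 0.6745])

def sax_1d(series, n_segments):
    """SAX sketch: z-normalize, reduce with PAA, then discretize.
    Assumes len(series) is divisible by n_segments."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)           # z-normalization
    paa = x.reshape(n_segments, -1).mean(axis=1)     # piecewise means
    return np.searchsorted(BREAKPOINTS_4, paa)       # symbols in 0..3

word = sax_1d([1, 2, 3, 4, 8, 7, 6, 5], n_segments=4)
print(word)  # [0 1 3 2]
```

Each symbol encodes which equiprobable band of the normalized amplitude the segment mean falls into, which is what makes the representation robust to offset and scale.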
2D SAX conversion for palmprints
Two public palmprint databases are used for the experiments in this work [28], [29]. The PolyU Palmprint Database [28] contains 7752 grayscale palmprint images (384 × 284 pixels, 96 dpi) corresponding to 386 different palms. For each palm, there are around 20 samples collected in two sessions, with around 10 samples captured in each session. The average interval between the two sessions was 2 months. Palmprints in this database were captured using a
Experiments
A complete one-to-one matching experiment is performed on the PolyU palmprint database [28] to test the effectiveness of the proposed method. There are altogether 74,068 genuine matchings and 29,968,808 imposter matchings. The experimental settings are as follows: the smoothing filter is a 7 × 7 average filter, SAX_Length = 32 × 32 = 1024, and SAX_Level = 4. Fig. 10 shows the score distributions of the genuine matching and the imposter matching, as well as the ROC (Receiver Operating Characteristic) curve. The EER of
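A hypothetical NumPy reading of these settings, not the authors' actual code: smooth the ROI with a 7 × 7 box filter, reduce it to 32 × 32 block means (a 2D analogue of PAA, giving SAX_Length = 1024 symbols), and quantize into SAX_Level = 4 symbols:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Quartiles of N(0,1): the standard SAX breakpoints for 4 symbols.
BREAKPOINTS_4 = np.array([-0.6745, 0.0, 0.6745])

def sax_2d(roi, grid=32, smooth=7):
    """Hypothetical 2D SAX sketch of the quoted settings: 7x7 box
    smoothing, 32x32 block means (2D PAA), z-normalization, then
    quantization into 4 symbols. Illustrative only."""
    roi = np.asarray(roi, dtype=float)
    # 7x7 box smoothing over the 'valid' region, for simplicity
    sm = sliding_window_view(roi, (smooth, smooth)).mean(axis=(2, 3))
    # crop so both dimensions divide evenly into grid x grid blocks
    h, w = (sm.shape[0] // grid) * grid, (sm.shape[1] // grid) * grid
    sm = sm[:h, :w]
    # block means: the 2D analogue of PAA
    paa = sm.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    z = (paa - paa.mean()) / (paa.std() + 1e-12)     # z-normalization
    return np.searchsorted(BREAKPOINTS_4, z)         # symbols in 0..3

roi = np.random.default_rng(0).integers(0, 256, size=(128, 128))
code = sax_2d(roi)
print(code.shape)  # (32, 32) -> SAX_Length = 1024, SAX_Level = 4
```

Matching two such symbol grids then reduces to cheap symbol-wise comparison, which is where the method's low computational cost comes from.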
Conclusions
The SAX conversion has been widely used for solving data mining problems on 1D time series because it is computationally efficient, easy to use, and able to achieve a satisfactory balance between dimensionality reduction and retention of discriminative power. In this paper, we propose a natural extension of the SAX representation, 2D SAX, for two-dimensional data such as 2D images. We apply this new representation to the problem of texture based palmprint authentication for testing its
Acknowledgments
The work described in this paper was substantially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. 415205). This work was also partially supported by a grant from Tsinghua University, China (Project No. 053207002).
Portions of the research in this paper use the CASIA Palmprint Image Database collected by the Chinese Academy of Sciences’ Institute of Automation (CASIA).
References (34)
A hand-based personal authentication using a coarse-to-fine strategy, Image and Vision Computing (2004)
et al., Palmprint identification using feature-level fusion, Pattern Recognition (2006)
et al., Palmprint recognition with improved two-dimensional locality preserving projections, Image and Vision Computing (2008)
et al., Two novel characteristics in palmprint verification: datum point invariance and line feature matching, Pattern Recognition (1999)
et al., Personal authentication using palmprint features, Pattern Recognition (2003)
et al., Handbook of Fingerprint Recognition (2003)
et al., Handbook of Face Recognition (2004)
J. Daugman, Biometric Personal Identification System Based on Iris Analysis, U.S. Patent No. 5,291,560, ...
Palmprint Authentication, Norwell, Mass. (2004)
et al., Biometric identification through hand geometry measurements, IEEE Transactions on Pattern Analysis and Machine Intelligence (2000)
Online palmprint identification, IEEE Transactions on Pattern Analysis and Machine Intelligence
Wavelet based palmprint recognition, Proceedings of the International Conference on Machine Learning and Cybernetics
Texture-based palmprint retrieval using a layered search scheme for personal identification, IEEE Transactions on Multimedia
Ordinal palmprint representation for personal identification, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition
Feature-level fusion for effective palmprint authentication, Proceedings of International Conference on Biometric Authentication
Palmprint identification by Fourier transform, International Journal on Pattern Recognition and Artificial Intelligence
Handbook of Pattern Recognition and Computer Vision