
Pattern Recognition

Volume 41, Issue 5, May 2008, Pages 1514-1527

Person recognition by fusing palmprint and palm vein images based on “Laplacianpalm” representation

https://doi.org/10.1016/j.patcog.2007.10.021

Abstract

Unimodal analysis of palmprints and palm veins has been investigated for person recognition. One problem with unimodality is that a unimodal biometric is less accurate and more vulnerable to spoofing, since the data can be imitated or forged. In this paper, we present a multimodal personal identification system using palmprint and palm vein images, with fusion applied at the image level. The palmprint and palm vein images are fused by a new edge-preserving and contrast-enhancing wavelet fusion method in which the modified multiscale edges of the palmprint and palm vein images are combined. We developed a fusion rule that enhances the discriminatory information in the images. A novel palm representation, called the “Laplacianpalm” feature, is then extracted from the fused images by locality preserving projections (LPP). Unlike the Eigenpalm approach, the “Laplacianpalm” finds an embedding that preserves local information and yields a palm space that best detects the essential manifold structure. We compare the proposed “Laplacianpalm” approach with the Fisherpalm and Eigenpalm methods on a large data set. Experimental results show that the proposed “Laplacianpalm” approach provides a better representation and achieves lower error rates in palm recognition. Furthermore, the proposed multimodal method outperforms either of its individual modalities.

Introduction

To date, much research effort has been devoted to unimodal analysis of palmprints [1], [2], [3], [4] or palm veins [5], [6] for identification. However, the identification performance of most unimodal methods is still not satisfactory. These methods have to contend with a variety of problems such as noisy data, intra-class variations, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. A robust identification system may require the fusion of several modalities. Ambiguities in one modality, such as poor illumination of the palmprint, may be compensated for by another modality, such as vein features. A multimodal identification system hence promises to perform better than any one of its individual components. A key advantage of our fusion approach is that it gives better protection against spoof attacks, because both the palmprint and the palm vein are required simultaneously by the system.

The information presented by multiple traits may be consolidated at various levels [7]: the feature extraction level [8], the matching score level [9], [10], [11] or the decision level [11], [12]. In fusion at the feature extraction level, the feature sets of multiple modalities are integrated (e.g. by concatenating the two features into a new single feature) to generate a new feature set, which is then used in the matching and decision-making modules of the biometric system. Feature reduction techniques, e.g. principal component analysis (PCA), may be employed to extract a compact feature set from the larger one. In fusion at the matching score level, the matching scores output by multiple matchers are integrated. Matching score level fusion is the most common approach because of the ease of accessing and combining the scores generated by different matchers. Since the matching scores output by the various modalities are heterogeneous, score normalization [13] is needed to transform the scores into a common domain before combining them. In fusion at the decision level, the final decisions made by the individual systems are consolidated by techniques such as majority voting. In general, feature-level fusion can give better results than the other two schemes, e.g. in the works on face and fingerprint mosaicing [14], [15], [16]. The notion of image blending is significant when biometric samples are being mosaiced. In a recent work, the eigenpalm and eigenfinger were fused at the matching score level for user identification [17].
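As an illustration of score-level fusion, the following sketch applies min–max normalization before a weighted-sum rule. The scores, weights and function names are illustrative choices, not taken from the paper or from reference [13].

```python
# Sketch of matching-score-level fusion with min-max normalization.
# All scores, weights and names here are illustrative, not from the paper.
import numpy as np

def min_max_normalize(scores):
    """Map heterogeneous matcher scores onto the common range [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(palmprint_scores, vein_scores, w=0.5):
    """Weighted-sum fusion of two score sets after normalization."""
    return (w * min_max_normalize(palmprint_scores)
            + (1 - w) * min_max_normalize(vein_scores))

# Two matchers on different scales score four enrolled candidates.
fused = fuse_scores([120, 340, 560, 900], [0.1, 0.9, 0.4, 0.7])
best = int(np.argmax(fused))  # index of the candidate with the highest fused score
```

Normalization is what makes the weighted sum meaningful here: without it, the matcher with the larger numeric range would dominate the decision.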

In this paper, we propose an alternative multimodal personal identification system in which palmprint and palm vein images are combined into a single image at the image level. The motivation is that both the palmprint image and the palm vein image contain unique features that can be used for personal identification. The principal lines, wrinkles, ridges, minutiae points, singular points and texture are regarded as useful features for palmprint and palm vein representation. Lines and textures are the most observable features in low-resolution palmprint and palm vein images, and lines are more salient than texture to human vision. In this paper, a line-guided and contrast-enhanced image fusion method is developed: a wavelet-based approach built on Mallat's wavelet is used to fuse the palmprint and palm vein images, so that the lines are well retained in the fused images.
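The general shape of wavelet-domain image fusion can be sketched as follows. This is a minimal stand-in, not the paper's method: it uses a one-level 2-D Haar transform instead of the Mallat-based decomposition, and a generic choose-max rule on the detail coefficients instead of the paper's modified multiscale-edge fusion rule.

```python
# Hedged sketch of wavelet-domain fusion of two registered images.
# One-level Haar transform + choose-max detail rule; NOT the paper's exact method.
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform (image dims must be even).
    Returns the approximation band LL and the detail bands LH, HL, HH."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0
    LH = (a + b - c - d) / 2.0
    HL = (a - b + c - d) / 2.0
    HH = (a - b - c + d) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2.0
    out[0::2, 1::2] = (LL + LH - HL - HH) / 2.0
    out[1::2, 0::2] = (LL - LH + HL - HH) / 2.0
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2.0
    return out

def fuse_images(img1, img2):
    """Average the approximation bands; for each detail coefficient keep the
    larger-magnitude one, so strong edges from either modality survive."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)
```

The choose-max detail rule is a common generic edge-preserving heuristic; the paper instead combines modified multiscale edges with a contrast-enhancing rule, which this sketch does not reproduce.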

The issue of how to represent palm features for effective classification is still an open problem. Various palmprint representations have been proposed for palmprint recognition, including lines [18], points [19], Fourier spectra [20], morphological features [21], texture [22], wavelet signatures [23], Gabor features [3], fusion code [4] and competitive code [2]. Recently, Sun et al. [1] proposed to unify several state-of-the-art palmprint representations, such as competitive coding [2] and fusion code [4], using ordinal features. They claimed that their algorithm provides a general framework for most current palmprint representations, for example Zhang et al.'s Gabor-based representations [2], [3], [4], which have reported the best recognition performance in the literature. However, the theoretical foundation of the ordinal palmprint representation has not been formulated. Recently, the use of hand vein images has attracted increasing interest from both research communities [6], [24], [25], [26] and industry [5]. These features of the human hand, such as the principal lines and palm veins, are relatively stable, and hand images, both color and infrared, can be acquired relatively easily.

In biometric applications, subspace learning methods play a dominant role in image representation. These methods have in common the property that they allow efficient characterization of a low-dimensional subspace within the overall space of raw image measurements. Once a low-dimensional representation of the target class (face, hand, etc.) has been obtained, then standard statistical methods can be used to learn the range of appearance that the target exhibits in the new, low-dimensional coordinate system. Because of the lower dimensionality, relatively few examples are required to produce a useful estimate of discriminant functions (or discriminant directions). Appearance-based representations of palmprint, such as eigenpalm (where features are extracted by the principal component analysis (PCA)) [17], Fisherpalm (where features are extracted by the linear discriminant analysis (LDA)) [27] and ICA (independent component analysis) [28], have been investigated. In this paper, a different appearance representation of a fused hand image is proposed. The feature, which we call “Laplacianpalm”, is extracted from the fused image by the locality preserving projection (LPP) [29]. While the Eigenpalm method aims to preserve the global structure of the image space and the Fisherpalm method aims to preserve the global discriminating information, our Laplacianpalm method aims to preserve the local structure of the image space. In many real world classification problems, the local manifold structure is more important than the global Euclidean structure, especially when instance-based classifiers, such as Bayes and k-nearest neighbors, are used for the classification. LPP potentially has discriminative power even though it is unsupervised. LPP has been successfully used in face recognition as well as image retrieval [29] but has never been used for palmprint or palm vein image recognition.
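A minimal NumPy/SciPy sketch of LPP in the form described by He and Niyogi [29]: build a k-nearest-neighbor graph with heat-kernel weights, form the graph Laplacian L = D − W, and solve the generalized eigenproblem XᵀLX a = λ XᵀDX a for the eigenvectors with the smallest eigenvalues. The parameter values (k, bandwidth t) and the small ridge term added for numerical stability are illustrative choices, not the paper's settings.

```python
# Minimal Locality Preserving Projections (LPP) sketch.
# Parameters k, t and the ridge term are illustrative, not the paper's settings.
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, t=1.0, k=3):
    """X: (n_samples, n_features) data matrix, one vectorized image per row.
    Returns a projection matrix of shape (n_features, n_components)."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances; exclude self-matches.
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    np.fill_diagonal(D2, np.inf)
    # k-NN adjacency graph with heat-kernel weights exp(-||xi-xj||^2 / t).
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[:k]
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)
    W = np.maximum(W, W.T)            # symmetrize the graph
    Dg = np.diag(W.sum(axis=1))       # degree matrix
    L = Dg - W                        # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ Dg @ X + 1e-8 * np.eye(X.shape[1])  # ridge for stability
    # Smallest generalized eigenvalues give the locality-preserving directions.
    vals, vecs = eigh(A, B)
    return vecs[:, :n_components]
```

In practice (and in the face-recognition use of LPP [29]) a PCA step usually precedes this to make XᵀDX well conditioned; the ridge term above is a simpler stand-in for that step.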

In this paper, experimental results on a large database show that LPP outperforms both LDA and PCA. Furthermore, the proposed multimodal method outperforms either of its individual modalities.

The rest of the paper is organized as follows: Section 2 discusses the system and image fusion. The extraction of the region-of-interest is discussed in Section 3. A feature extraction scheme is detailed in Section 4. Section 5 reports our experimental results. The conclusion is presented in Section 6.


Fusion of palmprint and palm vein images

A color camera and a JAI CV-M50IR 1/2-inch CCD near-IR camera are used to capture the palmprint image and the palm vein image, respectively. The setup of the system is shown in Fig. 1. The two cameras are fixed on a mount, facing upwards. Since the NIR camera is not sensitive enough to detect the IR radiation emitted by the human body (3000–12000 nm), an IR light source [30] is used to irradiate the palm. The user puts his hand on a plane glass in front of the two cameras with the fingers spread

Region-of-interest (ROI) extraction

After the two images have been fused, the resulting image has to undergo an ROI extraction process to prepare it for the feature representation/extraction stage. The ROI extraction process can be executed on any of the palmprint image, the palm vein image or the fused image, because they were fully registered in Section 2. In our approach, the palm vein images are used for this purpose. We define the central part of the palmprint and palm vein images as the ROI.

It is necessary to set up a
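The paper localizes the ROI from hand key points (the details are truncated in this excerpt). As a simplified stand-in for that step, the sketch below just crops a fixed-size central square; the function name and the size parameter are illustrative.

```python
# Simplified ROI stand-in: crop a fixed central square from a registered image.
# The paper uses key-point-based localization; this is only an illustration.
import numpy as np

def central_roi(img, size=128):
    """Return a size-by-size crop from the center of a 2-D image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]
```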

The “Laplacianpalm” representation

For most pattern recognition problems, selecting an appropriate representation to extract the most significant features is crucial. In the context of the appearance-based paradigm for object recognition, PCA has been widely adopted for dimensionality reduction, extracting the directions of highest variance. However, the features extracted by PCA are "global" features over all pattern classes; thus they are not necessarily good representations for discriminating one class from the others.
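For contrast with LPP, the PCA ("eigenpalm"-style) basis can be sketched compactly via the SVD of the mean-centered, vectorized images; the function names and shapes here are illustrative.

```python
# Sketch of a PCA (eigenpalm-style) basis via SVD; names are illustrative.
import numpy as np

def pca_basis(X, n_components):
    """X: (n_samples, n_features), one vectorized palm image per row.
    Returns the sample mean and the top principal directions (the
    'eigenpalms' when rows are vectorized palm images)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Right singular vectors of the centered data are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, components):
    """Project images onto the PCA subspace."""
    return (X - mean) @ components.T
```

Because these directions maximize variance over all classes jointly, two subjects' palms can still be close in this subspace, which is exactly the "global features" limitation noted above.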

Experimental results

We evaluate the proposed algorithm using two databases (training and testing), which were collected at our Institute over a period of six months. The training database includes 120 subjects, with three triples of images (three palmprint images, three palm vein images and three fused images) for each subject. The testing database includes the images of the same 120 subjects, with three triples of images (three palmprint images, three palm vein images and three fused images) for each

Conclusions and future work

We have presented a multimodal biometric system that recognizes fused palm images of palmprints and palm vein patterns, captured using a color camera and a near-infrared camera, respectively. There are two major contributions in this paper:

Firstly, we proposed an identity recognition approach by combining palmprint and palm vein images at the image level. To the best of our knowledge, we are the first to fuse palmprint and palm vein at the image level for identity recognition. We developed a novel

Acknowledgments

The authors would like to thank Kar-Ann Toh, Wei Xiong and Jiquan Ngiam for their assistance in image collection. The authors are also most grateful for the constructive advice and comments from the anonymous associate editor and reviewers.


References (46)

  • Z. Sun, T. Tan, Y. Wang, S.Z. Li, Ordinal palmprint representation for personal identification, in: Proceedings of IEEE...
  • W.K. Kong, D. Zhang, Competitive coding scheme for palmprint verification, in: Proceedings of the 17th International...
  • D. Zhang et al., Online palmprint identification, IEEE Trans. Pattern Anal. Mach. Intell. (2003)
  • W.K. Kong, D. Zhang, Feature-level fusion for effective palmprint authentication, in: Proceedings of the First...
  • Fujitsu-Laboratories-Ltd, Fujitsu Laboratories Develops Technology for World's First Contactless Palm Vein Pattern...
  • L. Wang, G. Leedham, A thermal hand vein pattern verification system, in: Proceedings of International Conference on...
  • A.K. Jain et al., Multibiometric systems, Commun. ACM (2004)
  • N. Ratha, J. Connell, R. Bolle, Image mosaicing for rolled fingerprint construction, in: Proceedings of 14th...
  • K. Choi, H. Choi, J. Kim, Fingerprint mosaicking by rolling and sliding, in: Proceedings of Audio- and Video-based...
  • R. Singh, M. Vatsa, A. Ross, A. Noore, Performance enhancement of 2d face recognition via mosaicing, in: Proceedings of...
  • S. Ribaric et al., A biometric identification system based on eigenpalm and eigenfinger features, IEEE Trans. Pattern Anal. Mach. Intell. (2005)
  • N. Duta et al., Matching of palmprint, Pattern Recognition Lett. (2001)
  • W. Li et al., Palmprint identification by Fourier transform, Int. J. Pattern Recognition Artif. Intell. (2002)

About the Author: JIAN-GANG WANG received the Bachelor degree in Computer Science from the Inner Mongolia University in 1985. He received the M.E. degree in Pattern Recognition & Machine Intelligence from the Shenyang Institute of Automation, Chinese Academy of Sciences in 1988, and the Ph.D. degree in Computer Vision from the Nanyang Technological University in 2001. His Ph.D. thesis is on head pose and eye gaze estimation for human–machine interaction.

From 1988 to 1997, he was with the Robotics Laboratory at the Chinese Academy of Sciences, where he was appointed as an Associate Professor in 1995. During the academic year 1997 to 1998, he was a Research Assistant at the Department of Manufacture Engineering and Engineering Management, City University of Hong Kong. He joined the Centre for Signal Processing at the Nanyang Technological University as a Research Fellow in 2001. He is presently working as a Research Scientist with the Institute for Infocomm Research, Singapore. His current research interests include computer vision, machine learning, biometrics and human–computer interaction. He has contributed more than 40 articles in peer-reviewed books, journals and conferences in these areas. He is a frequent reviewer for the IEEE and other international journals and conferences. He is a member of the Technical Committee of the International Association of Science and Technology for Development (IASTED) and a member of the IEEE. He served on the program committees of the British Machine Vision Conference 2006 (BMVC2006) and BMVC2007. Dr. Wang is listed in the prestigious Marquis "Who's Who in the World", 25th Silver Anniversary Edition 2008.

About the Author: WEI-YUN YAU received his B.E. (Electrical) from the National University of Singapore (1992), the M.E. degree in Biomedical Image Processing (1995) and the Ph.D. degree in Computer Vision (1999) from the Nanyang Technological University. From 1997 to 2002, he was a Research Engineer and then Program Manager at the Centre for Signal Processing, Singapore, leading the research and development effort in biometrics signal processing. His team won the top three positions in both speed and accuracy at the International Fingerprint Verification Competition 2000 (FVC2000). Wei-Yun served as the Program Director of the Biometrics Enabled Mobile Commerce (BEAM) Consortium from 2001 to 2002. Currently, he holds concurrent positions as Assistant Professor at the Nanyang Technological University and as Programme Manager with the Institute for Infocomm Research, leading the research in Next Generation Content for IPTV. He is also currently the Chair of the Biometrics Technical Committee, Singapore; Chair of the Asian Biometric Forum; Project Editor of the international standard ISO/IEC JTC1 SC37 29794-4 on fingerprint quality score normalization; and co-editor for ISO/IEC JTC1 SC37 29794-1. Wei-Yun is also the recipient of the TEC Innovator Award 2002, the Tan Kah Kee Young Inventors' Award 2003 (Merit), the Standards Council Merit Award 2005 and the IES Prestigious Engineering Achievement Award 2006. His research interests include biometrics, video understanding, computer vision and intelligent systems; he has published widely, with three patents granted and 70 publications in these areas.

About the Author: ANDY SUWANDY obtained his Bachelor degree (Honours) in Electrical and Computer Systems Engineering from Monash University in 2005. Currently, he is working at the Institute for Infocomm Research as a Research Officer in the biometrics group.

About the Author: ERIC SUNG graduated from the University of Singapore with a B.E. (Honours Class 1) in 1971 and then obtained his MSEE in 1973 from the University of Wisconsin. He lectured in the Electrical Engineering Department of the Singapore Polytechnic from 1973 to 1978; subjects taught included Control Engineering and Industrial Electronics. In 1975, he was sent on a one-year industrial attachment at the Singapore Senoko Power Station. From June 1978 till April 1985, Eric Sung worked in design laboratories of multinational organisations such as Philips (Video), Luxor and King Radio Corporation, designing television and microprocessor-based communication products.

Joining Nanyang Technological University in April 1985, he is presently an Associate Professor in the Division of Control and Instrumentation of the School of Electrical and Electronic Engineering. He spent his sabbaticals in the Computer Science Department of Monash University in 1992–1993 and in the HCII center at CMU in 2002. His Ph.D. thesis is on structure from motion from image sequences. His current research interests are in structure from motion, stereovision, face and facial expression recognition, and machine learning; he has published over 90 papers in journals and international conferences.
