
Pattern Recognition

Volume 41, Issue 5, May 2008, Pages 1504-1513

Palmprint verification based on robust line orientation code

https://doi.org/10.1016/j.patcog.2007.10.011

Abstract

In this paper, we propose a novel robust line orientation code for palmprint verification whose performance is improved through three strategies. First, a modified finite Radon transform (MFRAT) is proposed, which extracts the orientation feature of the palmprint more accurately and handles the sub-sampling problem better. Second, we construct an enlarged training set to address the large rotations caused by imperfect preprocessing. Finally, a matching algorithm based on pixel-to-area comparison is designed, which has better fault tolerance. Experimental verification results on the Hong Kong Polytechnic University Palmprint Database show that the proposed approach achieves a higher recognition rate and faster processing speed.

Introduction

In the information society, biometrics is an important and effective solution for automatic personal verification/identification. In recent years, as a new biometric technique, palmprint-based verification/identification systems (PVS) have received wide attention from researchers [1]. Many approaches have been proposed for PVS. Among them, approaches based on texture, lines, appearance and multiple features have been studied extensively and have shown good recognition rates in the literature [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13]. Zhang, Kong et al. proposed a texture-based approach called PalmCode for palmprint verification/identification, which exploits zero-crossing information on a palmprint image using a Gabor filter [1]. Subsequently, Kong et al. applied a fusion rule at the feature level to further improve PalmCode; this approach was named FusionCode [2]. Wu et al. proposed a palm-line-based approach in which the palm lines are regarded as a kind of roof edge and extracted according to the zero-crossing points of the lines' first-order derivative and the magnitude of the second-order derivative [3]. Zhang and Zhang, Han et al. and Lin et al. proposed several line-like palmprint features for personal verification/identification [4], [5], [6]. Moreover, important subspace methods such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA) and locality preserving projections (LPP) have also been exploited for PVS [7], [8], [9], [10], [11], [12]. Additionally, Ref. [13] reported that multiple-feature approaches using information fusion technology can provide more reliable results.

Recently, orientation codes have come to be regarded as the most promising palmprint methods, since the orientation feature carries more discriminative power than other features and is more robust to changes of illumination. Kong and Zhang were the first to investigate the orientation information of palm lines for PVS; their approach is known as Competitive Code [14], [15]. In this approach, six Gabor filters with different directions are first applied to a preprocessed palmprint image. Next, orientation information is extracted using a winner-take-all rule. Finally, the matching score between two palmprints is calculated using an angular distance. Wu et al. proposed another orientation-based approach named palmprint orientation code (POC), which adopts several directional templates to define the orientation of each pixel [16]. However, the two methods above are unsatisfactory in some respects. First, the Gabor filter used in Kong's method may not be the best tool for detecting the orientations of palm lines. Second, the sub-sampling problem is not well solved in either method; for instance, only the codes of pixels at (4i, 4j) (i = 0, …, 31; j = 0, …, 31) were computed [16]. Finally, the directional templates of Wu's method were roughly designed.
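The first two steps of Competitive Code (directional filtering, then winner-take-all coding) can be illustrated with a minimal sketch. Here simple line-averaging kernels stand in for the six Gabor filters, an assumption made only to keep the example short; the winner-take-all rule itself is unchanged. Each pixel receives the index of the darkest direction, since palm lines are darker than the surrounding skin.

```python
import numpy as np

def directional_kernels(size=9, n_orient=6):
    """Build simple line-shaped averaging kernels at n_orient angles.

    These stand in for the six Gabor filters of Competitive Code
    (a simplification for illustration only)."""
    kernels = []
    c = size // 2
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        kern = np.zeros((size, size))
        for t in range(-c, c + 1):   # discrete line through the centre
            i = int(round(c - t * np.sin(theta)))
            j = int(round(c + t * np.cos(theta)))
            if 0 <= i < size and 0 <= j < size:
                kern[i, j] = 1.0
        kernels.append(kern / kern.sum())
    return kernels

def winner_take_all_code(img, kernels):
    """Assign each pixel the index of the direction whose mean intensity
    along the line is smallest: palm lines are darker than the skin, so
    the winning direction follows the line through that pixel."""
    h, w = img.shape
    c = kernels[0].shape[0] // 2
    pad = np.pad(img, c, mode='edge')
    responses = np.empty((len(kernels), h, w))
    for k, kern in enumerate(kernels):
        size = kern.shape[0]
        for i in range(h):
            for j in range(w):
                responses[k, i, j] = np.sum(pad[i:i + size, j:j + size] * kern)
    return np.argmin(responses, axis=0)
```

On a synthetic image containing a dark horizontal line, the pixels on that line receive the index of the horizontal kernel, which is exactly the winner-take-all behaviour described above.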

In this paper, we focus on designing a better orientation code for palmprint verification. To this end, a novel scheme referred to as robust line orientation code (RLOC) is proposed, which can be regarded as an improved version of Competitive Code. In the feature extraction stage, since the Radon transform is a powerful tool for detecting the directions of line features, a variant of it, the modified finite Radon transform (MFRAT), is proposed, which extracts the orientation feature of the palmprint more accurately and handles the sub-sampling problem better. In the matching stage, two strategies are adopted to improve robustness. The first is an enlarged training set, established to overcome the large rotations caused by imperfect preprocessing. The second is a matching algorithm based on pixel-to-area comparison, which has better fault tolerance to slight translations, rotations and distortions. Experiments conducted on the Hong Kong Polytechnic University Palmprint Database show that the proposed scheme achieves promising results in terms of recognition rate and processing speed.

The remainder of the paper is organized as follows. Section 2 presents orientation feature extraction based on the MFRAT. Section 3 describes the matching algorithm based on pixel-to-area comparison. Section 4 reports the experimental results, including verification accuracy and computational time. Section 5 concludes the paper.

Section snippets

Feature extraction using the MFRAT

The Radon transform in Euclidean space was first established by Johann Radon in 1917 [17]. It accentuates linear features by integrating image intensity along all possible lines in an image, and can therefore be used to detect linear trends. Later, the finite Radon transform (FRAT), an effective way to perform the Radon transform on finite-length signals, was proposed by Matus and Flusser [18]. The FRAT is defined as the summation of image pixels over a certain set
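As a concrete reference point, the sketch below implements the plain FRAT of Matus and Flusser, not the authors' MFRAT, whose details follow in the full text. For a prime p, pixels of a p x p block are summed over p + 1 families of modular "lines": the sets j = (k*i + l) mod p for each slope k, plus one extra projection over the rows.

```python
import numpy as np

def frat(f):
    """Finite Radon transform of a p x p array (p prime).

    Following Matus and Flusser, r[k, l] sums f over the line
    {(i, j) : j = (k*i + l) mod p}, and the extra projection k = p
    collects row i = l, giving p + 1 directions in all."""
    p = f.shape[0]
    r = np.zeros((p + 1, p))
    for k in range(p):          # slopes 0 .. p-1
        for l in range(p):      # intercepts
            for i in range(p):
                r[k, l] += f[i, (k * i + l) % p]
    for l in range(p):          # the extra projection over rows
        r[p, l] = f[l, :].sum()
    return r
```

Because every pixel lies on exactly one line of each of the p + 1 directions, each projection preserves the total image sum, which is the redundancy the MFRAT later exploits for orientation estimation.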

Palmprint matching based on pixel-to-area comparison

In PalmCode [1] and FusionCode [2], the normalized Hamming distance was used to calculate the similarity between a test image and a training image, while the angular distance was used in Competitive Code [14]. However, Hamming distance and angular distance based on pixel-to-pixel comparison are not very robust, since it is very difficult to obtain a perfect superposition between two palmprint images that come from the same palm. In this section, we devise a matching algorithm based on
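The snippet is truncated before the algorithm itself, but the idea of pixel-to-area comparison can be sketched as follows. The 3 x 3 neighbourhood and the scoring rule here are illustrative assumptions, not necessarily the authors' exact design: a test pixel's orientation code counts as matched if the same code appears anywhere in a small area around the corresponding position of the training code.

```python
import numpy as np

def pixel_to_area_score(test_code, train_code, radius=1):
    """Fraction of test pixels whose orientation code appears somewhere
    in the (2*radius+1) x (2*radius+1) area around the corresponding
    position of the training code.  Comparing a pixel against an area,
    rather than a single pixel, tolerates slight misalignment."""
    h, w = test_code.shape
    pad = np.pad(train_code, radius, mode='edge')
    hits = 0
    for i in range(h):
        for j in range(w):
            area = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            if np.any(area == test_code[i, j]):
                hits += 1
    return hits / (h * w)
```

Two identical code maps score 1.0, and a code map shifted by a pixel or two is still scored highly because the displaced codes usually remain inside the comparison area; a pixel-to-pixel Hamming distance would penalize every shifted pixel.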

Database

The proposed approach was tested on the Hong Kong Polytechnic University (PolyU) Palmprint Database, which can be downloaded from the website in Ref. [20]. The PolyU Palmprint Database contains 7752 grayscale images in BMP format, corresponding to 386 different palms. Around 20 samples were collected from each palm in two sessions, with around 10 samples captured in each session. The average interval between the

Conclusions

In this paper, we proposed a robust line orientation code for palmprint verification, which has several obvious advantages. First, the MFRAT can extract the orientation feature more accurately and handles the sub-sampling problem better. Second, the use of an enlarged training set overcomes the large-rotation problem well. Third, the designed pixel-to-area comparison has better fault tolerance. The fourth advantage is that faster speed is obtained in feature

Acknowledgements

The authors are most grateful for the constructive advice and comments from the anonymous reviewers. This work was supported by grants from the National Science Foundation of China, Nos. 60705007 and 60472111; a grant from the National Basic Research Program of China (973 Program), No. 2007CB311002; and a grant from the National High Technology Research and Development Program of China (863 Program), No. 2007AA01Z167. The work is partially supported by the CERG fund from the HKSAR Government,


References (22)

  • X.Q. Wu et al., Palm line extraction and matching for personal authentication, IEEE Trans. Syst. Man Cybern. A (2006)

About the Author: WEI JIA received the B.Sc. degree in informatics from Central China Normal University, Wuhan, China, in 1998, and the M.Sc. degree in computer science from Hefei University of Technology, Hefei, China, in 2004. He is currently a Ph.D. student in the Department of Automation at the University of Science and Technology of China. Since June 2007, he has been a research assistant in the Biometrics Research Centre, Department of Computing, Hong Kong Polytechnic University. His research interests include palmprint recognition, pattern recognition and image processing.

About the Author: DE-SHUANG HUANG received the B.Sc. degree in electronic engineering from the Institute of Electronic Engineering, Hefei, China, in 1986, the M.Sc. degree in electronic engineering from the National Defense University of Science and Technology, Changsha, China, in 1989, and the Ph.D. degree in electronic engineering from Xidian University, Xi'an, China, in 1993. From 1993 to 1997, he was a postdoctoral student at the Beijing Institute of Technology, Beijing, China, and the National Key Laboratory of Pattern Recognition, Chinese Academy of Sciences (CAS), Beijing. In 2000, he became a professor and joined the Institute of Intelligent Machines, CAS, as a member of the Hundred Talents Program of CAS. He has published over 190 papers and, in 1996, published a book entitled Systematic Theory of Neural Networks for Pattern Recognition. His research interests include pattern recognition, machine learning, bioinformatics and image processing.

About the Author: DAVID ZHANG graduated in computer science from Peking University. He received his M.Sc. in computer science in 1982 and his Ph.D. in 1985 from the Harbin Institute of Technology (HIT). From 1986 to 1988 he was a postdoctoral fellow at Tsinghua University and then an associate professor at the Academia Sinica, Beijing. In 1994 he received his second Ph.D. in electrical and computer engineering from the University of Waterloo, Ontario, Canada. Currently, he is a Chair Professor at the Hong Kong Polytechnic University where he is the Founding Director of the Biometrics Technology Centre (UGC/CRC) supported by the Hong Kong SAR Government. He also serves as Adjunct Professor in Tsinghua University, Shanghai Jiao Tong University, Beihang University, Harbin Institute of Technology and the University of Waterloo. He is the Founder and Editor-in-Chief, International Journal of Image and Graphics (IJIG); Book Editor, Springer International Series on Biometrics (KISB); Organizer, the International Conference on Biometrics Authentication (ICBA); associate editor of more than 10 international journals including IEEE Transactions on SMC-A/SMC-C/Pattern Recognition; Technical Committee Chair of IEEE CIS; and the author of more than 10 books and 160 journal papers. Professor Zhang is a Croucher Senior Research Fellow, Distinguished Speaker of the IEEE Computer Society, and a Fellow of the International Association of Pattern Recognition (IAPR).
