Elsevier

Information Sciences

Volume 562, July 2021, Pages 1-12
Jointly learning multi-instance hand-based biometric descriptor

https://doi.org/10.1016/j.ins.2021.01.086

Abstract

Multibiometric recognition has become one of the most important solutions for enhancing overall personal recognition performance due to several inherent limitations of unimodal biometrics, such as nonuniversality and unacceptable reliability. However, most existing multibiometric methods fuse completely different biometric traits based on addition schemes, which usually require several sensors and make the final feature sets large. In this paper, we propose a joint multi-instance hand-based biometric feature learning method for biometric recognition. Specifically, we first exploit the important direction data from multi-instance biometric images. Then, we simultaneously learn the discriminative features of multi-instance biometric traits and exploit the collaborative representations of multi-instance biometric features, such that the final joint multi-instance feature descriptor is compact. Moreover, the importance weights of different biometric instances can be adaptively learned. Experimental results on the baseline multi-instance finger-knuckle-print and palmprint databases demonstrate the effectiveness of the proposed method.

Introduction

Biometric recognition has become one of the most important and effective personal authentication technologies due to its superior user acceptance over conventional token- and password-based authentication mechanisms [33], [43]. However, unimodal biometrics, which are based on a single modality, suffer from several inherent problems, such as nonuniversality, intraclass variations and unacceptable reliability [13], [20]. To address these issues, multibiometrics, which use multiple biometric traits to improve recognition performance, have drawn increasing research attention [26].

To date, there have been numerous multibiometric methods that fuse different kinds of biometric traits based on various strategies, such as sensor-, feature-, score- and decision-level fusions [26]. For example, Raghavendra et al. [31] fused visible and near-infrared face images at the sensor level to improve the face verification performance of their model. Jing et al. [17] fused multiple modalities of biometrics, such as faces and palmprints, for personal recognition. Wang et al. [36] first captured palmprint and palm-vein images by using general color and near-infrared cameras and then fused them at the feature level. Nigam et al. [29] proposed a hand-based biometric system by fusing finger-knuckle-print (FKP) and palmprint traits at the matching score level. Monwar et al. [27] first extracted face, ear and signature features and then fused them at the rank-based decision level. Yang et al. [41] proposed a multimodal personal identification method by fusing fingerprints and finger-veins at the feature level. Zhang et al. [44] and Li et al. [21] proposed finger-based trimodal biometric recognition methods by extracting and fusing fingerprint, FKP and finger-vein features. In addition, Gupta et al. [11] proposed a palm-dorsa vein-based multimodal authentication method by fusing multiple features extracted by different algorithms. Additional multibiometric fusion strategies and applications can be found in a recent multibiometric survey [26]. It is noted that most existing multibiometric methods fuse completely different biometric traits, such as faces with fingerprints or palmprints with palm-veins. These methods usually require multiple sensors and different feature extraction methods, making them difficult to use in practical applications.
In addition, most fusion strategies, such as feature-, score- and decision-level fusions, need to extract features from individual biometric traits and then fuse them by an addition scheme, making the dimensionality of the final multibiometric feature large. Moreover, the correlation of multiple biometric traits cannot be effectively exploited by these fusion-based multibiometric technologies [26]. How to effectively exploit and combine multibiometric features for biometric recognition remains a central and challenging problem.

In this paper, we propose a new multibiometric recognition method that combines multiple similar hand-based biometric traits. Specifically, we jointly learn the collaborative feature representations of multi-instance hand-based biometric traits, such as the multiple FKPs of the same hand and the left and right palmprints of the same subject. We first form direction data vectors to specifically sample the important direction information in hand-based biometric images. Then, we jointly learn a multi-instance feature projection function that converts the multi-instance biometric data vectors into compact binary feature codes, where the weights of different biometric instances are also adaptively learned. Finally, we form a histogram representation of the compact binary codes as the multi-instance biometric feature descriptor. Fig. 1 shows the basic idea of the proposed method. We conduct comparison experiments against state-of-the-art methods on the widely used multi-instance FKP and palmprint databases to demonstrate the effectiveness of the proposed method.
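The first stage, sampling direction data vectors, can be illustrated with a small bank of oriented filters. The sketch below is only an assumption of how per-pixel direction responses might be gathered (the kernel form, size and orientation count are illustrative, not the paper's actual filters):

```python
import numpy as np

def direction_data_vectors(img, num_orientations=12, ksize=9, sigma=2.0):
    """Illustrative sketch: sample a per-pixel direction data vector by
    filtering the image with a bank of oriented, Gabor-like kernels."""
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    ax = np.arange(ksize) - pad
    xx, yy = np.meshgrid(ax, ax)
    h, w = img.shape
    responses = np.empty((num_orientations, h, w))
    for k in range(num_orientations):
        theta = k * np.pi / num_orientations
        # rotate the x-coordinate so the kernel is tuned to orientation theta
        xr = xx * np.cos(theta) + yy * np.sin(theta)
        kern = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(np.pi * xr / (2 * sigma))
        kern -= kern.mean()  # zero-mean, so flat regions give no response
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(padded[i:i + ksize, j:j + ksize] * kern)
        responses[k] = out
    # one length-num_orientations direction data vector per pixel
    return responses.reshape(num_orientations, -1).T
```

Each pixel thus yields a vector of oriented responses, and the strongest response indicates the locally dominant line direction, which is the kind of direction information the method samples.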

The main contributions of this paper can be summarized as follows.

  • We propose a new multi-instance hand-based biometric feature descriptor that jointly learns multi-instance biometric discriminative features and their common collaborative representations.

  • Unlike most existing addition-based multibiometric fusion schemes, we exploit the collaborative representations of multi-instance biometric traits, so that the final learned feature descriptor is compact. In particular, our method automatically learns the weight settings of multiple instances of biometric traits.

  • We conduct extensive experiments on both the FKP and palmprint databases, and the experimental results show that the proposed method outperforms state-of-the-art feature descriptors on multi-instance biometric recognition tasks. The simple yet efficient multibiometric representation obtained demonstrates the potential of the proposed method to serve as a baseline for multi-instance biometric recognition.
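The joint-learning idea behind these contributions can be sketched as a toy alternating scheme: all instances of one subject share a single binary code matrix, each instance gets its own projection, and the instance weights are set inversely to each instance's fitting error. The objective, variable names and update rules below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def joint_binary_codes(instances, code_len=16, iters=10, seed=0):
    """Toy sketch: instances is a list of feature matrices X_m (samples x
    dims). All instances share one binary code matrix B; the weight alpha_m
    of each instance is adapted to how well it fits the shared codes."""
    rng = np.random.default_rng(seed)
    n = instances[0].shape[0]
    B = np.sign(rng.standard_normal((n, code_len)))  # random initial codes
    B[B == 0] = 1.0
    alpha = np.full(len(instances), 1.0 / len(instances))
    for _ in range(iters):
        # least-squares projection of each instance onto the shared codes
        W = [np.linalg.lstsq(X, B, rcond=None)[0] for X in instances]
        errs = np.array([np.linalg.norm(X @ Wm - B)
                         for X, Wm in zip(instances, W)])
        alpha = 1.0 / (errs + 1e-12)
        alpha /= alpha.sum()  # better-fitting instances get larger weight
        # re-binarize the weighted combination of instance projections
        B = np.sign(sum(a * X @ Wm for a, X, Wm in zip(alpha, instances, W)))
        B[B == 0] = 1.0  # keep codes strictly in {-1, +1}
    return B, alpha
```

The point of the sketch is the coupling: because every instance is fit to the same code matrix, the codes capture what the instances share, and the adaptive weights automatically downweight less reliable instances.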

The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 proposes the joint multi-instance hand-based biometric feature learning method. Section 4 provides the experimental results, and Section 5 offers the conclusion of the paper.

Section snippets

Related work

In this section, we briefly review three related topics: multi-instance biometric recognition technologies, multibiometric fusion schemes, and hand-based biometric feature extraction and representation.

Learning a compact multi-instance FKP feature descriptor

In this section, we first present the objective function of our proposed joint learning-based multi-instance hand-based biometric descriptor (JLMHBD) and its optimization, and then introduce how to use the JLMHBD for biometric recognition.
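The final recognition stage described above, forming a histogram representation of the binary codes and matching descriptors, can be sketched as follows. The block layout, normalization and chi-square matching here are common choices for code-histogram descriptors and are illustrative assumptions rather than the paper's exact procedure:

```python
import numpy as np

def histogram_descriptor(codes, block_ids, num_blocks):
    """Sketch: pool per-pixel binary codes (rows in {-1, +1}) into block-wise
    histograms and concatenate them into one descriptor vector."""
    code_len = codes.shape[1]
    bits = (codes > 0).astype(int)
    vals = bits @ (2 ** np.arange(code_len))  # each code -> integer in [0, 2^L)
    bins = 2 ** code_len
    desc = np.zeros(num_blocks * bins)
    for b in range(num_blocks):
        hist = np.bincount(vals[block_ids == b], minlength=bins).astype(float)
        desc[b * bins:(b + 1) * bins] = hist / max(hist.sum(), 1.0)
    return desc

def chi_square_distance(d1, d2, eps=1e-12):
    """Common histogram distance used to match two descriptors."""
    return 0.5 * np.sum((d1 - d2) ** 2 / (d1 + d2 + eps))
```

At recognition time, a probe descriptor would be matched against the gallery by nearest chi-square distance; block-wise pooling keeps some spatial information while the histogram keeps the descriptor compact.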

Experiments

In this section, we conduct comparative experiments on the widely used multi-instance FKP and palmprint databases to evaluate the proposed method. All experiments are conducted on the same platform: a PC with a dual-core Intel(R) i7-7700 (3.60 GHz) CPU and 16 GB of RAM, running MATLAB 8.3.0 on Windows 10.

Conclusion

In this paper, we propose a new multi-instance hand-based biometric feature descriptor by jointly learning the collaborative and compact feature representations of multi-instance biometric traits. We first exploit the intrinsic and informative direction data from multi-instance hand-based biometric images. Then, we jointly learn the common feature codes of multi-instance biometric traits and adaptively set the weights of different instances to obtain the collaborative multi-instance biometric feature descriptor.

CRediT authorship contribution statement

Lunke Fei: Conceptualization, Methodology, Software, Writing - original draft. Bob Zhang: Validation, Writing - review & editing, Supervision. Chunwei Tian: Validation, Writing - review & editing. Shaohua Teng: Writing - review & editing, Supervision. Jie Wen: Validation, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under grants 61702110 and 61972102, in part by the Natural Science Foundation of Guangdong Province under grant 2019A1515011811, in part by the Guangzhou Science and Technology Plan Project under grant 202002030110, and in part by the Research and Development Program of Guangdong Province under grant 2020B010166006.

References (50)

  • Y. Luo et al., Local line directional pattern for palmprint recognition, Pattern Recogn. (2016)
  • S. Modak et al., Multibiometric fusion strategy and its applications: a review, Inf. Fusion (2019)
  • A. Nigam et al., Designing an accurate hand biometric based authentication system fusing finger knuckleprint and palmprint, Neurocomputing (2015)
  • R. Raghavendra et al., Particle swarm optimization based fusion of near infrared and visible image for improved face verification, Pattern Recogn. (2011)
  • X. Wang et al., Palmprint verification based on 2D-Gabor wavelet and pulse-coupled neural network, Knowl. Based Syst. (2012)
  • J. Wang et al., Person recognition by fusing palmprint and palm vein images based on Laplacianpalm representation, Pattern Recogn. (2008)
  • J. Wen et al., Unified embedding alignment with missing views inferring for incomplete multi-view clustering
  • J. Yang et al., Feature-level fusion of fingerprint and finger-vein for personal identification, Pattern Recogn. Lett. (2012)
  • L. Zhang et al., Towards contactless palmprint recognition: a novel device, a new benchmark, and a collaborative representation based identification approach, Pattern Recogn. (2017)
  • L. Zhang et al., Online finger-knuckle-print verification for personal authentication, Pattern Recogn. (2010)
  • S. Zhao et al., Joint deep convolutional feature representation for hyperspectral palmprint recognition, Inf. Sci. (2019)
  • Y. Duan et al., Context-aware local binary feature learning for face recognition, IEEE Trans. Pattern Anal. Mach. Intell. (2018)
  • L. Fei et al., Feature extraction methods for palmprint recognition: a survey and evaluation, IEEE Trans. Syst. Man Cybernet.: Syst. (2018)
  • L. Fei et al., Joint multiview feature learning for hand-print recognition, IEEE Trans. Instrum. Meas. (2020)
  • L. Fei et al., Learning discriminant direction binary palmprint descriptor, IEEE Trans. Image Process. (2019)