Cancelable Fusion of Face and Ear for Secure Multi-Biometric Template

Padma P. Paul, Marina L. Gavrilova
Copyright: © 2013 |Volume: 7 |Issue: 3 |Pages: 15
ISSN: 1557-3958|EISSN: 1557-3966|EISBN13: 9781466633896|DOI: 10.4018/ijcini.2013070105
Cite Article


APA

Paul, P. P., & Gavrilova, M. L. (2013). Cancelable Fusion of Face and Ear for Secure Multi-Biometric Template. International Journal of Cognitive Informatics and Natural Intelligence (IJCINI), 7(3), 80-94. http://doi.org/10.4018/ijcini.2013070105



Abstract

Biometric fusion to achieve multimodality has emerged as a highly successful approach to combating the problems of unimodal biometric systems, such as intraclass variability, interclass similarity, poor data quality, non-universality, and sensitivity to noise. The authors propose a new type of biometric fusion called cancelable fusion. The idea behind cancelable biometrics, or cancelability, is to transform biometric data or features into a new representation so that the stored biometric template can be easily changed in a biometric security system. Cancelable fusion combines multiple biometric traits while preserving the properties of cancelability. In this paper, the authors present a novel architecture for template generation within the context of cancelable multibiometric fusion. The authors develop a novel cancelable biometric template generation algorithm using cancelable fusion, random projection, and transformation-based feature extraction and selection. They further validate the performance of the proposed algorithm on a virtual multimodal face and ear database.
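
The template-generation idea described in the abstract, feature-level fusion of face and ear followed by a seeded random projection that can be revoked by reissuing the seed, can be sketched as follows. The function name, vector dimensions, and the choice of a Gaussian projection matrix are illustrative assumptions for this sketch, not the authors' exact algorithm.

```python
import numpy as np

def cancelable_template(face_feat, ear_feat, seed, out_dim=64):
    """Sketch of a cancelable multibiometric template.

    Fuses face and ear feature vectors by concatenation, then applies a
    random projection generated from a user-specific seed. If the stored
    template is compromised, issuing a new seed produces a new, unlinkable
    template from the same biometric data (the cancelability property).
    """
    # Feature-level fusion: concatenate the two unimodal feature vectors.
    fused = np.concatenate([face_feat, ear_feat])
    # Seeded Gaussian random projection (Johnson-Lindenstrauss style);
    # the seed acts as the revocable transformation key.
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((out_dim, fused.size)) / np.sqrt(out_dim)
    return R @ fused

# Hypothetical feature vectors standing in for extracted face/ear features.
face = np.random.rand(128)
ear = np.random.rand(64)

t1 = cancelable_template(face, ear, seed=42)
t2 = cancelable_template(face, ear, seed=43)  # reissued template differs
```

Because the projection is seeded, enrollment and verification with the same seed are deterministic, while changing the seed yields an entirely different template, which is what makes the stored template replaceable.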
