
Pattern Recognition

Volume 34, Issue 2, February 2001, Pages 255-270

A cost-effective fingerprint recognition system for use with low-quality prints and damaged fingertips

https://doi.org/10.1016/S0031-3203(00)00003-0

Abstract

The development of a robust algorithm allowing good recognition of low-quality fingerprints with inexpensive hardware is investigated. A threshold FFT approach is developed to simultaneously smooth and enhance poor quality images derived from a database of imperfect prints. Features are extracted from the enhanced images using a number of approaches including a novel wedge-ring overlay minutia detector that is particularly robust to imperfections. Finally, a number of neural net and statistically based classifiers are evaluated for the recognition task. Results for various combinations of the process are presented and discussed with regard to their utility in such a system.

Introduction

Fingerprints are imprints or impressions of patterns formed by the friction ridges of the skin on the fingers and thumbs. The uniqueness of such patterns was recognised by past cultures, and their use as a means of personal identification dates back to at least the third century B.C. The ability to confirm the identity of crime suspects and to learn the identity of unknown felons from prints left at the scene of a crime is of obvious benefit to society. Thus, it is not surprising that law enforcement agencies have made the greatest advances in techniques and equipment for the use of fingerprints for positive identification. Beyond the apprehension of criminals, human fingerprints provide an unequalled method of personal identification for various other applications, including security or access control systems, banking and credit systems, and forensic or surveillance systems.

A shortcoming of many existing recognition techniques is that they require a reasonably good input image. Many sophisticated fingerprint enhancement algorithms do exist [1], [2], [3]. However, these techniques are suited to police work, where processing time does not have to be within 1 or 2 s. Thus, for a cheap, fast commercial system working with low-quality images there is abundant room for improvement. Coetzee and Botha [4] have investigated this problem with reasonable success. They proposed pre-processing based on recursive line-following binarisation and smoothing, followed by feature extraction using a Fourier wedge-ring detector, which detects minutia that manifest themselves as small deviations from the dominant spatial frequency of the ridges.

A further complication which has not yet been addressed is that of “bad” prints. This refers to fingerprints where the distinguishing patterns are unclear, which may be the result of one of several situations. As one gets older, one's fingerprints tend to wear quickly and the definition between valleys and ridges deteriorates significantly. Elderly people's ability to replace the cells that build up this definition diminishes, with the result that their fingerprint patterns are difficult to obtain. Manual workers, particularly in rural and mining areas, subject their fingertips to severe punishment, resulting in injuries, scars, calluses and false grooves, all of which contribute to false minutia. Additionally, any system must be robust to fingers with temporary cuts. Finally, some recognition algorithms are hindered by significantly different ridge thicknesses both within and between fingerprints. Handling both low-quality images and low-quality prints in a cheap and efficient commercial recognition system presents a serious challenge, and it is this problem that is addressed in this paper.

The noise in fingerprint images is highly correlated, and the statistics of such noise are often unknown. Although the noise covariance may be estimated, or the noise assumed to be spatially Markov, probabilistic noise-reduction approaches cannot be expected to work very well. A better alternative to the statistical approach is a problem-specific technique, and thus an efficient fingerprint-specific enhancement procedure has been developed. Both binarisation and line thinning are considered and efficiently dealt with. The large number of false minutia generated may be dealt with in two ways. Xiao and Raafat [5] proposed a post-processing, minutia-purification stage to eliminate spurious minutia. This is a time-costly procedure and therefore impractical for the problem at hand.
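The binarisation and line-thinning steps mentioned above can be illustrated with a minimal Python sketch; the local-threshold and skeletonisation routines from scikit-image used here are generic stand-ins under assumed settings, not the paper's own problem-specific procedure.

```python
# Minimal sketch of a binarisation + line-thinning stage, assuming a
# grey-scale fingerprint image held in a NumPy array.
import numpy as np
from skimage.filters import threshold_local
from skimage.morphology import skeletonize

def binarise_and_thin(img: np.ndarray, block_size: int = 25) -> np.ndarray:
    """Return a one-pixel-wide ridge skeleton from a grey-scale print."""
    # A local (adaptive) threshold copes with the grey-scale intensity
    # variations caused by uneven finger contact with the sensor.
    thresh = threshold_local(img, block_size=block_size, method="mean")
    ridges = img < thresh          # ridges are darker than valleys
    # Thinning the binary ridge map to single-pixel lines lets minutia
    # (endings, bifurcations) be read off local neighbour counts.
    return skeletonize(ridges)
```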

A far simpler, more efficient post-processing procedure is suggested and tested. The other approach, which is the main focus of this paper, is to use the total image (or a robust representation thereof) to find a match. Thus, a rotationally and translationally invariant description of the entire fingerprint (perhaps with emphasis on minutia) is required for recognition purposes. Standard probabilistic classifiers, neural network classifiers and the newer probabilistic neural network are compared, and their performance on the task at hand is evaluated.

Section snippets

Data generation

Many techniques of obtaining digital fingerprint images for identification applications exist. These may be loosely divided into inking and optical methods.

A large number of optical methods exist. These may briefly be divided into prism, holographic and laser systems. The basic principle of operation is common to all these methods, and thus the prism method was selected due to its low cost and ready availability.

The prism method operates on the principle of total internal reflection and uses

Image enhancement

A fingerprint image obtained using the optical prism method suffers from serious degradations. Image quality is poor due to imperfect or non-uniform contact of the finger with the prism surface. This causes breaks in the ridges, bridges between ridges and overall grey-scale intensity variations. Furthermore, contrast is poor and the background is highly noisy. In addition, the dirt and oil left behind by previous prints further degrade the print quality. In
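The abstract refers to a threshold FFT approach that smooths and enhances simultaneously; its details appear in the full text. As a rough illustration of Fourier-domain ridge enhancement, the sketch below re-weights each image block's spectrum so that the dominant ridge frequency is reinforced; the block size and exponent are assumptions for illustration, not the paper's settings.

```python
# Sketch of block-wise Fourier-domain enhancement: energy near the
# dominant ridge frequency is boosted relative to weak, noisy components.
import numpy as np

def enhance_block(block: np.ndarray, k: float = 0.45) -> np.ndarray:
    """Enhance one image block by emphasising its dominant spatial frequency."""
    F = np.fft.fft2(block)
    mag = np.abs(F)
    # Multiplying by |F|^k amplifies strong (ridge) frequencies relative
    # to weak (noise) ones; k is a tunable enhancement exponent.
    return np.real(np.fft.ifft2(F * mag ** k))

def enhance_image(img: np.ndarray, block: int = 32) -> np.ndarray:
    """Tile the print into blocks and enhance each one independently."""
    out = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            out[r:r + block, c:c + block] = enhance_block(img[r:r + block, c:c + block])
    return out
```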

Feature extraction

Minutia are characteristics of a fingerprint that are useful for identification. They are essentially interruptions of the normal flow of the ridges, such as ridge endings, bifurcations, dots, short ridges, bridges, loops, crossovers and interruptions.
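To make these definitions concrete, the sketch below applies the common crossing-number test to a thinned ridge skeleton to locate endings and bifurcations; note that this is a conventional spatial-domain detector shown only for illustration, not the wedge-ring overlay detector developed in the paper.

```python
# Crossing-number minutia detection on a one-pixel-wide ridge skeleton.
import numpy as np

def crossing_number_minutiae(skel: np.ndarray):
    """Return lists of (row, col) ridge endings and bifurcations."""
    s = skel.astype(int)           # 0/1 skeleton; cast so differences don't wrap
    endings, bifurcations = [], []
    rows, cols = s.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not s[r, c]:
                continue
            # The 8 neighbours in clockwise order; the modulo index closes the loop.
            p = [s[r-1, c], s[r-1, c+1], s[r, c+1], s[r+1, c+1],
                 s[r+1, c], s[r+1, c-1], s[r, c-1], s[r-1, c-1]]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:            # exactly one ridge leaves the pixel: ending
                endings.append((r, c))
            elif cn == 3:          # three ridges meet: bifurcation
                bifurcations.append((r, c))
    return endings, bifurcations
```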

Many automatic recognition systems only consider ridge endings and bifurcations as relevant minutia [11], [12], [13]. A reason for doing so is that it is more likely that bridge and island structures are due to false minutia and a large number of

Transform feature extraction via spectral analysis

Since fingerprints are broadly composed of periodic structures (the ridges and valleys), it is natural to examine them in the frequency domain in order to extract relevant features. Often the minutia in a fingerprint correspond to different spatial frequency components than the fingerprint ridges. The actual frequencies associated with the minutia vary from print to print, and it is this difference that may be exploited in order to extract meaningful information for classification
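A sketch of one such frequency-domain feature extractor is given below: the magnitude of the 2-D FFT is sampled over concentric rings and angular wedges, giving a translation-invariant (and, for the ring sums, rotation-invariant) description. The ring and wedge counts and the normalisation are illustrative assumptions, not the settings used in the paper.

```python
# Wedge-ring sampling of the FFT magnitude spectrum.
import numpy as np

def wedge_ring_features(img: np.ndarray, n_rings: int = 8, n_wedges: int = 8) -> np.ndarray:
    """Return a feature vector of ring and wedge energies of the spectrum."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    rows, cols = mag.shape
    y, x = np.mgrid[0:rows, 0:cols]
    cy, cx = rows / 2.0, cols / 2.0
    radius = np.hypot(y - cy, x - cx)
    angle = np.mod(np.arctan2(y - cy, x - cx), np.pi)   # spectrum is symmetric, so 0..pi
    r_max = radius.max()
    feats = []
    for i in range(n_rings):                            # concentric ring energies
        mask = (radius >= i * r_max / n_rings) & (radius < (i + 1) * r_max / n_rings)
        feats.append(mag[mask].sum())
    for i in range(n_wedges):                           # angular wedge energies
        mask = (angle >= i * np.pi / n_wedges) & (angle < (i + 1) * np.pi / n_wedges)
        feats.append(mag[mask].sum())
    feats = np.asarray(feats)
    return feats / feats.sum()                          # normalise against overall intensity
```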

Feature classification

The final stage of the fingerprint recognition system is the classification of the relevant feature vectors. Thus, the task of the classifier is to assign an unknown feature vector to one of many known classes. Since each individual (whose prints represent one class) is known in advance, only supervised learning paradigms were considered.
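As a sketch of the kind of comparison carried out, the snippet below pits a nearest-neighbour rule and a multi-layer perceptron (scikit-learn stand-ins) against a small Gaussian probabilistic neural network written directly, since a PNN is essentially a Parzen-window Bayes classifier. The classifier settings and helper names are illustrative assumptions, not the paper's implementations.

```python
# Comparing supervised classifiers on labelled fingerprint feature vectors.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

class GaussianPNN(BaseEstimator, ClassifierMixin):
    """Probabilistic neural network: one Gaussian kernel per training print."""
    def __init__(self, sigma: float = 0.1):
        self.sigma = sigma

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(self.y_)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, dtype=float):
            # Sum the kernel responses per class and pick the largest.
            k = np.exp(-np.sum((self.X_ - x) ** 2, axis=1) / (2 * self.sigma ** 2))
            scores = [k[self.y_ == c].sum() for c in self.classes_]
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

# X: feature vectors (e.g. wedge-ring features), y: identity labels, one class per person.
# classifiers = {"kNN": KNeighborsClassifier(3),
#                "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
#                "PNN": GaussianPNN(sigma=0.05)}
# for name, clf in classifiers.items():
#     print(name, cross_val_score(clf, X, y, cv=5).mean())
```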

Many different classifiers were tested as previous work has often placed the emphasis on the feature extraction stage and then only

Results

Many of the classifiers have variables or parameters which require optimisation, e.g. the number of neurons. Furthermore, the number of segments in the feature extractors also requires optimisation. Thus, the results presented here are for already optimised classifier variables, and only a few relevant feature-extractor parameters are shown in order to indicate the relevant trends. The numbers in the keys on the graphs indicate the number of prints used in the training stage. Most previous work has

Conclusions

A solution to the problem of a cost-effective, efficient fingerprint recognition system for use with both low-quality prints and damaged fingertips was investigated. Many fingerprint systems are commercially available, but most are too expensive and have not been fully tested on damaged or elderly people's fingerprints.

Initially, a database covering the full spectrum of damaged, elderly people's and otherwise unusual fingerprints was required. The nature of the hardware used to obtain the fingerprint


References (21)



About the Author—ANDREW JOHN WILLIS graduated in Theoretical Physics from Cambridge University, UK, and went on to postgraduate studies at London University in Solid State Electronics. His first research position was at the Council for Scientific Research in South Africa, where his work on the fabrication of microwave devices led to a Ph.D. in 1987. Later, as a Research Associate at University College London, he worked on radar signal processing for the space programme, and in 1991 he accepted a teaching position at the University of the Witwatersrand, Johannesburg. Recently Dr Willis moved to Canada, where he is a consultant to General Electric. Dr Willis has published over 50 papers and is a member of the IEEE/IEE. His interests are in statistical signal processing, pattern recognition, telecommunications and, recently, power systems.
