
Demonstrable and anatomy-driven knuckle identification via crease map segmentation

  • Original Paper
  • Published in Signal, Image and Video Processing (2025)

Abstract

Images of the human hand can be effectively deployed to assist in identifying the perpetrators of serious crimes. One of the most prominent and distinguishing features of the human hand is the skin of the finger knuckle regions, whose creases form complex and distinctive patterns. Exploiting knuckle skin crease patterns for perpetrator identification currently requires manual labelling by expert anthropologists, which is both laborious and time-consuming. Existing approaches to automatic knuckle recognition operate in a black-box manner, without explicitly revealing the causes of a match or non-match, whereas courtroom proceedings demand a more transparent and reproducible matching procedure driven by anatomy and the comparison of skin creases. Hence, automated algorithms that segment (trace) the knuckle creases and compare them explicitly can make the whole process demonstrable and convincing. This paper proposes an effective framework for knuckle crease identification that works directly on full hand dorsal images to (i) localize the knuckle regions, (ii) segment (trace) the knuckle creases, and (iii) compare knuckles through the segmented crease maps. The novel matching of knuckle creases is achieved through explicit comparison of the creases themselves and is evaluated on a large public dataset to demonstrate the potential of the proposed approach.
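To make the three-stage pipeline described in the abstract concrete, the following minimal Python sketch traces its structure: knuckle localization, crease segmentation, and crease-map comparison. All function bodies are hypothetical placeholders (the paper's trained localization and segmentation models are not reproduced here), and the Dice-overlap comparator is only one plausible way to score two binary crease maps, not the authors' actual matcher.

    import numpy as np

    def localize_knuckles(hand_image):
        """Stand-in for a learned knuckle localizer: returns (x, y, w, h) boxes."""
        h, w = hand_image.shape[:2]
        # Hypothetical fixed box; a real system would run a trained detector here.
        return [(w // 4, h // 4, w // 4, h // 4)]

    def segment_creases(knuckle_patch, threshold=0.5):
        """Stand-in for a segmentation network: returns a binary crease map."""
        prob = knuckle_patch.astype(np.float32) / 255.0
        return (prob > threshold).astype(np.uint8)

    def crease_match_score(map_a, map_b):
        """Dice overlap between two binary crease maps (one plausible comparator)."""
        intersection = np.logical_and(map_a, map_b).sum()
        total = map_a.sum() + map_b.sum()
        return 2.0 * intersection / total if total else 0.0

    # Toy end-to-end run on synthetic grayscale "hand" images.
    rng = np.random.default_rng(0)
    probe = rng.integers(0, 256, (512, 512), dtype=np.uint8)
    gallery = rng.integers(0, 256, (512, 512), dtype=np.uint8)

    x, y, w, h = localize_knuckles(probe)[0]                # (i) localize
    probe_map = segment_creases(probe[y:y + h, x:x + w])    # (ii) trace creases
    gallery_map = segment_creases(gallery[y:y + h, x:x + w])
    score = crease_match_score(probe_map, gallery_map)      # (iii) compare maps
    print(f"crease-map similarity: {score:.3f}")

Because the comparison operates on explicit crease maps rather than opaque feature embeddings, each stage's output can be inspected and presented, which is the demonstrability argument the paper makes.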


Data Availability

The public hand databases 11k and HD are available at https://sites.google.com/view/11khands (accessed 21 December 2021) and http://www4.comp.polyu.edu.hk/~csajaykr/knuckleV2.htm (accessed 21 December 2021), respectively. The proprietary HUQ database (collected specifically under the H-Unique project) is still being collected.


Funding

The work in this publication is supported by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 787768).

Author information


Contributions

Conceptualization, B.M.W. and R.V.; methodology, B.M.W. and R.V.; software, R.V.; validation, B.M.W., H.R., R.V., and Z.J.; formal analysis, R.V. and B.M.W.; investigation, R.V. and B.M.W.; resources, B.M.W. and R.V.; data curation, R.B.-C., B.M.W., and R.V.; writing-original draft preparation, R.V.; writing-review and editing, B.M.W., H.R., and S.B.; visualization, B.M.W.; supervision, B.M.W., H.R., and P.A.; project administration, B.M.W. and S.B.; and funding acquisition, S.B. All authors have read and agreed to the published version of the manuscript.

Corresponding authors

Correspondence to Ritesh Vyas or Bryan Williams.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Ethics approval and consent to participate

The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Faculty of Health and Medicine Research Ethics Committee (FHMREC) of Lancaster University (FHMREC Reference: FHMREC18001, dated 5 October 2018). Informed consent was obtained from all subjects involved in the study.

Consent for publication

Not applicable.

Materials availability

Not applicable.

Code availability

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Vyas, R., Williams, B., Rahmani, H. et al. Demonstrable and anatomy-driven knuckle identification via crease map segmentation. SIViP 19, 318 (2025). https://doi.org/10.1007/s11760-025-03940-z
