
DeepKey: Towards End-to-End Physical Key Replication from a Single Photograph

  • Conference paper
  • Pattern Recognition (GCPR 2018)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11269)


Abstract

This paper describes DeepKey, an end-to-end deep neural architecture that takes a digital RGB image of an ‘everyday’ scene containing a pin tumbler key (e.g. lying on a table or carpet) and fully automatically infers a printable 3D key model. We report on key detection performance and describe how candidates can be transformed into physical prints, showing an example that opens a real-world lock. The system is described in detail, with a breakdown of all components: key detection, pose normalisation, bitting segmentation and 3D model inference. We provide an in-depth evaluation and conclude by reflecting on limitations, applications, potential security risks and societal impact. We contribute the DeepKey Datasets of 5,300+ images covering a small set of test keys, annotated with bounding boxes, pose and unaligned mask data.
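The abstract's pipeline (key detection, pose normalisation, bitting segmentation, 3D model inference) can be sketched schematically. The following is a minimal illustrative skeleton, not the authors' implementation: every function name, data shape and the toy "image" are placeholders, and the neural stages are replaced by trivial pixel heuristics purely to show how the stages compose.

```python
# Hypothetical sketch of a DeepKey-style pipeline:
#   detection -> bitting segmentation -> discrete key-model inference.
# Real stages would be learned networks; here each is a toy stand-in.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class KeyCandidate:
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in image coordinates
    angle_deg: float                 # in-plane rotation of the key blade


def detect_key(image: List[List[int]]) -> KeyCandidate:
    # Stand-in for a detector (e.g. a Mask R-CNN-style network): report
    # the bounding box of all non-zero pixels, assume zero rotation.
    ys = [y for y, row in enumerate(image) if any(row)]
    xs = [x for row in image for x, v in enumerate(row) if v]
    x0, y0 = min(xs), min(ys)
    return KeyCandidate(bbox=(x0, y0, max(xs) - x0 + 1, max(ys) - y0 + 1),
                        angle_deg=0.0)


def segment_bitting(image, cand: KeyCandidate, n_cuts: int = 5) -> List[int]:
    # Stand-in for bitting segmentation: split the detected blade into
    # n_cuts columns and use each column's pixel sum as a proxy depth code.
    x, y, w, h = cand.bbox
    cols = []
    for i in range(n_cuts):
        lo = x + i * w // n_cuts
        hi = x + (i + 1) * w // n_cuts
        cols.append(sum(image[r][c] for r in range(y, y + h)
                        for c in range(lo, hi)))
    return cols


def infer_key_model(depth_codes: List[int]) -> List[int]:
    # Stand-in for 3D model inference: quantise each code into one of
    # ten discrete bitting depths, as a pin tumbler key would use.
    peak = max(depth_codes) or 1
    return [round(9 * d / peak) for d in depth_codes]


if __name__ == "__main__":
    # Toy 6x10 "photograph": a flat key blade of intensity 1.
    img = [[0] * 10 for _ in range(6)]
    for r in range(2, 4):
        for c in range(1, 9):
            img[r][c] = 1
    cand = detect_key(img)
    model = infer_key_model(segment_bitting(img, cand, n_cuts=4))
    print(cand.bbox, model)
```

In the paper's actual system each stage is a trained component (detection, pose normalisation via a spatial-transformer-style step, segmentation, inference); the sketch only conveys the data flow from raw image to a discrete, printable bitting code.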


Notes

  1. DeepKey Datasets can be requested via https://data.bris.ac.uk.


Author information


Correspondence to Rory Smith.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Smith, R., Burghardt, T. (2019). DeepKey: Towards End-to-End Physical Key Replication from a Single Photograph. In: Brox, T., Bruhn, A., Fritz, M. (eds) Pattern Recognition. GCPR 2018. Lecture Notes in Computer Science, vol. 11269. Springer, Cham. https://doi.org/10.1007/978-3-030-12939-2_34


  • DOI: https://doi.org/10.1007/978-3-030-12939-2_34

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-12938-5

  • Online ISBN: 978-3-030-12939-2

  • eBook Packages: Computer Science, Computer Science (R0)
