Multi-view X-Ray R-CNN

  • Conference paper
  • In: Pattern Recognition (GCPR 2018)

Abstract

Motivated by the detection of prohibited objects in carry-on luggage as part of aviation security screening, we develop a CNN-based object detection approach for multi-view X-ray image data. Our contributions are two-fold. First, we introduce a novel multi-view pooling layer to perform a 3D aggregation of 2D CNN-features extracted from each view. To that end, our pooling layer exploits the known geometry of the imaging system to ensure geometric consistency of the feature aggregation. Second, we introduce an end-to-end trainable multi-view detection pipeline based on Faster R-CNN, which derives the region proposals and performs the final classification in 3D using these aggregated multi-view features. Our approach shows significant accuracy gains compared to single-view detection while being more efficient than performing single-view detection in each view.
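
To make the aggregation step concrete, the following sketch illustrates one way a geometry-consistent multi-view pooling could aggregate per-view 2D feature maps into a 3D feature volume: each voxel centre is projected into every view using the known projection matrices and the sampled features are combined. This is a minimal illustration, not the authors' implementation; the function names, array shapes, nearest-neighbour sampling, and mean aggregation over views are assumptions.

# Minimal sketch of geometry-consistent multi-view feature pooling
# (illustrative only; names, shapes, and aggregation choice are assumed).
import numpy as np

def project(P, xyz):
    """Project 3D points (N, 3) into a view given a 3x4 projection matrix P."""
    homog = np.concatenate([xyz, np.ones((xyz.shape[0], 1))], axis=1)  # (N, 4)
    uvw = homog @ P.T                                                  # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]                                    # pixel coordinates (N, 2)

def multi_view_pool(feature_maps, projections, grid_xyz):
    """
    feature_maps: list of V arrays, each (C, H, W) - per-view 2D CNN features
    projections:  list of V arrays, each (3, 4)    - known imaging geometry per view
    grid_xyz:     (N, 3) voxel centres of the target 3D grid
    returns:      (C, N) geometrically aggregated 3D feature volume (flattened)
    """
    C, H, W = feature_maps[0].shape
    pooled = np.zeros((C, grid_xyz.shape[0]))
    for feat, P in zip(feature_maps, projections):
        uv = project(P, grid_xyz)
        # Nearest-neighbour sampling for brevity; bilinear sampling would be smoother.
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
        pooled += feat[:, v, u]
    return pooled / len(feature_maps)  # simple mean over views (an assumption)

In the paper's pipeline, region proposals and the final classification then operate in 3D on such an aggregated volume; the sketch above only covers the geometric aggregation step.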


Notes

  1. Unfortunately, we are not able to release the dataset to the public. Researchers wishing to evaluate on our dataset for comparison purposes are invited to contact the corresponding author.

  2. The number of annotated objects is a restriction of the dataset only; our detector is able to handle multiple objects per image.


Acknowledgements

The authors gratefully acknowledge support by Smiths Heimann GmbH.

Author information

Correspondence to Faraz Saeedan.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Steitz, J.-M.O., Saeedan, F., Roth, S. (2019). Multi-view X-Ray R-CNN. In: Brox, T., Bruhn, A., Fritz, M. (eds) Pattern Recognition. GCPR 2018. Lecture Notes in Computer Science, vol 11269. Springer, Cham. https://doi.org/10.1007/978-3-030-12939-2_12


  • DOI: https://doi.org/10.1007/978-3-030-12939-2_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-12938-5

  • Online ISBN: 978-3-030-12939-2

