Abstract
Deep neural networks are used in a wide range of applications, but limitations such as a lack of explainability and robustness inhibit trust in their behavior, which is crucial in safety-critical applications such as autonomous driving. Techniques that aid in understanding and providing guarantees about neural network behavior are therefore urgently needed. In this paper, we present a case study applying a recently proposed technique, Prophecy, to analyze the behavior of a neural network model, provided by our industry partner, that is used for autonomously guiding airplanes on taxi runways. This regression model takes an image of the runway as input and produces two outputs, cross-track error and heading error, which represent the position of the plane relative to the center line. We use the Prophecy tool to extract neuron activation patterns for correctness and safety properties of the model, and we show how these patterns can be used to identify input features that explain correct and incorrect behavior. We also use the patterns to provide guarantees of consistent behavior. Finally, we explore a novel idea of using sequences of images (instead of single images) to obtain better explanations and to identify regions of consistent behavior.
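To make the notion of a neuron activation pattern concrete, the sketch below shows one way such patterns might be extracted for a ReLU regression network with the two outputs described above. This is a minimal illustration in the spirit of Prophecy, not the tool's actual API; the function names, the plain weight-matrix representation of the network, and the tolerance used for the correctness property are all assumptions made for this example.

```python
# Illustrative sketch of activation-pattern extraction (assumed names and
# thresholds, not Prophecy's actual API). The network is a small ReLU
# feed-forward model given as lists of weight matrices and bias vectors.
import numpy as np

def forward_with_pattern(weights, biases, x):
    """Run the network on input x; record each hidden neuron's on/off state."""
    pattern = []
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = W @ a + b
        pattern.append(z > 0)            # True = neuron active (ReLU "on")
        a = np.maximum(z, 0.0)
    out = weights[-1] @ a + biases[-1]   # linear output: [cross-track, heading]
    return out, np.concatenate(pattern)

def patterns_for_property(weights, biases, images, labels, tol=1.0):
    """Group inputs satisfying a correctness property (both predictions
    within `tol` of ground truth) by their shared activation pattern."""
    groups = {}
    for x, y in zip(images, labels):
        out, pat = forward_with_pattern(weights, biases, x)
        if np.all(np.abs(out - y) <= tol):           # property holds on x
            groups.setdefault(pat.tobytes(), []).append(x)
    return groups
```

In this reading, an activation pattern is the on/off signature shared by a group of inputs; the analysis then treats such a signature as a candidate precondition that, when it holds, implies the output property of interest.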