Abstract
Recent advances in machine learning and artificial intelligence are now being applied in safety-critical autonomous systems, where software defects may cause severe harm to humans and the environment. Design organizations in these domains are currently unable to provide convincing arguments that systems containing complex, machine-learned software components are safe and correct.
In this paper, we present an efficient method to extract equivalence classes from decision trees and random forests, and to formally verify that their input/output mappings comply with requirements. We implement the method in our tool VoRF (Verifier of Random Forests) and evaluate its scalability on two case studies from the literature. We demonstrate that the method is practical for random forests trained on low-dimensional data with up to 25 decision trees, each of depth up to 20. Our work also demonstrates the limitations of the method on high-dimensional data and touches upon the trade-off between the number of trees and the time taken for verification.
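To make the notion of equivalence classes concrete, the sketch below shows one way to enumerate them for a single decision tree: every root-to-leaf path over axis-aligned splits defines a hyperrectangle in the input space on which the tree's output is constant, so verifying an input/output requirement reduces to checking one predicate per hyperrectangle. This is a minimal illustration assuming scikit-learn trees, not the VoRF implementation; the helper `equivalence_classes` and the toy requirement are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def equivalence_classes(tree, lower, upper):
    """Yield (lo, hi, class) hyperrectangles on which a fitted tree is constant."""
    t = tree.tree_
    stack = [(0, np.array(lower, dtype=float), np.array(upper, dtype=float))]
    while stack:
        node, lo, hi = stack.pop()
        if t.children_left[node] == -1:        # leaf: the prediction is constant here
            yield lo, hi, int(np.argmax(t.value[node]))
            continue
        f, thr = t.feature[node], t.threshold[node]
        lo_l, hi_l = lo.copy(), hi.copy()      # left branch: x[f] <= thr
        hi_l[f] = min(hi_l[f], thr)
        stack.append((t.children_left[node], lo_l, hi_l))
        lo_r, hi_r = lo.copy(), hi.copy()      # right branch: x[f] > thr
        lo_r[f] = max(lo_r[f], thr)
        stack.append((t.children_right[node], lo_r, hi_r))

# Toy requirement: inputs with x[0] >= 0.9 must never be classified as class 0.
rng = np.random.default_rng(0)
X, y = rng.random((200, 2)), rng.integers(0, 2, size=200)
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
for lo, hi, cls in equivalence_classes(clf, lower=[0, 0], upper=[1, 1]):
    if hi[0] >= 0.9 and cls == 0:              # region intersects {x[0] >= 0.9}
        print(f"potential violation on region [{lo}, {hi}]")
```

For a forest, an equivalence class is the intersection of one such hyperrectangle per tree, with the forest's output given by a vote over the per-tree classes; the number of such intersections is what drives the growth in verification time as trees become deeper or more numerous, as noted above.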
Acknowledgements
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Cite this paper
Törnblom, J., Nadjm-Tehrani, S. (2019). Formal Verification of Random Forests in Safety-Critical Applications. In: Artho, C., Ölveczky, P. (eds.) Formal Techniques for Safety-Critical Systems. FTSCS 2018. Communications in Computer and Information Science, vol. 1008. Springer, Cham. https://doi.org/10.1007/978-3-030-12988-0_4