ABSTRACT
Neural networks (NNs) benefit many services, and we believe systems (OSes, databases, networked systems) are no exception. But applying NNs in these critical systems is challenging: users risk unexpected outcomes because NN behaviors are not well-defined. To tame these undefined behaviors, we introduce ouroboros, a framework that builds verified NNs that follow user-defined specifications. A specification comprises input and output constraints that characterize the behavior of an NN. We conduct a case study on database learned indexes to demonstrate that training verified NN models is feasible. Though many challenges remain, ouroboros enables us, for the first time, to apply NNs in critical systems with _confidence_.
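To make the notion of a specification concrete, here is a minimal sketch of how an input/output constraint pair might be checked for a learned index. It uses interval bound propagation through a small ReLU network, one standard sound (but incomplete) verification technique; this is an illustrative assumption, not necessarily the method ouroboros uses, and the function names and the toy spec (keys in a range must map to positions in a range) are hypothetical.

```python
import numpy as np

def interval_forward(lo, hi, weights, biases):
    """Propagate the input interval [lo, hi] through a ReLU MLP
    with interval arithmetic, returning sound output bounds."""
    lo = np.atleast_1d(np.asarray(lo, dtype=float))
    hi = np.atleast_1d(np.asarray(hi, dtype=float))
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split weights by sign so each bound uses the correct endpoint.
        pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = pos @ lo + neg @ hi + b
        new_hi = pos @ hi + neg @ lo + b
        lo, hi = new_lo, new_hi
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def satisfies_spec(weights, biases, key_lo, key_hi, pos_lo, pos_hi):
    """Output constraint for a learned index: every key in
    [key_lo, key_hi] must predict a position in [pos_lo, pos_hi]."""
    out_lo, out_hi = interval_forward([key_lo], [key_hi], weights, biases)
    return bool(out_lo[0] >= pos_lo and out_hi[0] <= pos_hi)

# Toy 1-2-1 network, purely for illustration.
weights = [np.array([[1.0], [-1.0]]), np.array([[0.5, 0.5]])]
biases = [np.zeros(2), np.zeros(1)]
print(satisfies_spec(weights, biases, 0.0, 10.0, 0.0, 5.0))  # spec holds
print(satisfies_spec(weights, biases, 0.0, 10.0, 0.0, 4.0))  # spec violated
```

Because interval bounds over-approximate the network's true output range, a `True` result is a sound guarantee, while a `False` result may be spurious; tighter analyses (e.g., symbolic intervals or SMT solvers such as Reluplex, both cited in this line of work) reduce that gap.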
Index Terms
- Building verified neural networks with specifications for systems