Abstract:
Neural networks are widely used in many applications ranging from classification to control. While these networks are composed of simple arithmetic operations, they are challenging to formally verify for properties such as reachability due to the presence of nonlinear activation functions. In this paper, we make the observation that Lipschitz continuity of a neural network not only can play a major role in the construction of reachable sets for neural-network controlled systems but also can be systematically controlled during training of the neural network. We build on this observation to develop a novel verification-aware knowledge distillation framework that transfers the knowledge of a trained network to a new and easier-to-verify network. Experimental results show that our method can substantially improve reachability analysis of neural-network controlled systems for several state-of-the-art tools.
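The role of Lipschitz continuity hinted at above can be illustrated with a small sketch: for a ReLU network, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant, which in turn bounds how far outputs can move for a given input perturbation (the basic ingredient of a reachable-set over-approximation). The network and weights below are illustrative assumptions, not the paper's distilled networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network (weights are illustrative only).
W1 = rng.standard_normal((16, 4))
W2 = rng.standard_normal((1, 16))

def net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def lipschitz_upper_bound(weights):
    # ReLU is 1-Lipschitz, so the product of the layers' spectral norms
    # (largest singular values) upper-bounds the network's Lipschitz constant.
    return float(np.prod([np.linalg.norm(W, 2) for W in weights]))

L = lipschitz_upper_bound([W1, W2])

# Empirical sanity check: ||f(x) - f(y)|| <= L * ||x - y|| on random pairs.
for _ in range(100):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    assert np.linalg.norm(net(x) - net(y)) <= L * np.linalg.norm(x - y) + 1e-9
```

A smaller bound `L` yields tighter reachable-set estimates, which is why keeping the Lipschitz constant under control during training (as the paper proposes) makes the resulting network easier to verify.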
Date of Conference: 04-07 November 2019
Date Added to IEEE Xplore: 27 December 2019