Abstract:
Deep learning-based auto-driving systems are vulnerable to adversarial example attacks, which may result in wrong decisions and accidents. An adversarial example can fool a well-trained neural network by adding barely imperceptible perturbations to clean data. In this paper, we explore the mechanism of adversarial examples and adversarial robustness from the perspective of statistical mechanics, and propose a statistical mechanics-based interpretation model of adversarial robustness. The state transition caused by adversarial training is formally constructed based on the theory of fluctuation-dissipation disequilibrium in statistical mechanics. In addition, we thoroughly study the effects of adversarial example attacks and the training process on system robustness, including the influence of different training procedures on network robustness. Our work helps to understand and explain the adversarial example problem and to improve the robustness of deep learning-based auto-driving systems.
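The abstract's notion of an adversarial example, a loss-increasing perturbation that is barely perceptible, can be illustrated with a minimal sketch. The following is not the paper's method; it is a hedged toy example of the standard fast gradient sign method (FGSM) applied to a hypothetical logistic classifier, with all weights and data invented for illustration:

```python
import numpy as np

# Toy illustration of an adversarial example via FGSM: perturb a clean
# input x by eps * sign(gradient of the loss w.r.t. x). The linear
# "model" and data below are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # weights of a toy logistic classifier (assumed)
x = rng.normal(size=8)   # clean input (assumed)
y = 1.0                  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(v):
    # Binary cross-entropy of the toy classifier on input v
    p = sigmoid(w @ v)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# For a logistic model, the gradient of the BCE loss w.r.t. x is (p - y) * w
grad_x = (sigmoid(w @ x) - y) * w

eps = 0.25
x_adv = x + eps * np.sign(grad_x)  # small, loss-increasing perturbation

print(f"clean loss:       {loss(x):.4f}")
print(f"adversarial loss: {loss(x_adv):.4f}")
```

Because each coordinate of `x_adv` moves in the direction of the loss gradient, the adversarial loss is strictly larger than the clean loss, even though the per-coordinate change is bounded by `eps`.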
Published in: IEEE Transactions on Intelligent Transportation Systems ( Volume: 23, Issue: 7, July 2022)