Authors:
Mallek Mziou-Sallami 1,2 and Faouzi Adjed 3,2
Affiliations:
1 CEA, Evry, France; 2 IRT SystemX, Palaiseau, France; 3 Expleo Group, Montigny-le-Bretonneux, France
Keyword(s):
NN Robustness, Uncertainty in AI, Perception, Abstract Interpretation.
Abstract:
Deep learning models do not yet achieve the levels of confidence, explainability, and transparency required for integration into safety-critical systems. In the context of DNN-based image classifiers, robustness was first studied under simple image attacks (2D rotation, brightness) and subsequently under other geometric perturbations. In this paper, we introduce a new method to certify deep image classifiers against convolutional attacks. Using abstract interpretation theory, we formulate lower and upper bounds with abstract intervals to support a further class of advanced attacks, including image filtering. We evaluate the proposed method on the MNIST and CIFAR10 databases and on several DNN architectures. The results show that convolutional neural networks are more robust against filtering attacks, while multilayer perceptron robustness decreases as the number of neurons and hidden layers grows. These results show that the complexity of DNN models improves prediction accuracy but often impacts robustness.
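Illustrative sketch (not the authors' implementation): the abstract describes bounding a filtering (convolutional) attack with abstract intervals. The Python snippet below, with a hypothetical interval_conv2d helper and an assumed +/- eps pixel perturbation, shows how per-pixel lower and upper bounds can be propagated through a convolution kernel using standard interval arithmetic.

import numpy as np

def interval_conv2d(lower, upper, kernel):
    # Propagate pixel-wise [lower, upper] bounds through a 2D convolution.
    # A positive kernel weight maps lower->lower and upper->upper;
    # a negative weight swaps the bounds (standard interval arithmetic).
    kh, kw = kernel.shape
    h, w = lower.shape
    out_l = np.zeros((h - kh + 1, w - kw + 1))
    out_u = np.zeros_like(out_l)
    w_pos = np.maximum(kernel, 0.0)
    w_neg = np.minimum(kernel, 0.0)
    for i in range(out_l.shape[0]):
        for j in range(out_l.shape[1]):
            patch_l = lower[i:i + kh, j:j + kw]
            patch_u = upper[i:i + kh, j:j + kw]
            out_l[i, j] = np.sum(w_pos * patch_l + w_neg * patch_u)
            out_u[i, j] = np.sum(w_pos * patch_u + w_neg * patch_l)
    return out_l, out_u

# Example: bounds for a 3x3 blur filter applied to an image with +/- eps noise.
img = np.random.rand(8, 8)
eps = 0.01
blur = np.full((3, 3), 1.0 / 9.0)
lo, up = interval_conv2d(img - eps, img + eps, blur)
assert np.all(lo <= up)

The resulting interval image can then be pushed through the classifier's layers to check whether the predicted class is stable over the whole perturbation set, which is the general certification pattern used in abstract-interpretation-based robustness analysis.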