Abstract
Coverage-guided fuzz testing (CGF) techniques have been applied to deep neural network (DNN) testing in recent years, generating large numbers of test samples to uncover inherent defects in DNN models. However, the effectiveness of CGF techniques that rely on structured coverage metrics as coverage criteria has recently been called into question. A few unstructured coverage metrics, such as surprise adequacy, consider only the diversity of the test samples relative to the training set, while ignoring the diversity among the test samples themselves. Moreover, existing surprise adequacy metrics have limitations in their application. Therefore, this paper proposes DeepTD, a diversity-guided test generation method for deep neural networks. First, DeepTD evenly selects high-loss test samples from each class, ensuring that these test seeds have a strong ability to reveal model errors. Then, DeepTD transforms these test seeds to enhance the diversity of the generated samples. Finally, a Cluster-based Surprise Adequacy metric is designed to guide the generation of test samples. To evaluate the effectiveness of DeepTD, six DNN models are selected as subjects, covering two well-known image datasets. Experimental results demonstrate that Cluster-based Surprise Adequacy outperforms the two existing metrics not only in computational efficiency but also in discovering more model defects. Moreover, the test samples generated by DeepTD are on average 6.04% and 3.24% more effective for model retraining on MNIST and CIFAR10, respectively, compared to baseline methods.
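The core idea of a cluster-based surprise metric can be illustrated with a minimal sketch. The following is our own illustrative reconstruction, not the paper's implementation: we assume training-set activation vectors are grouped with k-means, and a test sample's surprise score is taken as its distance to the nearest cluster centroid (larger distance means more "surprising", hence more diverse relative to the training data). The function names `fit_centroids` and `cluster_surprise` are hypothetical.

```python
import numpy as np

def fit_centroids(train_acts: np.ndarray, k: int = 10, iters: int = 50,
                  seed: int = 0) -> np.ndarray:
    """Plain k-means over training-set activation vectors (shape: n x d)."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct training activations
    centroids = train_acts[rng.choice(len(train_acts), k, replace=False)]
    for _ in range(iters):
        # assign each activation to its nearest centroid
        dists = np.linalg.norm(train_acts[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned activations
        for j in range(k):
            members = train_acts[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def cluster_surprise(test_acts: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Surprise score: distance from each test activation to its nearest centroid."""
    dists = np.linalg.norm(test_acts[:, None] - centroids[None], axis=2)
    return dists.min(axis=1)
```

In a fuzzing loop, a generated sample would be kept (and further mutated) when its `cluster_surprise` score exceeds a threshold, steering generation toward inputs unlike the training distribution; compared with per-sample kernel density estimation used by likelihood-based surprise adequacy, a fixed set of centroids makes each score a cheap nearest-centroid lookup.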
Acknowledgment
This work is supported by the National Natural Science Foundation of China (No. 62202223), the Natural Science Foundation of Jiangsu Province (No. BK20220881), the Open Fund of the State Key Laboratory for Novel Software Technology (No. KFKT2021B32), the Fundamental Research Funds for the Central Universities (No. NT2022027) and the Postgraduate Research Practice Innovation Program of NUAA (No. xcxjh20221613).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhu, J., Tao, C., Guo, H., Ju, Y. (2024). DeepTD: Diversity-Guided Deep Neural Network Test Generation. In: Hermanns, H., Sun, J., Bu, L. (eds) Dependable Software Engineering. Theories, Tools, and Applications. SETTA 2023. Lecture Notes in Computer Science, vol 14464. Springer, Singapore. https://doi.org/10.1007/978-981-99-8664-4_24
DOI: https://doi.org/10.1007/978-981-99-8664-4_24
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8663-7
Online ISBN: 978-981-99-8664-4