Abstract
In this paper, we consider the problem of augmenting a set of histological images with adversarial examples so that neural network classifiers trained on the augmented set become more robust to adversarial attacks. In recent years, neural network methods have developed rapidly and achieved impressive results; however, they are vulnerable to so-called adversarial attacks, i.e., they make incorrect predictions on input images to which carefully crafted, imperceptible noise has been added. Hence, the reliability of neural network methods remains an important area of research. We compare different training set augmentation strategies for improving the robustness of neural histological image classifiers, augmenting the training set with adversarial examples generated by several popular attack methods.
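The abstract does not specify implementation details, so the following is only a minimal sketch of one popular way to generate adversarial examples for such augmentation: the fast gradient sign method (FGSM), shown here in PyTorch. The function names (`fgsm_example`, `augment_with_fgsm`) and the perturbation budget `eps` are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, images, labels, eps=0.01):
    """Generate adversarial examples with FGSM.

    Perturbs the input in the direction of the sign of the loss
    gradient; for small eps the perturbation is visually
    imperceptible but can flip the classifier's prediction.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range

def augment_with_fgsm(model, loader, eps=0.01):
    """Hypothetical augmentation loop: pair each training batch
    with its adversarial counterpart (same labels)."""
    augmented = []
    for images, labels in loader:
        augmented.append((fgsm_example(model, images, labels, eps), labels))
    return augmented
```

Training on the union of clean and adversarial batches is the standard adversarial-training recipe; stronger iterative attacks can be substituted for `fgsm_example` in the same loop.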
Funding
This work was supported by the Russian Science Foundation, project no. 22-21-00081.
Ethics declarations
The authors declare that they have no conflicts of interest.
Additional information
Translated by Yu. Kornienko
Cite this article
Lokshin, N.D., Khvostikov, A.V. & Krylov, A.S. Augmenting the Training Set of Histological Images with Adversarial Examples. Program Comput Soft 49, 187–191 (2023). https://doi.org/10.1134/S0361768823030027