
Augmenting the Training Set of Histological Images with Adversarial Examples

Published in: Programming and Computer Software

Abstract

We consider the problem of augmenting a set of histological images with adversarial examples in order to improve the robustness of neural network classifiers trained on the augmented set. In recent years, neural network methods have developed rapidly and achieved impressive results; however, they remain vulnerable to so-called adversarial attacks: carefully crafted, imperceptible noise added to an input image causes the network to make an incorrect prediction. The reliability of neural network methods therefore remains an important area of research. In this paper, we compare different training set augmentation methods intended to improve the robustness of neural histological image classifiers against adversarial attacks. For this purpose, we augment the training set with adversarial examples generated by several popular attack methods.
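One of the best-known generation methods of the kind the abstract refers to is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. The sketch below illustrates the idea on a toy logistic-regression classifier standing in for the paper's convolutional network; the function name, the epsilon value, and the [0, 1] pixel range are illustrative assumptions, not details from the article.

```python
import numpy as np

def fgsm_augment(x, y, w, b, eps=0.03):
    """FGSM sketch: perturb input x by eps in the direction of the
    sign of the loss gradient, so the perturbed copy can be added to
    the training set.  Here the "model" is logistic regression with
    weights w and bias b; a real experiment would backpropagate
    through the trained CNN instead.

    x : input image as a flat array of pixel values in [0, 1]
    y : true label (0 or 1)
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))       # predicted P(class = 1)
    grad_x = (p - y) * w               # d(cross-entropy)/dx for this model
    # Step of size eps along the gradient sign, clipped to valid pixel range.
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

In the augmentation setting, such perturbed copies of the training images are generated for the whole training set and added alongside the originals before (re)training the classifier.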




Funding

This work was supported by the Russian Science Foundation, project no. 22-21-00081.

Author information


Corresponding authors

Correspondence to N. D. Lokshin, A. V. Khvostikov or A. S. Krylov.

Ethics declarations

The authors declare that they have no conflicts of interest.

Additional information

Translated by Yu. Kornienko


About this article


Cite this article

Lokshin, N.D., Khvostikov, A.V. & Krylov, A.S. Augmenting the Training Set of Histological Images with Adversarial Examples. Program Comput Soft 49, 187–191 (2023). https://doi.org/10.1134/S0361768823030027

