Interpretability-Guided Human Feedback During Neural Network Training

  • Conference paper
  • In: Pattern Recognition and Image Analysis (IbPRIA 2023)

Abstract

When models make wrong predictions, a typical remedy is to acquire more data related to the error, an expensive process known as active learning. Our supervised classification approach combines active learning with interpretability so that the user can correct such mistakes during the model’s training. At the end of each epoch, the training pipeline shows the user examples of mistaken cases, using interpretability to visualise which regions of the images receive the model’s attention. The user can then guide the training through a regularisation term in the loss function. This differs from previous work, where the user’s role was to annotate unlabelled data: in this proposal, the user directly influences the training procedure through the loss function. Overall, in low-data regimes, the proposed method achieved lower prediction losses on all three datasets used (0.61, 0.47 and 0.36) than fully automated training with the same amount of data (0.63, 0.52 and 0.41, respectively). We also observed higher accuracy on two of the datasets: 81.14% and 92.58%, versus the 78.41% and 92.52% obtained with fully automated methods.
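To make the mechanism concrete, the sketch below shows one way such a feedback-driven regularisation term could be wired into a training step in PyTorch: a standard cross-entropy loss is augmented with a penalty on input-gradient saliency that falls outside the regions the user marked as relevant. This is a minimal illustration under stated assumptions, not the paper's exact formulation: the function name, the choice of input-gradient saliency, the `feedback_masks` tensor and the weight `lam` are all illustrative.

```python
import torch
import torch.nn.functional as F


def feedback_regularised_loss(model, images, labels, feedback_masks, lam=0.1):
    """Cross-entropy plus a penalty on saliency outside user-approved regions.

    `feedback_masks` is assumed to be 1 inside the regions the user marked as
    relevant and 0 elsewhere (shape: N x 1 x H x W). The input-gradient
    saliency and the weight `lam` are illustrative choices.
    """
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)

    # Saliency: gradient of the true-class score with respect to the input.
    class_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(class_scores, images, create_graph=True)
    saliency = grads.abs().sum(dim=1, keepdim=True)  # collapse colour channels

    # Penalise attention that falls outside the user-approved mask.
    penalty = (saliency * (1.0 - feedback_masks)).mean()
    return ce + lam * penalty
```

In a training loop following the abstract's description, a term of this kind would be applied only to the mistaken cases the user has inspected and annotated, while the remaining samples keep the plain cross-entropy objective.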

Supported by the Portuguese Foundation for Science and Technology — FCT within PhD grant 2020.06434.BD.


Notes

  1. https://challenge.isic-archive.com/landing/2017/
  2. https://www.kaggle.com/c/aptos2019-blindness-detection
  3. https://github.com/PedroSerran0/ig-human-feedback-nn



Author information

Corresponding author

Correspondence to Tiago Gonçalves.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Serrano e Silva, P., Cruz, R., Shihavuddin, A.S.M., Gonçalves, T. (2023). Interpretability-Guided Human Feedback During Neural Network Training. In: Pertusa, A., Gallego, A.J., Sánchez, J.A., Domingues, I. (eds) Pattern Recognition and Image Analysis. IbPRIA 2023. Lecture Notes in Computer Science, vol 14062. Springer, Cham. https://doi.org/10.1007/978-3-031-36616-1_22

  • DOI: https://doi.org/10.1007/978-3-031-36616-1_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-36615-4

  • Online ISBN: 978-3-031-36616-1

  • eBook Packages: Computer Science, Computer Science (R0)
