
On the Overlap Between Grad-CAM Saliency Maps and Explainable Visual Features in Skin Cancer Images

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12844)

Abstract

Dermatologists recognize melanomas by inspecting images in which they identify human-comprehensible visual features. In this paper, we investigate to what extent such features correspond to the saliency areas identified on CNNs trained for classification. Our experiments, conducted on two neural architectures that differ in depth and in the resolution of the last convolutional layer, quantify to what extent thresholded Grad-CAM saliency maps can be used to identify the visual features of skin cancer. We found that the best threshold value, i.e., the threshold at which we measure the highest Jaccard index, varies significantly among features, ranging from 0.3 to 0.7. In addition, we measured Jaccard indices as high as 0.143, which is almost 50% of the performance of state-of-the-art architectures specialized in pixel-level feature mask prediction, such as U-Net. Finally, a breakdown test between malignancy and classification correctness shows that higher-resolution saliency maps could help doctors spot wrong classifications.
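
As an illustration of the procedure the abstract describes, the following Python sketch (not the authors' code; the array names, image size, and threshold grid are assumptions made here for illustration) binarizes a saliency map at a given threshold, scores its overlap with a ground-truth feature mask via the Jaccard index, and sweeps thresholds to find the value with the highest overlap.

import numpy as np

def jaccard_index(pred_mask, gt_mask):
    """Jaccard index (intersection over union) between two binary masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

def threshold_saliency(saliency, threshold):
    """Binarize a saliency map whose values lie in [0, 1]."""
    return saliency >= threshold

# Placeholder inputs: in the setting of the paper, `saliency` would be a
# Grad-CAM map upscaled to image resolution and `feature_mask` a binary
# dermatologist-annotated mask for one visual feature (e.g. ISIC 2018 Task 2).
rng = np.random.default_rng(0)
saliency = rng.random((224, 224))
feature_mask = rng.random((224, 224)) > 0.8

# Sweep threshold values and keep the one giving the highest Jaccard index.
best_t, best_j = max(
    ((t, jaccard_index(threshold_saliency(saliency, t), feature_mask))
     for t in np.arange(0.1, 1.0, 0.1)),
    key=lambda pair: pair[1],
)
print(f"best threshold = {best_t:.1f}, Jaccard index = {best_j:.3f}")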


References

  1. Arun, N., et al.: Assessing the (un)trustworthiness of saliency maps for localizing abnormalities in medical imaging (2020)

  2. Brinker, T.J., Hekler, A., Enk, A.H., et al.: Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. Eur. J. Cancer 113, 47–54 (2019). https://doi.org/10.1016/j.ejca.2019.04.001

  3. Brinker, T.J., Hekler, A., Utikal, J.S., et al.: Skin cancer classification using convolutional neural networks: systematic review. J. Med. Internet Res. 20(10) (2018). https://doi.org/10.2196/11936

  4. Chattopadhay, A., Sarkar, A., Howlader, P., Balasubramanian, V.N.: Grad-CAM++: generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), March 2018. https://doi.org/10.1109/wacv.2018.00097

  5. Codella, N., Rotemberg, V., Tschandl, P., et al.: Skin lesion analysis toward Melanoma detection 2018: a challenge hosted by the international skin imaging collaboration (ISIC), February 2019. arXiv: 1902.03368

  6. Codella, N.C.F., Gutman, D., Celebi, M.E., et al.: Skin lesion analysis toward melanoma detection: a challenge at the 2017 international symposium on biomedical imaging (ISBI), hosted by the international skin imaging collaboration (ISIC), October 2017. arXiv: 1710.05006

  7. Combalia, M., Codella, N.C.F., Rotemberg, V., et al.: BCN20000: dermoscopic lesions in the wild. arXiv:1908.02288 [cs, eess], August 2019. arXiv: 1908.02288

  8. Curiel-Lewandrowski, C., et al.: Artificial intelligence approach in Melanoma. In: Fisher, D.E., Bastian, B.C. (eds.) Melanoma, pp. 1–31. Springer, New York (2019). https://doi.org/10.1007/978-1-4614-7322-0_43-1

  9. Deng, J., Dong, W., Socher, R., et al.: ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, Miami, June 2009. https://doi.org/10.1109/CVPR.2009.5206848

  10. Donahue, J., Jia, Y., Vinyals, O., et al.: DeCAF: a deep convolutional activation feature for generic visual recognition. In: Proceedings of the 31st International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 32, pp. 647–655. PMLR, Beijing, June 2014

  11. Esteva, A., Kuprel, B., Novoa, R.A., et al.: Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115 (2017). https://doi.org/10.1038/nature21056

  12. Gonzalez-Diaz, I.: DermaKNet: incorporating the knowledge of dermatologists to convolutional neural networks for skin lesion diagnosis. IEEE J. Biomed. Health Inform. 23(2), 547–559 (2019). https://doi.org/10.1109/JBHI.2018.2806962

  13. Han, S.S., Kim, M.S., Lim, W., Park, G.H., Park, I., Chang, S.E.: Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J. Invest. Dermatol. 138(7), 1529–1538 (2018)

  14. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016

  15. Holzinger, A.: Interactive machine learning for health informatics: when do we need the human-in-the-loop? Brain Inform. 3(2), 119–131 (2016)

  16. Holzinger, A., Carrington, A., Müller, H.: Measuring the quality of explanations: the system causability scale (SCS). KI - Künstliche Intelligenz 34(2), 193–198 (2020)

  17. Jahanifar, M., Tajeddin, N.Z., Asl, B.M., Gooya, A.: Supervised saliency map driven segmentation of lesions in dermoscopic images. IEEE J. Biomed. Health Inform. 23(2), 509–518 (2019). https://doi.org/10.1109/JBHI.2018.2839647

  18. Jahanifar, M., Tajeddin, N.Z., Koohbanani, N.A., et al.: Segmentation of skin lesions and their attributes using multi-scale convolutional neural networks and domain specific augmentations (2018)

  19. Kapishnikov, A., Bolukbasi, T., Viegas, F., Terry, M.: XRAI: better attributions through regions. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019

  20. Khan, M.A., et al.: Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microscopy Res. Tech. 82(6), 741–763 (2019). https://doi.org/10.1002/jemt.23220

  21. Mishra, N.K., Celebi, M.E.: An overview of Melanoma detection in dermoscopy images using image processing and machine learning, January 2016. arXiv: 1601.07843

  22. Nunnari, F., Bhuvaneshwara, C., Ezema, A.O., Sonntag, D.: A study on the fusion of pixels and patient metadata in CNN-based classification of skin lesion images. In: Holzinger, A., Kieseberg, P., Tjoa, A., Weippl, E. (eds.) CD-MAKE 2020. LNCS, vol. 12279, pp. 191–208. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-57321-8_11

  23. Nunnari, F., Sonntag, D.: A CNN toolbox for skin cancer classification. CoRR abs/1908.08187 (2019)

  24. Nunnari, F., Sonntag, D.: A software toolbox for deploying deep learning decision support systems with XAI capabilities. In: Companion of the 2021 ACM SIGCHI Symposium on Engineering Interactive Computing Systems. EICS 2021, pp. 44–49, Association for Computing Machinery, New York (2021). https://doi.org/10.1145/3459926.3464753

  25. Petsiuk, V., Das, A., Saenko, K.: RISE: randomized input sampling for explanation of black-box models. In: Proceedings of the British Machine Vision Conference (BMVC) (2018)

  26. Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28

  27. Selvaraju, R.R., Cogswell, M., Das, A., et al.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: The IEEE International Conference on Computer Vision (ICCV), October 2017

  28. Sharif Razavian, A., Azizpour, H., Sullivan, J., Carlsson, S.: CNN features off-the-shelf: an astounding baseline for recognition. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2014

  29. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition, September 2014. arXiv:1409.1556

  30. Smilkov, D., Thorat, N., Kim, B., et al.: SmoothGrad: removing noise by adding noise (2017)

  31. Sun, J., Chakraborti, T., Noble, J.A.: A comparative study of explainer modules applied to automated skin lesion classification. In: Atzmüller, M., Kliegr, T., Schmid, U. (eds.) Proceedings of the First International Workshop on Explainable and Interpretable Machine Learning (XI-ML 2020) Co-located with the 43rd German Conference on Artificial Intelligence (KI 2020), Bamberg, Germany, 21 September 2020 (Virtual Workshop). CEUR Workshop Proceedings, vol. 2796. CEUR-WS.org (2020). http://ceur-ws.org/Vol-2796/xi-ml-2020_sun.pdf

  32. Teso, S.: Toward faithful explanatory active learning with self-explainable neural nets. Interact. Adapt. Learn. 2444, 13 (2019)

  33. Tschandl, P., Rosendahl, C., Kittler, H.: The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 5(1) (2018). https://doi.org/10.1038/sdata.2018.161

  34. Zhou, S., Zhuang, Y., Meng, R.: Multi-category skin lesion diagnosis using dermoscopy images and deep CNN ensembles (2019)

Acknowledgements

This research is partly funded by the pAItient project (BMG) and the Endowed Chair of Applied Artificial Intelligence (Oldenburg University).

Author information

Corresponding author

Correspondence to Fabrizio Nunnari.

Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper

Cite this paper

Nunnari, F., Kadir, M.A., Sonntag, D. (2021). On the Overlap Between Grad-CAM Saliency Maps and Explainable Visual Features in Skin Cancer Images. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2021. Lecture Notes in Computer Science, vol 12844. Springer, Cham. https://doi.org/10.1007/978-3-030-84060-0_16

  • DOI: https://doi.org/10.1007/978-3-030-84060-0_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-84059-4

  • Online ISBN: 978-3-030-84060-0

  • eBook Packages: Computer Science, Computer Science (R0)
