
Abstract

Machine learning and deep learning models make accurate predictions for the specific task they were trained on, for instance, classifying vessel types from a ship's trajectory and other features. Such models can support human experts who need to obtain information on ships, e.g., to control illegal fishing. Beyond predicting a certain ship type, there is a need to explain the decision-making behind the classification, for example, which features contributed most to the predicted type. This paper applies existing explanation approaches to the task of ship classification. The underlying model is a residual neural network trained on an AIS data set. We further illustrate the explainability approaches by means of an explanatory case study and conduct a first experiment with a human expert.
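To make the setup concrete, the following is a minimal sketch of how such a pipeline could look: a small fully connected residual network classifying vessel types from tabular AIS-derived features, combined with a simple model-agnostic permutation importance that ranks features by how much accuracy drops when each one is shuffled. The feature names, layer sizes, class count, and the attribution method are illustrative assumptions, not the authors' exact architecture or explanation technique.

```python
# Illustrative sketch only: feature names, layer sizes, and the
# permutation-importance explanation are assumptions, not the
# authors' exact model or method.
import torch
import torch.nn as nn

FEATURES = ["speed", "course", "lat", "lon", "length", "draught"]  # hypothetical AIS fields
N_CLASSES = 5  # hypothetical number of vessel types


class ResidualBlock(nn.Module):
    """Fully connected residual block: out = relu(x + F(x))."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.net(x))


class ShipClassifier(nn.Module):
    """Small residual network over tabular trajectory features."""

    def __init__(self, in_dim: int = len(FEATURES), hidden: int = 64):
        super().__init__()
        self.stem = nn.Linear(in_dim, hidden)
        self.blocks = nn.Sequential(ResidualBlock(hidden), ResidualBlock(hidden))
        self.head = nn.Linear(hidden, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.blocks(torch.relu(self.stem(x))))


def permutation_importance(model, X, y, n_repeats: int = 10):
    """Model-agnostic attribution: how much does accuracy drop when a
    single feature column is shuffled, breaking its link to the label?"""
    model.eval()
    scores = {}
    with torch.no_grad():
        base = (model(X).argmax(dim=1) == y).float().mean().item()
        for j, name in enumerate(FEATURES):
            drops = []
            for _ in range(n_repeats):
                Xp = X.clone()
                Xp[:, j] = Xp[torch.randperm(X.shape[0]), j]  # shuffle one column
                acc = (model(Xp).argmax(dim=1) == y).float().mean().item()
                drops.append(base - acc)
            scores[name] = sum(drops) / n_repeats
    return scores


# Hypothetical usage with stand-in random data (a trained model and
# real AIS samples would be used in practice):
X = torch.randn(256, len(FEATURES))
y = torch.randint(0, N_CLASSES, (256,))
print(permutation_importance(ShipClassifier(), X, y))
```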



Author information

Corresponding author

Correspondence to Nadia Burkart.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Burkart, N., Huber, M.F., Anneken, M. (2021). Supported Decision-Making by Explainable Predictions of Ship Trajectories. In: Herrero, Á., Cambra, C., Urda, D., Sedano, J., Quintián, H., Corchado, E. (eds) 15th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2020). SOCO 2020. Advances in Intelligent Systems and Computing, vol 1268. Springer, Cham. https://doi.org/10.1007/978-3-030-57802-2_5
