
Learning of physically significant features from earth observation data: an illustration for crop classification and irrigation scheme detection

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Earth observation data processing requires interpretable deep learning (DL) models that learn physically significant, meaningful features. This study proposes approaches that make a network learn such features, together with a set of interpretability- and explanation-based strategies for evaluating DL models. Adversarial variational encoding, combined with constraints that regulate the latent representations and embed label information, is employed to learn an interpretable manifold. The proposed architecture, called the interpretable adversarial encoding network (IAENet), significantly improves results compared with the main existing DL models. IAENet learns the features that are essential for distinguishing the different classes, thereby improving the interpretability of the model. Explanations for the different models are generated by analyzing the concepts each model has learned, using activation maximization; in addition, the relevance a model assigns to input features is estimated with the layer-wise relevance propagation approach. Experiments on phenological curve-based crop classification illustrate that IAENet learns relevant features (giving importance to the non-rainy season) to distinguish different irrigation schemes. This performance can be attributed to the learned interpretable manifold and to the refinement of architectural units and convolutions to account for the point nature and irregular sampling of the input data. Experiments on learning crop-specific features from multispectral images for crop-type classification indicate that IAENet learns red and green edge features crucial for distinguishing the studied crops. Improving the interpretability of the DL models is also found to reduce their sensitivity to network parameters, and the proposed evaluation measures facilitate ascertaining the physical significance of the learned manifold.
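The abstract names the two core mechanisms, adversarially regularized variational encoding with a label constraint, and activation maximization as an explanation probe, without implementation detail, so a minimal sketch may help fix ideas. This is not the authors' released code: all module sizes, loss weights, and names (`Encoder`, `encoder_loss`, `activation_maximization`) are illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_DIM, LATENT, N_CLASSES = 64, 16, 4  # illustrative sizes, not from the paper

class Encoder(nn.Module):
    """Variational encoder: maps an input to a distribution over latent codes."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IN_DIM, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

encoder = Encoder()
decoder = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, IN_DIM))
classifier = nn.Linear(LATENT, N_CLASSES)  # constraint that embeds label information
discriminator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

def encoder_loss(x, y):
    """Reconstruct the input, fool the discriminator so the latent code matches
    the prior, and keep the code class-separable (the label constraint)."""
    z, mu, logvar = encoder(x)
    recon = F.mse_loss(decoder(z), x)
    adv = F.binary_cross_entropy_with_logits(
        discriminator(z), torch.ones(x.size(0), 1))
    label = F.cross_entropy(classifier(z), y)
    return recon + 0.1 * adv + label  # loss weights are assumptions

def discriminator_loss(x):
    """Teach the discriminator to tell prior samples from encoder outputs."""
    z, _, _ = encoder(x)
    prior = torch.randn_like(z)
    real = F.binary_cross_entropy_with_logits(
        discriminator(prior), torch.ones(x.size(0), 1))
    fake = F.binary_cross_entropy_with_logits(
        discriminator(z.detach()), torch.zeros(x.size(0), 1))
    return real + fake

def activation_maximization(target_class, steps=200, lr=0.1):
    """Gradient ascent on a free input: the optimized x shows the input pattern
    (e.g., a phenological curve) that most activates one class unit."""
    x = torch.zeros(1, IN_DIM, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        _, mu, _ = encoder(x)
        (-classifier(mu)[0, target_class]).backward()
        opt.step()
    return x.detach()
```

The label head is what ties latent dimensions to class structure, which is the sense in which the manifold becomes interpretable. The complementary relevance analysis mentioned in the abstract, layer-wise relevance propagation, redistributes a class score backward through the trained network to score individual input features (time steps or spectral bands); existing libraries such as Captum or Zennit implement it, so it need not be hand-rolled.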



Author information


Corresponding author

Correspondence to Arnon Karnieli.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Arun, P.V., Karnieli, A. Learning of physically significant features from earth observation data: an illustration for crop classification and irrigation scheme detection. Neural Comput & Applic 34, 10929–10948 (2022). https://doi.org/10.1007/s00521-022-07019-5
