Exploring Interpretability for Predictive Process Analytics

  • Conference paper
  • In: Service-Oriented Computing (ICSOC 2020)

Abstract

In the context of business process management, predictive analytics has been applied to predicting the future state of an ongoing business process instance, for example, when the process instance will complete and what the outcome upon completion will be. Machine learning models can be trained on event logs of historical process executions to build the underlying predictive models. Multiple techniques have been proposed so far that encode the information available in an event log and construct the input features required to train a predictive model. While accuracy has been a dominant criterion in the choice among these techniques, they are often applied as black boxes when building predictive models. In this paper, we derive explanations using interpretable machine learning techniques to compare the suitability of multiple predictive models of high accuracy. The explanations allow us to gain an understanding of the underlying reasons for a prediction and to highlight scenarios where accuracy alone may not be sufficient for assessing the suitability of the techniques used to encode event log data into the features used by a predictive model. Findings from this study further motivate the need to incorporate interpretability in predictive process analytics.
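
The pipeline the abstract outlines, encoding event-log prefixes into feature vectors, training a classifier on them, and then applying a post-hoc explainer such as LIME [11] to individual predictions, can be illustrated with a minimal sketch. This is not the authors' implementation: the activity names, the toy log, and the aggregation (frequency) encoding below are assumptions made purely for demonstration.

    # A minimal sketch, not the authors' code: frequency-encode event-log
    # prefixes, train a gradient-boosting outcome classifier, and use LIME
    # to inspect which encoded features drive a single prediction.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from lime.lime_tabular import LimeTabularExplainer

    ACTIVITIES = ["register", "check", "approve", "reject", "notify"]  # assumed names

    def encode_prefix(events):
        """Aggregation encoding: count of each activity in a case prefix."""
        return [events.count(a) for a in ACTIVITIES]

    # Toy event log: (case prefix, outcome label) pairs -- purely illustrative.
    log = [
        (["register", "check", "approve", "notify"], 1),
        (["register", "check", "reject"], 0),
        (["register", "check", "check", "approve"], 1),
        (["register", "reject", "notify"], 0),
    ] * 25  # repeated so the classifier has enough rows to fit

    X = np.array([encode_prefix(events) for events, _ in log], dtype=float)
    y = np.array([label for _, label in log])

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # LIME fits a simple local surrogate around one instance and reports
    # how each encoded feature pushes the predicted outcome up or down.
    explainer = LimeTabularExplainer(
        X, feature_names=ACTIVITIES, class_names=["negative", "positive"],
        mode="classification", discretize_continuous=False,
    )
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
    for feature, weight in exp.as_list():
        print(f"{feature}: {weight:+.3f}")  # per-feature contribution to this prediction

In the paper itself, the models are trained on real-life event logs using the benchmark implementations of [1, 3]; the sketch only shows where the encoding step and the explanation step plug together, and how per-feature weights expose the reasons behind a single prediction.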

References

  1. Teinemaa, I., Dumas, M., La Rosa, M., Maggi, F.M.: Outcome-oriented predictive process monitoring: review and benchmark. ACM TKDD 13(2), 17:1–17:57 (2019)

  2. Evermann, J., Rehse, J., Fettke, P.: Predicting process behaviour using deep learning. Decis. Support Syst. 100, 129–140 (2017)

  3. Verenich, I., Dumas, M., La Rosa, M., Maggi, F.M., Teinemaa, I.: Survey and cross-benchmark comparison of remaining time prediction methods in business process monitoring. ACM TIST 10(4), 34:1–34:34 (2019)

  4. Lakkaraju, H., et al.: Faithful and customizable explanations of black box models. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES), pp. 131–138 (2019)

  5. Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206–215 (2019)

  6. Guidotti, R., et al.: A survey of methods for explaining black box models. ACM Comput. Surv. 51(5), 93:1–93:42 (2018)

  7. Rehse, J., Mehdiyev, N., Fettke, P.: Towards explainable process predictions for Industry 4.0 in the DFKI-Smart-Lego-Factory. KI 33(2), 181–187 (2019)

  8. Lipton, Z.C.: The mythos of model interpretability. Commun. ACM 61(10), 36–43 (2018)

  9. Molnar, C.: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Leanpub (2018)

  10. Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001)

  11. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD, pp. 1135–1144 (2016)

  12. Kaufman, S., Rosset, S., Perlich, C.: Leakage in data mining: formulation, detection, and avoidance. In: Proceedings of the 17th ACM SIGKDD, pp. 556–563 (2011)

Acknowledgement

This research was supported in part by ARC Discovery Grant DP190100314. We also thank the authors of the two process monitoring benchmarks [1, 3] for the high-quality code they released, which allowed us to explore model interpretability for predictive process analytics.

Author information

Corresponding author

Correspondence to Chun Ouyang.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Sindhgatta, R., Ouyang, C., Moreira, C. (2020). Exploring Interpretability for Predictive Process Analytics. In: Kafeza, E., Benatallah, B., Martinelli, F., Hacid, H., Bouguettaya, A., Motahari, H. (eds) Service-Oriented Computing. ICSOC 2020. Lecture Notes in Computer Science, vol 12571. Springer, Cham. https://doi.org/10.1007/978-3-030-65310-1_31

  • DOI: https://doi.org/10.1007/978-3-030-65310-1_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65309-5

  • Online ISBN: 978-3-030-65310-1

  • eBook Packages: Computer Science, Computer Science (R0)
