Decision Support for Knowledge Intensive Processes Using RL Based Recommendations

  • Conference paper
Business Process Management Forum (BPM 2021)

Part of the book series: Lecture Notes in Business Information Processing (LNBIP, volume 427)

Abstract

Supporting knowledge workers engaged in the execution of unstructured Knowledge-Intensive Processes with context-specific recommendations remains an open challenge. Case data recording past expert decisions can be exploited to build a decision support tool that recommends which tasks knowledge workers should execute next. Reinforcement learning (RL) provides a framework for learning from interaction with an environment in order to achieve a given process goal. RL has been widely used to model sequential decision problems and has shown great promise on large-scale, complex problems with long time horizons, partial observability, and high-dimensional observation and action spaces [5]. In this paper, we propose a novel RL-based framework that supports knowledge workers by recommending the optimal course of action.
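
To make the proposal concrete, the following minimal sketch (ours, not the authors' implementation) shows how a next-task recommender might be learned offline from logged case data with tabular Q-learning. The event log, state labels, task names, and reward scheme are illustrative assumptions; a realistic system would encode far richer case context and would pair learning with off-policy evaluation safeguards [24, 32].

from collections import defaultdict

GAMMA = 0.9   # discount factor
ALPHA = 0.1   # learning rate
EPOCHS = 200  # passes over the historical log

# Hypothetical event log: each case is a list of (state, task, reward)
# steps; the final reward encodes the case outcome (+1 goal met, -1 not).
cases = [
    [("start", "triage", 0.0), ("triaged", "assign_expert", 0.0),
     ("assigned", "resolve", 1.0)],
    [("start", "triage", 0.0), ("triaged", "escalate", 0.0),
     ("escalated", "resolve", -1.0)],
]

Q = defaultdict(float)  # Q[(state, task)] -> estimated long-term value

for _ in range(EPOCHS):
    for case in cases:
        for i, (state, task, reward) in enumerate(case):
            s_next = case[i + 1][0] if i + 1 < len(case) else None
            # Value of the best known task in the successor state
            # (0.0 at the end of a case, since no key matches None).
            best_next = max(
                (q for (s, a), q in list(Q.items()) if s == s_next),
                default=0.0,
            )
            # Standard off-policy Q-learning update on logged transitions.
            Q[(state, task)] += ALPHA * (
                reward + GAMMA * best_next - Q[(state, task)]
            )

def recommend(state):
    """Recommend the task with the highest learned value in this state."""
    scored = [(task, q) for (s, task), q in Q.items() if s == state]
    return max(scored, key=lambda t: t[1])[0] if scored else None

print(recommend("triaged"))  # -> 'assign_expert' on this toy log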

References

  1. Van der Aalst, W.M., Berens, P.: Beyond workflow management: product-driven case handling. In: Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work, pp. 42–51 (2001)

  2. Van der Aalst, W.M., Stoffele, M., Wamelink, J.: Case handling in construction. Autom. Constr. 12(3), 303–320 (2003)

  3. Agarwal, R., Schuurmans, D., Norouzi, M.: An optimistic perspective on offline reinforcement learning (2020)

  4. Arora, S., Doshi, P.: A survey of inverse reinforcement learning: challenges, methods and progress. Artif. Intell. 297, 103500 (2021)

  5. Berner, C., et al.: Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680 (2019)

  6. Di Ciccio, C., Marrella, A., Russo, A.: Knowledge-intensive processes: an overview of contemporary approaches. In: KiBP@KR, pp. 33–47 (2012)

  7. Di Ciccio, C., Marrella, A., Russo, A.: Knowledge-intensive processes: characteristics, requirements and analysis of contemporary approaches. J. Data Semant. 4, 29–57 (2015). https://doi.org/10.1007/s13740-014-0038-4

  8. Dulac-Arnold, G., Mankowitz, D., Hester, T.: Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901 (2019)

  9. Fischer, L.: How Knowledge Workers Get Things Done: Real-World Adaptive Case Management. Future Strategies Inc. (2012)

  10. Gauci, J., et al.: Horizon: Facebook’s open source applied reinforcement learning platform. arXiv preprint arXiv:1811.00260 (2018)

  11. Gottesman, O., et al.: Guidelines for reinforcement learning in healthcare. Nat. Med. 25(1), 16–18 (2019)

  12. Gottesman, O., et al.: Evaluating reinforcement learning algorithms in observational health settings. arXiv preprint arXiv:1805.12298 (2018)

  13. Gröger, C., Schwarz, H., Mitschang, B.: Prescriptive analytics for recommendation-based business process optimization. In: Abramowicz, W., Kokkinaki, A. (eds.) BIS 2014. LNBIP, vol. 176, pp. 25–37. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06695-0_3

  14. Günther, C.W., Van der Aalst, W.M.: Process mining in case handling systems. In: Proceeding of Multikonferenz Wirtschaftsinformatik (2006)

  15. Hauder, M., Pigat, S., Matthes, F.: Research challenges in adaptive case management: a literature review. In: 2014 IEEE 18th International Enterprise Distributed Object Computing Conference Workshops and Demonstrations, pp. 98–107. IEEE (2014)

  16. Horvitz, D.G., Thompson, D.J.: A generalization of sampling without replacement from a finite universe. J. Am. Stat. Assoc. 47(260), 663–685 (1952)

  17. Johnson, A.E., et al.: MIMIC-III, a freely accessible critical care database. Sci. Data 3(1), 1–9 (2016)

  18. Levine, S., Kumar, A., Tucker, G., Fu, J.: Offline reinforcement learning: tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643 (2020)

  19. Marin, M.A., Hauder, M., Matthes, F.: Case management: an evaluation of existing approaches for knowledge-intensive processes. In: Reichert, M., Reijers, H.A. (eds.) BPM 2015. LNBIP, vol. 256, pp. 5–16. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42887-1_1

  20. Motahari-Nezhad, H.R., Swenson, K.D.: Adaptive case management: overview and research challenges. In: 2013 IEEE 15th Conference on Business Informatics, pp. 264–269. IEEE (2013)

  21. Motahari-Nezhad, H.R., Bartolini, C.: Next best step and expert recommendation for collaborative processes in IT service management. In: Rinderle-Ma, S., Toumani, F., Wolf, K. (eds.) BPM 2011. LNCS, vol. 6896, pp. 50–61. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23059-2_7

  22. Ng, A.Y., Russell, S.J., et al.: Algorithms for inverse reinforcement learning. In: ICML, vol. 1, p. 2 (2000)

  23. Ostrovski, G., Bellemare, M.G., Oord, A., Munos, R.: Count-based exploration with neural density models. In: International Conference on Machine Learning, pp. 2721–2730. PMLR (2017)

  24. Precup, D., Sutton, R.S., Singh, S.P.: Eligibility traces for off-policy policy evaluation. In: Proceedings of the 17th International Conference on Machine Learning, ICML 2000, pp. 759–766. Morgan Kaufmann Publishers Inc., San Francisco (2000)

  25. Raghu, A., et al.: Behaviour policy estimation in off-policy policy evaluation: calibration matters. arXiv preprint arXiv:1807.01066 (2018)

  26. Rivers, E., et al.: Early goal-directed therapy in the treatment of severe sepsis and septic shock. N. Engl. J. Med. 345(19), 1368–1377 (2001)

  27. Santipuri, M., Ghose, A., Dam, H.K., Roy, S.: Goal orchestrations: modelling and mining flexible business processes. In: Mayr, H.C., Guizzardi, G., Ma, H., Pastor, O. (eds.) ER 2017. LNCS, vol. 10650, pp. 373–387. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69904-2_29

  28. Schonenberg, H., Weber, B., van Dongen, B., van der Aalst, W.: Supporting flexible processes through recommendations based on history. In: Dumas, M., Reichert, M., Shan, M.-C. (eds.) BPM 2008. LNCS, vol. 5240, pp. 51–66. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85758-7_7

  29. Sindhgatta Rajan, R.: Data-driven and context-aware process provisioning. Ph.D. thesis, School of Computing and IT, University of Wollongong (2018)

  30. Singer, M., et al.: The third international consensus definitions for sepsis and septic shock (Sepsis-3). J. Am. Med. Assoc. 315(8), 801–810 (2016)

  31. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press (2018)

  32. Thomas, P., Brunskill, E.: Data-efficient off-policy policy evaluation for reinforcement learning. In: International Conference on Machine Learning, pp. 2139–2148. PMLR (2016)

  33. Van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., Tsang, J.: Hybrid reward architecture for reinforcement learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 2017, pp. 5398–5408. Curran Associates Inc., Red Hook (2017)

  34. Wang, S., McDermott, M.B., Chauhan, G., Ghassemi, M., Hughes, M.C., Naumann, T.: MIMIC-Extract: a data extraction, preprocessing, and representation pipeline for MIMIC-III. In: Proceedings of the ACM Conference on Health, Inference, and Learning, pp. 222–235 (2020)

  35. Weber, I., Hoffmann, J., Mendling, J.: Beyond soundness: on the verification of semantic business process models. Distrib. Parallel Databases 27(3), 271–343 (2010)

Author information

Correspondence to Asjad Khan.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Khan, A., Ghose, A., Dam, H. (2021). Decision Support for Knowledge Intensive Processes Using RL Based Recommendations. In: Polyvyanyy, A., Wynn, M.T., Van Looy, A., Reichert, M. (eds) Business Process Management Forum. BPM 2021. Lecture Notes in Business Information Processing, vol 427. Springer, Cham. https://doi.org/10.1007/978-3-030-85440-9_15

  • DOI: https://doi.org/10.1007/978-3-030-85440-9_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85439-3

  • Online ISBN: 978-3-030-85440-9

  • eBook Packages: Computer Science (R0)
