Abstract
Supporting knowledge workers engaged in the execution of unstructured Knowledge-Intensive Processes with context-specific recommendations remains an open challenge. Case data representing expert decisions recorded in the past can be exploited to build a decision support tool that recommends which tasks to execute next. Reinforcement learning (RL) provides a framework for learning from interaction with an environment in order to achieve a process goal. RL has been widely used to model sequential decision problems and has shown great promise in solving large-scale, complex problems with long time horizons, partial observability, and high-dimensional observation and action spaces [5]. In this paper, we propose a novel RL-based framework that supports knowledge workers by recommending the optimal course of action.
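To make the idea concrete, below is a minimal sketch, not the authors' implementation, of how such a recommender could be learned offline from logged case data: a tabular Q-learning agent is fit to historical (state, task, reward, next-state) transitions, and the recommended next task in a given case context is the one with the highest learned value. The toy states, tasks, rewards, and the choice of tabular Q-learning are all illustrative assumptions.

```python
# Hypothetical sketch: offline Q-learning over logged case transitions.
# States, tasks, and rewards below are toy placeholders, not real case data.
from collections import defaultdict
import random

# Logged transitions from historical cases: (state, action, reward, next_state, done).
# States are abstract case contexts; actions are candidate next tasks.
log = [
    ("triage", "order_labs", 0.0, "labs_pending", False),
    ("labs_pending", "review_results", 0.0, "diagnosis", False),
    ("diagnosis", "start_treatment", 1.0, "resolved", True),
    ("triage", "start_treatment", -1.0, "complication", True),
]

ACTIONS = ["order_labs", "review_results", "start_treatment"]
ALPHA, GAMMA, EPOCHS = 0.1, 0.95, 200

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

for _ in range(EPOCHS):
    random.shuffle(log)
    for s, a, r, s2, done in log:
        # Standard Q-learning target, computed purely from logged data (offline).
        target = r if done else r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def recommend(state):
    """Recommend the task with the highest learned value in this case context."""
    return max(ACTIONS, key=lambda a: Q[(state, a)])

print(recommend("triage"))  # -> "order_labs" on this toy log
```

In practice, learning purely from logged expert decisions raises the off-policy issues, distributional shift and policy evaluation without online interaction, surveyed in the offline RL and off-policy evaluation literature cited below.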
References
Van der Aalst, W.M., Berens, P.: Beyond workflow management: product-driven case handling. In: Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work, pp. 42–51 (2001)
Van der Aalst, W.M., Stoffele, M., Wamelink, J.: Case handling in construction. Autom. Constr. 12(3), 303–320 (2003)
Agarwal, R., Schuurmans, D., Norouzi, M.: An optimistic perspective on offline reinforcement learning. In: International Conference on Machine Learning. PMLR (2020)
Arora, S., Doshi, P.: A survey of inverse reinforcement learning: challenges, methods and progress. Artif. Intell. 297, 103500 (2021)
Berner, C., et al.: Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680 (2019)
Di Ciccio, C., Marrella, A., Russo, A.: Knowledge-intensive processes: an overview of contemporary approaches. In: KiBP@KR, pp. 33–47 (2012)
Di Ciccio, C., Marrella, A., Russo, A.: Knowledge-intensive processes: characteristics, requirements and analysis of contemporary approaches. J. Data Semant. 4, 29–57 (2015). https://doi.org/10.1007/s13740-014-0038-4
Dulac-Arnold, G., Mankowitz, D., Hester, T.: Challenges of real-world reinforcement learning. arXiv preprint arXiv:1904.12901 (2019)
Fischer, L.: How Knowledge Workers Get Things Done: Real-World Adaptive Case Management. Future Strategies Inc. (2012)
Gauci, J., et al.: Horizon: Facebook’s open source applied reinforcement learning platform. arXiv preprint arXiv:1811.00260 (2018)
Gottesman, O., et al.: Guidelines for reinforcement learning in healthcare. Nat. Med. 25(1), 16–18 (2019)
Gottesman, O., et al.: Evaluating reinforcement learning algorithms in observational health settings. arXiv preprint arXiv:1805.12298 (2018)
Gröger, C., Schwarz, H., Mitschang, B.: Prescriptive analytics for recommendation-based business process optimization. In: Abramowicz, W., Kokkinaki, A. (eds.) BIS 2014. LNBIP, vol. 176, pp. 25–37. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06695-0_3
Günther, C.W., Van der Aalst, W.M.: Process mining in case handling systems. In: Proceedings of Multikonferenz Wirtschaftsinformatik (2006)
Hauder, M., Pigat, S., Matthes, F.: Research challenges in adaptive case management: a literature review. In: 2014 IEEE 18th International Enterprise Distributed Object Computing Conference Workshops and Demonstrations, pp. 98–107. IEEE (2014)
Horvitz, D.G., Thompson, D.J.: A generalization of sampling without replacement from a finite universe. J. Am. Stat. Assoc. 47(260), 663–685 (1952)
Johnson, A.E., et al.: MIMIC-III, a freely accessible critical care database. Sci. Data 3(1), 1–9 (2016)
Levine, S., Kumar, A., Tucker, G., Fu, J.: Offline reinforcement learning: tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643 (2020)
Marin, M.A., Hauder, M., Matthes, F.: Case management: an evaluation of existing approaches for knowledge-intensive processes. In: Reichert, M., Reijers, H.A. (eds.) BPM 2015. LNBIP, vol. 256, pp. 5–16. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-42887-1_1
Motahari-Nezhad, H.R., Swenson, K.D.: Adaptive case management: overview and research challenges. In: 2013 IEEE 15th Conference on Business Informatics, pp. 264–269. IEEE (2013)
Motahari-Nezhad, H.R., Bartolini, C.: Next best step and expert recommendation for collaborative processes in IT service management. In: Rinderle-Ma, S., Toumani, F., Wolf, K. (eds.) BPM 2011. LNCS, vol. 6896, pp. 50–61. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23059-2_7
Ng, A.Y., Russell, S.J.: Algorithms for inverse reinforcement learning. In: ICML, vol. 1, p. 2 (2000)
Ostrovski, G., Bellemare, M.G., van den Oord, A., Munos, R.: Count-based exploration with neural density models. In: International Conference on Machine Learning, pp. 2721–2730. PMLR (2017)
Precup, D., Sutton, R.S., Singh, S.P.: Eligibility traces for off-policy policy evaluation. In: Proceedings of the 17th International Conference on Machine Learning, ICML 2000, pp. 759–766. Morgan Kaufmann Publishers Inc., San Francisco (2000)
Raghu, A., et al.: Behaviour policy estimation in off-policy policy evaluation: calibration matters. arXiv preprint arXiv:1807.01066 (2018)
Rivers, E., et al.: Early goal-directed therapy in the treatment of severe sepsis and septic shock. N. Engl. J. Med. 345(19), 1368–1377 (2001)
Santipuri, M., Ghose, A., Dam, H.K., Roy, S.: Goal orchestrations: modelling and mining flexible business processes. In: Mayr, H.C., Guizzardi, G., Ma, H., Pastor, O. (eds.) ER 2017. LNCS, vol. 10650, pp. 373–387. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69904-2_29
Schonenberg, H., Weber, B., van Dongen, B., van der Aalst, W.: Supporting flexible processes through recommendations based on history. In: Dumas, M., Reichert, M., Shan, M.-C. (eds.) BPM 2008. LNCS, vol. 5240, pp. 51–66. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85758-7_7
Sindhgatta Rajan, R.: Data-driven and context-aware process provisioning. Ph.D. thesis, School of Computing and IT, University of Wollongong (2018)
Singer, M., et al.: The third international consensus definitions for sepsis and septic shock (Sepsis-3). J. Am. Med. Assoc. 315(8), 801–810 (2016)
Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press (2018)
Thomas, P., Brunskill, E.: Data-efficient off-policy policy evaluation for reinforcement learning. In: International Conference on Machine Learning, pp. 2139–2148. PMLR (2016)
Van Seijen, H., Fatemi, M., Romoff, J., Laroche, R., Barnes, T., Tsang, J.: Hybrid reward architecture for reinforcement learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS 2017, pp. 5398–5408. Curran Associates Inc., Red Hook (2017)
Wang, S., McDermott, M.B., Chauhan, G., Ghassemi, M., Hughes, M.C., Naumann, T.: MIMIC-Extract: a data extraction, preprocessing, and representation pipeline for MIMIC-III. In: Proceedings of the ACM Conference on Health, Inference, and Learning, pp. 222–235 (2020)
Weber, I., Hoffmann, J., Mendling, J.: Beyond soundness: on the verification of semantic business process models. Distrib. Parallel Databases 27(3), 271–343 (2010)
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Khan, A., Ghose, A., Dam, H. (2021). Decision Support for Knowledge Intensive Processes Using RL Based Recommendations. In: Polyvyanyy, A., Wynn, M.T., Van Looy, A., Reichert, M. (eds) Business Process Management Forum. BPM 2021. Lecture Notes in Business Information Processing, vol 427. Springer, Cham. https://doi.org/10.1007/978-3-030-85440-9_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-85439-3
Online ISBN: 978-3-030-85440-9
eBook Packages: Computer Science (R0)