Abstract
Dynamic task assignment involves assigning arriving tasks to a limited number of resources in order to minimize the overall cost of the assignments. To achieve optimal task assignment, it is necessary to model the assignment problem first. While there exist separate formalisms, specifically Markov Decision Processes and (Colored) Petri Nets, to model, execute, and solve different aspects of the problem, there is no integrated modeling technique. To address this gap, this paper proposes Action-Evolution Petri Nets (A-E PN) as a framework for modeling and solving dynamic task assignment problems. A-E PN provides a unified modeling technique that can represent all elements of dynamic task assignment problems. Moreover, A-E PN models are executable, which means they can be used to learn close-to-optimal assignment policies through Reinforcement Learning (RL) without additional modeling effort. To evaluate the framework, we define a taxonomy of archetypical assignment problems. We show for three cases that A-E PN can be used to learn close-to-optimal assignment policies. Our results suggest that A-E PN can be used to model and solve a broad range of dynamic task assignment problems.
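The abstract states that A-E PN models are executable and can be used to learn assignment policies through RL without additional modeling effort; the paper's references point to Gymnasium, PPO, and Stable-Baselines3-Contrib as the underlying tooling. As an illustration of the kind of training loop involved, the following is a minimal, self-contained sketch on a hypothetical two-task-type, two-resource assignment problem. The environment, cost matrix, and class name are invented for this example and do not come from the paper.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class ToyAssignmentEnv(gym.Env):
    """Hypothetical toy environment: at each step a task of a random type
    arrives and must be assigned to one of two resources. Each resource
    handles one task type cheaply and the other expensively; the agent
    maximizes reward, i.e. minimizes total assignment cost."""

    # COST[task_type][resource]: illustrative numbers, not from the paper.
    COST = np.array([[1.0, 5.0],
                     [5.0, 1.0]])

    def __init__(self, horizon=50):
        self.horizon = horizon
        self.observation_space = spaces.Discrete(2)  # type of arriving task
        self.action_space = spaces.Discrete(2)       # resource to assign it to

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self.t = 0
        self.task = int(self.np_random.integers(2))
        return self.task, {}

    def step(self, action):
        reward = -float(self.COST[self.task, action])  # negative assignment cost
        self.t += 1
        self.task = int(self.np_random.integers(2))    # next arriving task
        return self.task, reward, self.t >= self.horizon, False, {}


if __name__ == "__main__":
    env = ToyAssignmentEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=20_000)

    # The learned policy should route each task type to its cheap resource.
    obs, _ = env.reset(seed=0)
    for _ in range(5):
        action, _ = model.predict(obs, deterministic=True)
        print(f"task type {obs} -> resource {int(action)}")
        obs, _, _, _, _ = env.step(int(action))
```

The point of the A-E PN framework, as the abstract describes it, is that the environment definition above comes for free: the executable A-E PN model plays the role of the hand-coded `ToyAssignmentEnv`, so no separate RL environment has to be written.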
Notes
1. The code is publicly available at https://github.com/bpogroup/aepn-project.
Acknowledgement
The research that led to this publication was partly funded by the European Supply Chain Forum (ESCF) and the Eindhoven Artificial Intelligence Systems Institute (EAISI) under the AI Planners of the Future program.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Lo Bianco, R., Dijkman, R., Nuijten, W., van Jaarsveld, W. (2023). Action-Evolution Petri Nets: A Framework for Modeling and Solving Dynamic Task Assignment Problems. In: Di Francescomarino, C., Burattin, A., Janiesch, C., Sadiq, S. (eds) Business Process Management. BPM 2023. Lecture Notes in Computer Science, vol 14159. Springer, Cham. https://doi.org/10.1007/978-3-031-41620-0_13
DOI: https://doi.org/10.1007/978-3-031-41620-0_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-41619-4
Online ISBN: 978-3-031-41620-0
eBook Packages: Computer Science, Computer Science (R0)