Action-Evolution Petri Nets: A Framework for Modeling and Solving Dynamic Task Assignment Problems

  • Conference paper
  • Business Process Management (BPM 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14159)

Abstract

Dynamic task assignment involves assigning arriving tasks to a limited number of resources in order to minimize the overall cost of the assignments. To achieve optimal task assignment, it is necessary to model the assignment problem first. While there exist separate formalisms, specifically Markov Decision Processes and (Colored) Petri Nets, to model, execute, and solve different aspects of the problem, there is no integrated modeling technique. To address this gap, this paper proposes Action-Evolution Petri Nets (A-E PN) as a framework for modeling and solving dynamic task assignment problems. A-E PN provides a unified modeling technique that can represent all elements of dynamic task assignment problems. Moreover, A-E PN models are executable, which means they can be used to learn close-to-optimal assignment policies through Reinforcement Learning (RL) without additional modeling effort. To evaluate the framework, we define a taxonomy of archetypical assignment problems. We show for three cases that A-E PN can be used to learn close-to-optimal assignment policies. Our results suggest that A-E PN can be used to model and solve a broad range of dynamic task assignment problems.
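The static core of the problem the abstract describes is the classical assignment problem: given a cost matrix, match each task to one resource so that total cost is minimal. As a purely illustrative sketch (not taken from the paper; the function name and the toy cost matrix are made up for the example), the following brute-force Python snippet solves a tiny static instance exactly. The dynamic setting studied in the paper generalizes this by having tasks arrive over time, which is why the authors turn to executable models and Reinforcement Learning rather than one-shot optimization.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustively find the cheapest one-to-one assignment of tasks
    (rows) to resources (columns) for a square cost matrix.
    Exact but O(n!); fine only for tiny illustrative instances."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        # perm[task] is the resource assigned to that task
        total = sum(cost[task][res] for task, res in enumerate(perm))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Toy instance: cost[i][j] = cost of assigning task i to resource j.
costs = [[4, 1, 3],
         [2, 0, 5],
         [3, 2, 2]]
assignment, total = min_cost_assignment(costs)
print(assignment, total)  # → (1, 0, 2) 5
```

Polynomial-time methods such as the Hungarian algorithm solve this static case efficiently; the dynamic variant, where assignments must be made before all tasks are known, is what motivates the learned policies discussed above.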


Notes

  1. The code is publicly available at https://github.com/bpogroup/aepn-project.


Acknowledgement

The research that led to this publication was partly funded by the European Supply Chain Forum (ESCF) and the Eindhoven Artificial Intelligence Systems Institute (EAISI) under the AI Planners of the Future program.

Author information

Correspondence to Riccardo Lo Bianco.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lo Bianco, R., Dijkman, R., Nuijten, W., van Jaarsveld, W. (2023). Action-Evolution Petri Nets: A Framework for Modeling and Solving Dynamic Task Assignment Problems. In: Di Francescomarino, C., Burattin, A., Janiesch, C., Sadiq, S. (eds) Business Process Management. BPM 2023. Lecture Notes in Computer Science, vol 14159. Springer, Cham. https://doi.org/10.1007/978-3-031-41620-0_13

  • DOI: https://doi.org/10.1007/978-3-031-41620-0_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41619-4

  • Online ISBN: 978-3-031-41620-0

  • eBook Packages: Computer Science (R0)
