
A tutorial on event-based optimization—a new optimization framework

Discrete Event Dynamic Systems

Abstract

In many practical systems, control actions or decisions are triggered by certain events. Optimizing the performance of such systems generally differs from traditional approaches such as Markov decision processes or dynamic programming. The goal of this tutorial is to introduce, in an intuitive manner, a new optimization framework called event-based optimization, which applies to a wide range of such systems. With performance potentials as building blocks, we develop two intuitive optimization algorithms to solve the event-based optimization problem. The algorithms are proposed based on an intuitive principle, and theoretical justifications are given through a performance-sensitivity-based approach. Finally, we provide several practical examples that demonstrate the effectiveness of the event-based optimization framework. We hope this framework offers a new perspective on optimizing the performance of event-triggered dynamic systems.
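The performance potentials mentioned in the abstract can be estimated directly from a sample path of an ergodic Markov chain. The sketch below is a minimal illustration of that standard idea, not the paper's own algorithm: the potential g(i) is approximated by the truncated sum of centered rewards following each visit to state i. The chain, rewards, path length, and truncation horizon are hypothetical assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state ergodic Markov chain (illustrative only)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
f = np.array([1.0, 3.0, 2.0])          # reward attached to each state
T, N = 500_000, 25                     # path length, truncation horizon

# Simulate one sample path
cum = P.cumsum(axis=1)
path = np.empty(T, dtype=int)
path[0] = 0
for t in range(T - 1):
    path[t + 1] = np.searchsorted(cum[path[t]], rng.random())
rewards = f[path]
eta = rewards.mean()                   # long-run average reward estimate

# Potential estimate: g(i) ~ E[ sum_{t=0}^{N-1} (f(X_t) - eta) | X_0 = i ]
centered = rewards - eta
csum = np.concatenate([[0.0], centered.cumsum()])
window = csum[N:] - csum[:-N]          # truncated sums over [t, t+N)
starts = path[:T - N + 1]
g_hat = np.array([window[starts == i].mean() for i in range(3)])

# Analytic check: solve pi = pi P, then (I - P + 1 pi) g = f - eta
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
eta_exact = pi @ f
g_exact = np.linalg.solve(np.eye(3) - P + np.outer(np.ones(3), pi),
                          f - eta_exact)
```

Potentials are determined only up to an additive constant, so what matters (and what the analytic check compares) are the differences g(i) - g(0); performance-difference and sensitivity formulas depend only on these differences.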


Notes

  1. If policy d is a randomized policy that selects each action with positive probability (i.e., d(i) is a probability distribution over \(\mathcal{A}\) specifying the probabilities of the actions taken when state i is visited), then a sample path under policy d visits every state-action pair (i, a), \(i \in \mathcal{S}\), \(a \in \mathcal{A}\). Therefore, \(Q^d_N(i,a)\) can be estimated for all state-action pairs (i, a) without resorting to randomizing techniques.
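As a concrete illustration of this footnote, the sketch below estimates N-step Q-factors for every state-action pair from a single sample path run under the uniformly randomized policy d(i) = (1/2, 1/2), which visits all pairs (i, a). The small MDP, its rewards, and the horizon N are hypothetical assumptions for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-state, 2-action MDP (illustrative only)
P = np.array([[[0.9, 0.1],     # P[a][i][j]: transitions under action 0
               [0.2, 0.8]],
              [[0.1, 0.9],     # transitions under action 1
               [0.7, 0.3]]])
R = np.array([[1.0, 0.0],      # R[i][a]: reward for taking a in state i
              [0.0, 2.0]])
nS, nA, N = 2, 2, 10

# Sample path under the uniformly randomized policy: every (i, a) recurs,
# so Q^d_N(i, a) is estimable for all pairs from this one path.
T = 300_000
cum = P.cumsum(axis=2)
states = np.empty(T, dtype=int)
actions = rng.integers(nA, size=T)
states[0] = 0
for t in range(T - 1):
    states[t + 1] = np.searchsorted(cum[actions[t], states[t]], rng.random())
step_r = R[states, actions]

# Q_N(i, a): average N-step cumulative reward after each visit to (i, a)
csum = np.concatenate([[0.0], step_r.cumsum()])
window = csum[N:] - csum[:-N]          # reward sums over [t, t+N)
s0, a0 = states[:T - N + 1], actions[:T - N + 1]
Q_hat = np.array([[window[(s0 == i) & (a0 == a)].mean()
                   for a in range(nA)] for i in range(nS)])

# Analytic N-step Q under the same uniform policy, for comparison
V = np.zeros(nS)
for _ in range(N):
    Q = R + np.einsum('aij,j->ia', P, V)   # Q[i,a] = R[i,a] + sum_j P[a,i,j] V[j]
    V = Q.mean(axis=1)                     # uniform policy averages over actions
```

Because the policy randomizes, the sample path itself supplies visits to every (i, a); no extra exploration device is needed, which is exactly the point of the footnote.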


Author information

Correspondence to Li Xia.

Additional information

This work was supported in part by the 111 International Collaboration Project (B06002), National Natural Science Foundation of China (60704008, 60736027, 61174072, 61203039, 61221003, 61222302, 90924001), Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation, the Specialized Research Fund for the Doctoral Program of Higher Education (20070003110, 20120002120009). This work was also supported in part by the Collaborative Research Fund of the Research Grants Council, Hong Kong Special Administrative Region, China, under Grant No. HKUST11/CRF/10.


About this article

Cite this article

Xia, L., Jia, QS. & Cao, XR. A tutorial on event-based optimization—a new optimization framework. Discrete Event Dyn Syst 24, 103–132 (2014). https://doi.org/10.1007/s10626-013-0170-6

