Continuous Optimization

A stochastic quasi-Newton method for simulation response optimization
Introduction
Many decision-making problems involve determining decision variables that produce the most favorable value of some performance criterion, called the response. Two typical problem classes of this type in the operations research literature are queueing and inventory problems: the former searches for the best service rate for each service station, and the latter seeks the best order quantity and reorder point to minimize operating costs. A common characteristic of problems of this type is the presence of uncontrollable random conditions. For example, the arrival of the customers to be served, whether in the queueing problem or in the inventory problem, is stochastic. This randomness distinguishes these problems from deterministic optimization. Consider stochastic optimization problems of the following form: min_x F(x) = E[f(x, w)], where E[f(x, w)] is the expected response and f(x, w) is a performance measure evaluated at x, a vector of decision variables, and w, a vector of uncontrollable random conditions.
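Since F(x) is an expectation, in practice it can only be estimated by averaging independent simulation replications. A minimal sketch of this idea, with a hypothetical single-replication cost function and demand distribution (neither is from the paper):

```python
import random

def f(x, w):
    # Hypothetical single-replication cost: linear holding cost for
    # capacity x plus a penalty when random demand w exceeds it.
    return 0.5 * x + 3.0 * max(w - x, 0.0)

def estimate_response(x, n_reps=10_000, seed=1):
    """Estimate F(x) = E[f(x, w)] by averaging n_reps replications."""
    rng = random.Random(seed)
    # w is drawn as an exponential demand with mean 20 (an assumption
    # for illustration only).
    return sum(f(x, rng.expovariate(1.0 / 20.0)) for _ in range(n_reps)) / n_reps
```

Each call returns only a noisy estimate of F(x); runs with different seeds give different values, which is precisely why gradient-based methods need modification in this setting.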
For relatively simple problems, analytical techniques (Ross, 2003) can be applied. As the problem becomes more complicated, these analytical techniques become inapplicable; instead, computer simulation has proved to be a powerful tool. Over the years, several simulation methods based on different optimization ideas have been proposed; details can be found in several review articles (for instance, Meketon, 1987; Jacobson and Schruben, 1989; Safizadeh, 1990; Azadivar, 1992; Fu, 1994). The methods include the stochastic approximation technique (L'Ecuyer et al., 1994; L'Ecuyer and Glynn, 1994; Andradottir, 1995, 1996), the Nelder–Mead simplex method (Barton and Ivey, 1996), the quasi-Newton method (Safizadeh and Signorile, 1994; Kao et al., 1997), and sample path optimization (Plambeck et al., 1996; Robinson, 1996; Gurkan, 2000).
The basic idea of these methods is to find a direction of improvement and move some distance from the current trial point along that direction, repeating the process until no further movement can be made. Most search directions are related to the gradient of the function to be minimized. However, since f(x, w) is stochastic, the exact gradient of F(x) is not obtainable (Glynn, 1986; Ringuest, 1988; Fu et al., 1995), which undermines the effectiveness of gradient-based optimization methods. In deterministic optimization, the quasi-Newton method, which uses first derivatives to approximate second derivatives, has been shown to be effective and efficient (Fletcher, 1987; Gill et al., 1981). An intuitive idea, therefore, is to modify the quasi-Newton method to suit the stochastic environment. Safizadeh and Signorile (1994) use a quasi-Newton method in the vicinity of the optimal point to speed up the convergence of the response surface methodology. Kao et al. (1997) devise a quasi-Newton method for the whole optimization process in simulation; results on two M/M/1 queueing problems indicate that it yields better solutions than the stochastic approximation (SA) algorithm (Andradottir, 1996). Hence, this method deserves further investigation. In particular, the convergence property, which was not discussed in those two papers, will be addressed. Moreover, since a simple comparison of average responses is not very meaningful in a stochastic environment, the application of the statistical t-test in place of such comparisons will be discussed.
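The search direction in such a method is built from a stochastic quasigradient. A hedged sketch of the central-difference estimator it relies on (the names are illustrative; `simulate` stands for one noisy evaluation of f(x, w)):

```python
import random

def stochastic_quasigradient(simulate, x, eps, n_reps, rng):
    """Estimate the gradient of F(x) = E[f(x, w)] by central differencing:
    g_i ~ (R(x + eps*e_i) - R(x - eps*e_i)) / (2*eps),
    where R(.) averages the simulated response over n_reps replications."""
    def R(z):
        return sum(simulate(z, rng) for _ in range(n_reps)) / n_reps

    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps   # perturb coordinate i upward
        xm = list(x); xm[i] -= eps   # perturb coordinate i downward
        g.append((R(xp) - R(xm)) / (2.0 * eps))
    return g
```

Averaging over replications reduces, but never removes, the noise in each component, so any method consuming this estimate must tolerate error in the direction.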
This paper is organized as follows. Section 2 describes the stochastic quasi-Newton method proposed by Kao et al. (1997). To place the method on a sound theoretical footing, Section 3 discusses its convergence property. Section 4 presents a systematic approach for determining the perturbation value used in estimating the subgradient by central differencing. Finally, a four-station series queue and a two-variable inventory system are solved to demonstrate the validity of the devised method.
Section snippets
The stochastic quasi-Newton method
Consider an unconstrained optimization problem of the following form: min f(x), x ∈ R^n.
Many solution methods exist, among which the quasi-Newton method is probably the most effective and efficient one (Fletcher, 1987; Gill et al., 1981). Basically, this method uses the first derivatives to calculate a metric matrix Z to approximate the inverse of the Hessian matrix of f(x). Denote g(k) as the gradient of f(x) evaluated at x(k), and Z(0) as an initial positive definite matrix. Successive points
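A sketch of the BFGS update of the metric matrix Z, written in plain Python for a small number of variables (the formula is the standard BFGS inverse-Hessian update; the skip rule for a failed curvature condition is one common safeguard, not necessarily the paper's):

```python
def bfgs_update(Z, s, y):
    """BFGS update of the metric matrix Z (approximate inverse Hessian):

        Z' = Z + (1 + y'Zy / s'y) * s s' / s'y - (s y'Z + Zy s') / s'y,

    where s = x(k+1) - x(k) and y = g(k+1) - g(k).  The curvature
    condition s'y > 0 keeps Z' positive definite."""
    n = len(s)
    sy = sum(s[i] * y[i] for i in range(n))
    if sy <= 0.0:
        return Z  # skip the update when the curvature condition fails
    Zy = [sum(Z[i][j] * y[j] for j in range(n)) for i in range(n)]
    yZy = sum(y[i] * Zy[i] for i in range(n))
    return [[Z[i][j]
             + (1.0 + yZy / sy) * s[i] * s[j] / sy
             - (s[i] * Zy[j] + Zy[i] * s[j]) / sy
             for j in range(n)] for i in range(n)]
```

The updated matrix satisfies the secant condition Z' y = s exactly, and the next trial point is obtained by a line search along the direction -Z g(k).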
Convergence property
One important ingredient for a numerical method to be promising is a strong theoretical basis. The quasi-Newton method with the BFGS updating formula has been proved to be globally convergent in deterministic environments. Whether the convergence property still holds in stochastic environments requires investigation. To start, the following three properties possessed by deterministic quasi-Newton algorithms are stated for later reference.

Property 1. If Z(0) is a positive definite matrix and (Δx(k))^T Δg(k) > 0
Parameter determination
Since the stochastic quasigradient is estimated by central differencing (Eq. (2)), it is inevitably noise-corrupted. If ε(k) is too small, the stochastic disturbance may outweigh the real difference between R(x(k) + ε(k)ei) and R(x(k) − ε(k)ei); that is, the variance of g(x(k), ε(k)) dominates its bias, which governs the convergence rate. On the other hand, if ε(k) is too large, then the statistically estimated g(k) may not be able to describe the local
Empirical work
In general, the examples used to illustrate stochastic optimization are systems with known analytical solutions. In the literature, only a few examples have been used to investigate the effectiveness and efficiency of a stochastic optimization method. The two M/M/1 examples of Andradottir (1996), which are similar to those studied by Suri and Leung (1989), L'Ecuyer et al. (1994), and Andradottir (1995), have only one and two variables, respectively. In order to examine the performance of the method of
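The single-server building block of such test problems can be simulated with Lindley's recursion. A minimal M/M/1 sketch (parameter values are illustrative, not the paper's experimental settings):

```python
import random

def mm1_avg_wait(lam, mu, n_customers, seed=0):
    """Average waiting time in queue for an M/M/1 system, simulated with
    Lindley's recursion: W(n+1) = max(0, W(n) + S(n) - A(n+1))."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n_customers):
        total += w
        s = rng.expovariate(mu)   # service time of the current customer
        a = rng.expovariate(lam)  # interarrival time to the next customer
        w = max(0.0, w + s - a)
    return total / n_customers
```

Queueing theory gives the exact mean wait Wq = lam / (mu * (mu - lam)); with lam = 1 and mu = 2 that is 0.5, and the simulated average fluctuates around it, which makes such systems convenient benchmarks for a stochastic optimizer.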
Conclusion
Simulation response optimization has a variety of applications in management. In this paper the conventional quasi-Newton method widely used in deterministic optimization is modified to suit stochastic environments. The basic idea is to use the average stochastic quasigradient calculated from different replications by central differencing to approximate the true subgradient. In both the line search process and the quasi-Newton iteration, convergence is determined by a t-test, rather than a
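The kind of t-test termination check described above can be sketched as a two-sample Welch-style comparison (the critical value below is illustrative; the paper's exact test statistic and significance level may differ):

```python
import math
import statistics

def significantly_improved(old_obs, new_obs, t_crit=2.0):
    """Return True only when the mean response of new_obs is lower than
    that of old_obs by a statistically significant margin.  Uses a
    Welch-style two-sample t statistic against an illustrative critical
    value; a rigorous test would use the t quantile for the Welch
    degrees of freedom."""
    m_old = statistics.fmean(old_obs)
    m_new = statistics.fmean(new_obs)
    se = math.sqrt(statistics.variance(old_obs) / len(old_obs)
                   + statistics.variance(new_obs) / len(new_obs))
    return (m_old - m_new) / se > t_crit
```

The point of the test is that a small drop in the sample average is accepted as genuine progress only when it exceeds the sampling noise of the two sets of replications.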
References (40)
- et al., Comparison of gradient estimation techniques for queues with non-identical servers, Computers and Operations Research (1995)
- et al., Techniques for simulation response optimization, Operations Research Letters (1989)
- et al., A modified quasi-Newton method for optimization in simulation, International Transactions in Operational Research (1997)
- et al., A convergence theorem for nonnegative almost supermartingales and some applications
- A stochastic approximation algorithm with varying bounds, Operations Research (1995)
- A scaled stochastic approximation algorithm, Management Science (1996)
- Azadivar, F., 1992. A tutorial on simulation optimization. In: Proceedings of the 1992 Winter Simulation Conference, ...
- et al., Nelder–Mead simplex modifications for simulation optimization, Management Science (1996)
- et al., Nonlinear Programming: Theory and Algorithms (1993)
- et al., A tool for the analysis of quasi-Newton methods with application to unconstrained minimization, SIAM Journal on Numerical Analysis (1989)
- Convergence of the BFGS method for LC1 convex constrained optimization, SIAM Journal on Control and Optimization
- Stochastic quasigradient methods and their application to system optimization, Stochastics
- Practical Methods of Optimization
- Optimization via simulation: A review, Annals of Operations Research
- Techniques for optimization via simulation: An experimental study on an (s, S) inventory system, IIE Transactions
- Practical Optimization
- Simulation optimization of buffer allocation in production lines with unreliable machines, Annals of Operations Research
- Analysis of Inventory Systems
- Applied Nonlinear Programming
Cited by (12)
Load sharing redundant repairable systems with switching and reboot delay
2020, Reliability Engineering and System Safety. Citation Excerpt: It is proved that the resulting stochastic Newton-quasi algorithm is able to generate a sequence that converges to the optimal point, under certain conditions. Many theorists used Newton-quasi method for the optimization problem in repairable machining systems (c.f. Kao and Chen [31], Wang et al. [32], Ke et al. [33]). Li et al. [34] extended spectral conjugate gradient method with Newton-quasi directions and equations for the unconstrained optimization problem.
Time value of delays in unreliable production systems with mixed uncertainties of fuzziness and randomness
2016, European Journal of Operational Research. Citation Excerpt: Note that the Pi(n) values may not be derived analytically, causing the objective function to be complicated and intractable. In this case, Eq. (2) could be evaluated via simulation response optimization (Kao & Chen, 2006) to enable the application of the methodology developed in this paper. In this subsection, we apply our theoretical results to actual data from an automated car wash facility operating with only one bay to illustrate the validity of the proposed procedure when used to analyze a manufacturing or service system with an unreliable process.
A gradient method for unconstrained optimization in noisy environment
2013, Applied Numerical Mathematics

Continuous optimization via simulation using Golden Region search
2011, European Journal of Operational Research. Citation Excerpt: This method considers a probability distribution model for the location of the global optimum and tries to accumulate the density of this distribution around the global optimum by periodically updating the parameters of the distribution. Applications of Nonlinear Programming methods in SO have also been investigated in the literature (Kao and Chen, 2006; Bettonvila et al., 2009). Gradient Search methods such as Stochastic Approximation (Robbins and Monro, 1951; Kiefer and Wolfowitz, 1952) estimate the gradient of the objective function (Fu, 2006) and then use gradient methods from mathematical programming.
Statistical testing of optimality conditions in multiresponse simulation-based optimization
2009, European Journal of Operational Research

Towards explicit superlinear convergence rate for SR1
2023, Mathematical Programming