Optimal service rates for the state-dependent M/G/1 queues in steady state

https://doi.org/10.1016/S0167-6377(01)00084-0

Abstract

The problem addressed in this paper is how to choose the optimal service rate for a one-server queue with state-dependent Poisson arrivals, given that the service rate can be adjusted continuously. In particular, we show how to calculate the derivatives of the equilibrium probabilities in imbedded Markov chains, how to convert departure-time probabilities into random-time probabilities by using Bayes’ theorem, and how to find the transition matrix for the case where arrivals depend on the number in the system.

Introduction

Faster servers are typically more expensive, but they may reduce waiting and/or increase throughput. There are several reasons why faster servers cost more. If the server consists of a team of workers, as is the case with construction crews, more workers mean that projects can be completed in less time. The same holds if a company has an in-house programming group: the more programmers it has, the faster programming projects can be completed. In the case of computers, one can use more memory and/or more processors to increase the speed of the computer. As pointed out by Lamond [9] and Taylor [11], letting machines run faster decreases their lifetime, which in turn increases the cost. Of course, the additional costs are only worthwhile if they are offset by reduced waiting and/or increased throughput.

In this paper, we determine the best service rate in the context of the M/G/1 queue with state-dependent arrival rates. At present, there is not much literature on the M/G/1 queue with state-dependent arrival rates [2], [7], [8]. In particular, we did not find any discussion in the open literature of the relationship between departure-time probabilities and random-time probabilities for such systems. However, in her thesis, Chen [1] solved this problem using Markov renewal processes as described in [8, pp. 288–293]. That approach is cumbersome, so we use a different one, which we believe has never before been used to relate random-time probabilities to departure-time probabilities in M/G/1 queues, even though similar methods have been applied earlier to Erlang queues [3]. The approach uses Bayes’ theorem, and it should also be applicable in other contexts, such as the GI/M/1 queue and even queueing networks, where it has the potential to simplify the analysis considerably.

We address the problem as a design problem rather than as a control problem. As far as we can tell (see [8, pp. 300–308], [7], [6] for references), the literature on the optimal design of queueing systems is rather sparse. Given that most of the literature in operations research deals with optimization, this is rather unexpected, and we believe more research on optimal design would be useful.

The revenue function we consider is as follows:

f(μ) = rU(μ) − wL(μ) − c(μ). (1)

Here, μ is the service rate, and c(μ) is the cost of the server, which is assumed to increase with μ. U(μ) is the throughput, and r is the revenue for each customer served. L(μ) is the expected number in the system, and w is the cost of waiting per customer per unit time. Note, however, that cost functions other than (1) can also be handled by the methods presented here.
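
To make (1) concrete, here is a minimal Python sketch that evaluates the revenue rate for given r and w; the particular U(μ), L(μ) and c(μ) used are illustrative placeholders (an M/M/1-style toy), not the state-dependent M/G/1 quantities derived in the paper.

    # Minimal sketch of the revenue function (1): f(mu) = r*U(mu) - w*L(mu) - c(mu).
    # The functions U, L and c below are hypothetical placeholders (an M/M/1-style
    # toy), not the state-dependent M/G/1 quantities derived in the paper.

    def revenue(mu, U, L, c, r=5.0, w=1.0):
        return r * U(mu) - w * L(mu) - c(mu)

    lam = 0.8                           # illustrative arrival rate
    U = lambda mu: lam                  # throughput of a stable single-server queue
    L = lambda mu: lam / (mu - lam)     # mean number in system (M/M/1 formula)
    c = lambda mu: 0.5 * mu             # hypothetical linear server cost

    print(revenue(1.5, U, L, c))        # revenue per unit time at mu = 1.5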

The outline of the paper is as follows. We first discuss how to calculate the derivatives of the steady-state probabilities in Markov chains. We then derive formulas for L(μ) and U(μ) for the M/G/1 queue, using the probabilities after departures. These probabilities are used to find the random time probabilities by Bayes’ theorem. Following that, we show how to find the transition matrix of the imbedded Markov chain, and how to calculate the required derivatives in the case of the M/G/1 queue.

Section snippets

Derivatives for Markov chains

This section indicates how to find derivatives in Markov chains. It is a further development of ideas presented in [7], [6].

Suppose there is a discrete-time Markov chain with the N+1 states 0, 1, 2, …, N, and the transition matrix P. If e represents the column vector with all its elements equal to 1, then the equilibrium vector π is given by

0 = π(P − I), πe = 1.

It is assumed that P depends on some parameter μ, in our context the service rate. We would like to maximize

f(μ) = g(π(μ)).

To do this, we use
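
The snippet ends before the paper's own derivative formula, which builds on the state-reduction ideas of [7], [6]. Purely as an illustration, one standard way to obtain both π(μ) and dπ/dμ numerically is to differentiate the stationarity equations, which gives π′(P − I) = −πP′ together with π′e = 0, and to solve the two resulting linear systems; a minimal Python sketch of that route follows.

    import numpy as np

    def stationary_and_derivative(P, dP):
        """Solve pi (P - I) = 0, pi e = 1, and obtain d(pi)/d(mu) from the
        differentiated equations pi' (P - I) = -pi dP with pi' e = 0.
        P and dP = dP/d(mu) are (N+1) x (N+1) arrays."""
        n = P.shape[0]
        A = (P - np.eye(n)).T
        A[-1, :] = 1.0                  # replace one redundant equation by pi e = 1
        b = np.zeros(n)
        b[-1] = 1.0
        pi = np.linalg.solve(A, b)
        rhs = -(pi @ dP)                # right-hand side of the differentiated system
        rhs[-1] = 0.0                   # normalization pi' e = 0
        dpi = np.linalg.solve(A, rhs)
        return pi, dpi

    # Tiny two-state demo: P(mu) = [[1-mu, mu], [0.5, 0.5]], so dP/dmu = [[-1, 1], [0, 0]].
    mu = 0.3
    P  = np.array([[1 - mu, mu], [0.5, 0.5]])
    dP = np.array([[-1.0, 1.0], [0.0, 0.0]])
    print(stationary_and_derivative(P, dP))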

Departure time and random time measures

We now show how we use Bayes’ theorem to link departure-time and random-time probabilities. We use λi to denote the rate at which customers join the system which, in the case of balking, is different from the arrival rate λ. To ensure that steady-state probabilities exist, we assume that lim_{i→∞} λi < μ. For our discussion, we need the throughput U = U(μ), which becomes, if Pi is the probability of having i customers in the system at a random time:

U = Σ_{i=0}^{∞} λi Pi.

This relation is easily derived by
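
As a small numerical check of U = Σ λi Pi, the snippet below uses made-up illustrative values of λi and Pi, not quantities derived in the paper.

    import numpy as np

    lam = np.array([1.0, 0.75, 0.5, 0.25, 0.0])    # joining rates lambda_i, i = 0..4
    P   = np.array([0.35, 0.30, 0.20, 0.10, 0.05]) # random-time probabilities P_i
    U = float(np.dot(lam, P))                      # throughput = sum_i lambda_i * P_i
    print(U)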

Transition matrix of the imbedded chain

For N = 1, π0 = 1 and πi = 0 for i ≠ 0, and no transition matrix is needed. Hence, we assume N > 1 and define

pij = P{j after dep. n+1 | i after dep. n}.

From the theory of the M/G/1 queue, we know that p0j = p1j. For 1 ⩽ i < N, we define

p̄ij = P{j before dep. n+1 | i after dep. n},  pij(t) = P{j at t | i at 0}.

Using the standard argument, one now has, if FS(t) is the service time distribution:

pij = p̄i,j+1 = ∫_0^∞ pi,j+1(t) dFS(t).

If the system has a buffer size of N, then the number in the system after a departure must be less than N. Hence, P becomes
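
To make the construction concrete, here is a rough numerical sketch for the constant-service-time case used in the example below. With state-dependent Poisson arrivals, pij(t) is a transient probability of a pure birth process, so for a deterministic service time 1/μ the integral over dFS(t) collapses to a single matrix-exponential evaluation; for a general FS one would integrate numerically instead. This is only one way to obtain the pij(t) and is not necessarily the computation used in the paper.

    import numpy as np
    from scipy.linalg import expm

    def imbedded_P(lam, mu, N):
        """Transition matrix of the imbedded (post-departure) chain on the
        states 0, 1, ..., N-1, for buffer size N, deterministic service time
        1/mu, and joining rate lam[k] when k customers are present."""
        # Generator of the pure-arrival (birth) process on 0..N during one service.
        Q = np.zeros((N + 1, N + 1))
        for k in range(N):
            Q[k, k] = -lam[k]
            Q[k, k + 1] = lam[k]
        pt = expm(Q / mu)               # p_{ij}(1/mu): arrivals only, no departure
        # j after departure n+1  <=>  j+1 in the system just before that departure.
        P = np.zeros((N, N))
        for i in range(1, N):
            P[i, :] = pt[i, 1:N + 1]
        P[0, :] = P[1, :]               # p_{0j} = p_{1j}
        return P

    print(imbedded_P(np.array([1.0, 0.5, 0.25]), mu=1.0, N=3))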

An example

In this section, we consider the following problem: there is a data switch with constant service time 1/μ, where μ has to be determined. The cost of the switch is c(μ) per time unit. The switch is used by 40 people, each using it at a rate of 1/40, so that λi = (40 − i)/40. The system therefore constitutes a finite-population queue with a population of N = 40.

As it so happens, the derivation for finite-population queues simplifies considerably. First, it is easy to verify that L = N(1 − U/λ0), which in
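
Putting the pieces together for this example, the sketch below builds the imbedded chain for λi = (40 − i)/40, obtains U(μ) from a renewal-reward argument over departure epochs (our own shortcut, not necessarily the paper's derivation: the expected time between departures is 1/μ, plus 1/λ0 whenever a departure leaves the system empty), uses the identity L = N(1 − U/λ0), and then maximizes f(μ) over a crude grid. The values of r, w and the cost function c(μ) are made up for illustration.

    import numpy as np
    from scipy.linalg import expm

    N = 40
    lam = (N - np.arange(N)) / N        # lambda_i = (40 - i)/40, i = 0..39
    r, w = 10.0, 1.0                    # made-up revenue and waiting-cost rates
    c = lambda mu: 2.0 * mu             # hypothetical server cost c(mu)

    def post_departure_dist(mu):
        """Stationary distribution of the post-departure chain (deterministic
        service time 1/mu), built as in the earlier sketch."""
        Q = np.zeros((N + 1, N + 1))
        for k in range(N):
            Q[k, k], Q[k, k + 1] = -lam[k], lam[k]
        pt = expm(Q / mu)
        P = np.zeros((N, N))
        for i in range(1, N):
            P[i, :] = pt[i, 1:N + 1]
        P[0, :] = P[1, :]
        A = (P - np.eye(N)).T
        A[-1, :] = 1.0
        b = np.zeros(N)
        b[-1] = 1.0
        return np.linalg.solve(A, b)

    def f(mu):
        pi = post_departure_dist(mu)
        # Renewal-reward over departure epochs: mean time between departures is
        # 1/mu, plus 1/lam[0] whenever a departure leaves the system empty.
        U = 1.0 / (1.0 / mu + pi[0] / lam[0])
        L = N * (1.0 - U / lam[0])      # the identity L = N(1 - U/lambda_0)
        return r * U - w * L - c(mu)

    mus = np.linspace(0.2, 5.0, 97)     # crude grid search over the service rate
    best = max(mus, key=f)
    print(best, f(best))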

Acknowledgements

It is gratefully acknowledged that this research was supported by the Natural Sciences and Engineering Research Council of Canada. We also thank the referee for his or her valuable suggestions.

References (11)

  • W.K. Grassmann, Optimizing steady state Markov chains by state reduction, EJOR (1996)
  • X. Chen, The use of derivatives for optimizing steady state queues, Master's Thesis, Department of Computer Science,...
  • R.S. Dick, Some theorems on a single server queue with balking, Oper. Res. (1970)
  • W.K. Grassmann, The steady state behaviour of the M/Ek/1 queue, with state dependent arrival rates, INFOR (1974)
  • W.K. Grassmann, Stochastic Systems for Management (1981)
There are more references available in the full text version of this article.
