Elsevier

Handbook of Statistics

Volume 9, 1993, Pages 169-200

6 Algorithms and complexity for Markov processes

https://doi.org/10.1016/S0169-7161(05)80130-0

Publisher Summary

This chapter presents sequential and parallel algorithms for many common problems in the area of discrete-time, finite-state Markov chains and Markov decision processes. It analyzes the complexity of these algorithms and, in some cases, establishes the intractability of the underlying problems, which conveys the computational difficulty involved in solving them. In deriving efficient algorithms, both the graph and matrix representations of Markov processes are used; each representation makes it possible to draw on existing algorithmic results for graphs and matrices. Although the chapter presents optimal sequential algorithms for some problems, there remain many problems, especially in Markov decision processes (MDPs), for which more efficient algorithms could be developed. On the parallel side, many Markov chain problems can be parallelized efficiently, but the speedups achieved by many of the algorithms are not optimal. Even though MDP problems have been shown to be P-complete, there is still a need to parallelize their algorithms as efficiently as possible, especially on architectures such as hypercubes. A similar complexity survey is needed for other problems in stochastic processes, such as stochastic games.
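The summary notes that the graph representation of a Markov chain lets the chapter draw on algorithmic results for graphs. As a minimal sketch of that idea (not code from the chapter), the following Python fragment classifies the states of a discrete-time, finite-state Markov chain into ergodic (recurrent) subchains and transient states by computing the strongly connected components of its transition graph; the function name classify_states, the tolerance parameter, and the example matrix P are illustrative assumptions.

```python
def classify_states(P, tol=1e-12):
    """Given a row-stochastic transition matrix P (list of lists),
    return (recurrent_classes, transient_states)."""
    n = len(P)
    # Graph view of the chain: an edge i -> j whenever P[i][j] > 0.
    adj = [[j for j in range(n) if P[i][j] > tol] for i in range(n)]
    radj = [[] for _ in range(n)]
    for i in range(n):
        for j in adj[i]:
            radj[j].append(i)

    # Pass 1 of Kosaraju's SCC algorithm: record DFS finishing order.
    def dfs_order(graph):
        seen, order = [False] * n, []
        for s in range(n):
            if seen[s]:
                continue
            seen[s] = True
            stack = [(s, iter(graph[s]))]
            while stack:
                v, it = stack[-1]
                advanced = False
                for w in it:
                    if not seen[w]:
                        seen[w] = True
                        stack.append((w, iter(graph[w])))
                        advanced = True
                        break
                if not advanced:
                    order.append(v)
                    stack.pop()
        return order

    # Pass 2: sweep the reversed graph in decreasing finish time,
    # assigning a component label to each state.
    comp, c = [-1] * n, 0
    for v in reversed(dfs_order(adj)):
        if comp[v] != -1:
            continue
        comp[v] = c
        stack = [v]
        while stack:
            u = stack.pop()
            for w in radj[u]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    # A communicating class is recurrent iff it is closed (no edge
    # leaves it); every state in a non-closed class is transient.
    leaves = [False] * c
    for i in range(n):
        for j in adj[i]:
            if comp[i] != comp[j]:
                leaves[comp[i]] = True
    recurrent = [[i for i in range(n) if comp[i] == k]
                 for k in range(c) if not leaves[k]]
    transient = [i for i in range(n) if leaves[comp[i]]]
    return recurrent, transient

# Example: state 0 is transient; {1, 2} form one ergodic subchain.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.3, 0.7],
     [0.0, 0.6, 0.4]]
print(classify_states(P))  # -> ([[1, 2]], [0])
```

The sketch runs in time linear in the number of nonzero transition probabilities, which illustrates why the graph representation is attractive for structural questions about a chain, while the matrix representation is the natural one for numerical questions such as computing stationary distributions.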
