
Markov Teams — An analytical approach to process migration in distributed computing systems


Abstract

A process migration mechanism offers a means to exploit the performance reserves present in networks of workstations used as personal computers by allowing processes to be migrated from overloaded processors to underused ones. Several distributed operating systems provide such a facility; the benefit obtained from it depends on the specification of a proper process migration policy. This work proposes an analytical model, the Markov Team Model, to assist in the design of such a policy. Besides deriving this model from results of classical Team Theory and Markov Decision Processes, we study the special case of homogeneous distributed computing systems and present methods for parameter estimation. Numerical examples demonstrate the benefits of using the model.
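To make the two building blocks mentioned in the abstract concrete, the following minimal sketch (not taken from the paper, and not the Markov Team Model itself) shows how they might look under strongly simplified, hypothetical assumptions: the load of each workstation is modelled as a small Markov chain whose transition matrix is estimated from an observed trace by maximum-likelihood counting, and a migration policy for a homogeneous two-workstation system is then obtained by value iteration on a toy Markov decision process with an assumed quadratic load cost and a fixed migration cost.

```python
"""
Minimal sketch, NOT the paper's Markov Team Model: a toy illustration of
(1) estimating a workstation's load-state transition matrix from a trace and
(2) deriving a migration policy for two identical workstations by value
iteration. State spaces, costs, and the migration model are hypothetical.
"""
import itertools

import numpy as np


def estimate_transition_matrix(trace, n_states):
    """Maximum-likelihood estimate: count transitions i -> j, normalise rows."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(trace[:-1], trace[1:]):
        counts[i, j] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0.0] = 1.0  # unvisited states keep an all-zero row
    return counts / row_sums


def migration_policy(P, migration_cost=0.4, discount=0.9, tol=1e-8):
    """
    Toy MDP: a state is a pair (load_a, load_b) of load levels on two identical
    nodes, each evolving independently according to P. Action 1 moves one load
    unit from the more loaded node to the other at a fixed cost; action 0 does
    nothing. The per-step cost load_a**2 + load_b**2 rewards balanced loads.
    """
    n = P.shape[0]
    states = list(itertools.product(range(n), range(n)))
    V = {s: 0.0 for s in states}

    def apply_action(s, a):
        la, lb = s
        if a == 1 and la != lb:
            if la > lb:
                la, lb = la - 1, lb + 1
            else:
                la, lb = la + 1, lb - 1
        return la, lb

    while True:
        V_new, policy = {}, {}
        for s in states:
            best_q, best_a = None, 0
            for a in (0, 1):
                la, lb = apply_action(s, a)
                cost = la ** 2 + lb ** 2 + (migration_cost if a == 1 else 0.0)
                future = sum(P[la, i] * P[lb, j] * V[(i, j)]
                             for i in range(n) for j in range(n))
                q = cost + discount * future
                if best_q is None or q < best_q:
                    best_q, best_a = q, a
            V_new[s], policy[s] = best_q, best_a
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return policy
        V = V_new


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical "true" load process (0 = idle, 1 = busy, 2 = overloaded),
    # used only to generate an observation trace for the estimator.
    true_P = np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.5, 0.2],
                       [0.1, 0.4, 0.5]])
    trace = [0]
    for _ in range(2000):
        trace.append(rng.choice(3, p=true_P[trace[-1]]))

    P_hat = estimate_transition_matrix(trace, 3)
    for state, action in sorted(migration_policy(P_hat).items()):
        print(state, "migrate" if action else "stay")
```

In this toy setting the computed policy migrates only when the load imbalance outweighs the fixed migration cost, which is the kind of trade-off any process migration policy has to encode; the analytical model proposed in the paper addresses this design question without brute-force enumeration of the decision problem.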





Cite this article

Taudes, A. Markov Teams — An analytical approach to process migration in distributed computing systems. ZOR – Zeitschrift für Operations Research (Methods and Models of Operations Research) 36, 379–403 (1992). https://doi.org/10.1007/BF01416237
