Abstract
Distributed randomized algorithms, when they operate under a memoryless scheduler, behave as finite Markov chains: the probability of going, at the n-th step, from a configuration x to another one y is a constant p that depends only on x and y. Markov chain theory then tells us that, no matter where the algorithm starts, the probability of being in a “recurrent” configuration after n steps tends to 1 as n tends to infinity. In terms of self-stabilization theory, this means that the set Rec of recurrent configurations is included in the set L of “legitimate” configurations. In the literature, however, the convergence of self-stabilizing randomized algorithms is always proved in an elementary way, without explicitly resorting to results of Markov chain theory. This yields proofs that are longer, and sometimes less formal, than they could be. One of our goals in this paper is to explain convergence results of randomized distributed algorithms in terms of Markov chain theory. Our method relies on the existence of a non-increasing measure ε over the configurations of the distributed system; classically, this measure counts the number of tokens in a configuration. It also exploits a function D that expresses a distance between tokens, for a fixed number k of tokens. Our first result exhibits a sufficient condition Prop on ε and D which guarantees that, for memoryless schedulers, every recurrent configuration is legitimate. We then extend Prop to handle arbitrary schedulers, even though they may induce non-Markovian behaviours. We also explain how Markov’s notion of “lumping” naturally applies to the measure D and allows us to analyze the expected convergence time of self-stabilizing algorithms. The method is illustrated on several mutual exclusion algorithms (Herman, Israeli-Jalfon, Kakugawa-Yamashita).
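As a concrete illustration of the setting described above, the following sketch simulates Herman's self-stabilizing token ring (one of the paper's examples) under the synchronous, memoryless scheduler. This is a minimal simulation with our own identifiers, not the paper's construction: the token count plays the role of the non-increasing measure ε, and the run stops once a legitimate (single-token) configuration is reached, which happens with probability 1 by the Markov chain argument sketched in the abstract.

```python
import random

def tokens(x):
    # Process i holds a token iff its bit equals its left neighbour's bit;
    # on a ring of odd size the token count is always odd, hence >= 1.
    return sum(x[i] == x[i - 1] for i in range(len(x)))

def herman_step(x):
    # One synchronous round of Herman's protocol: a token-holder draws a
    # fresh random bit; every other process copies its left neighbour's bit.
    # Tokens can only merge, never split, so tokens() is non-increasing.
    return [random.randint(0, 1) if x[i] == x[i - 1] else x[i - 1]
            for i in range(len(x))]

random.seed(0)
n = 7                                  # ring size must be odd
x = [random.randint(0, 1) for _ in range(n)]
counts = [tokens(x)]
while counts[-1] > 1:                  # legitimate = exactly one token
    x = herman_step(x)
    counts.append(tokens(x))

# counts traces the measure epsilon along the run: it never increases.
assert all(a >= b for a, b in zip(counts, counts[1:]))
```

The loop terminates with probability 1: under the memoryless scheduler the configurations form a finite Markov chain whose recurrent configurations are exactly the single-token (legitimate) ones.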
References
J. Beauquier, S. Cordier, and S. Delaët. Optimum probabilistic self-stabilization on uniform ring. In Proc. of the 2nd Workshop on Self-Stabilizing Systems, 1995.
J. Beauquier and S. Delaët. Probabilistic self-stabilizing mutual exclusion in uniform ring. In Proc. of the 13th Annual ACM Symposium on Principles of Distributed Computing (PODC’94), page 378, 1994.
J. Beauquier, J. Durand-Lose, M. Gradinariu, and C. Johnen. Token based self-stabilizing uniform algorithms. J. of Parallel and Distributed Systems, to appear.
J. Beauquier, M. Gradinariu, and C. Johnen. Memory space requirements for self-stabilizing leader election protocols. In Proc. of the 18th Annual ACM Symposium on Principles of Distributed Computing (PODC’99), pages 199–208, 1999.
E.W. Dijkstra. Self-stabilizing systems in spite of distributed control. Communications of the ACM, 17(11):643–644, Nov. 1974.
S. Dolev. Self-Stabilization. MIT Press, 2000.
S. Dolev, A. Israeli, and S. Moran. Analyzing expected time by scheduler-luck games. IEEE Transactions on Software Engineering, 21(5):429–439, May 1995.
M. Duflot, L. Fribourg, and C. Picaronny. Randomized distributed algorithms as Markov chains. Technical report, Lab. Specification and Verification, ENS de Cachan, Cachan, France, May 2001. Available on http://www.lsv.ens-cachan.fr/Publis/RAPPORTS_LSV/.
M. Flatebo and A.K. Datta. Two-state self-stabilizing algorithms for token rings. IEEE Transactions on Software Engineering, 20(6):500–504, June 1994.
T. Herman. Probabilistic self-stabilization. IPL, 35(2):63–67, June 1990.
A. Israeli and M. Jalfon. Token management schemes and random walks yield self-stabilizing mutual exclusion. In Proc. of the 9th Annual ACM Symposium on Principles of Distributed Computing (PODC’90), pages 119–131, 1990.
A. Israeli and M. Jalfon. Uniform self-stabilizing ring orientation. Information and Computation, 104(2):175–196, 1993.
H. Kakugawa and M. Yamashita. Uniform and self-stabilizing token rings allowing unfair daemon. IEEE Trans. Parallel and Distributed Systems, 8(2), 1997.
J.G. Kemeny and J.L. Snell. Finite Markov Chains. D. van Nostrand Co., 1969.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Duflot, M., Fribourg, L., Picaronny, C. (2001). Randomized Finite-state Distributed Algorithms As Markov Chains. In: Welch, J. (eds) Distributed Computing. DISC 2001. Lecture Notes in Computer Science, vol 2180. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45414-4_17
Print ISBN: 978-3-540-42605-9
Online ISBN: 978-3-540-45414-4