
Nash Q-learning multi-agent flow control for high-speed networks


Abstract:

To address congestion in high-speed networks, a multi-agent flow controller (MFC) based on the Q-learning algorithm in conjunction with the theory of Nash equilibrium is proposed. Because of uncertainties and highly time-varying dynamics, it is difficult to accurately obtain complete information about high-speed networks, especially in the multi-bottleneck case. The Nash Q-learning algorithm, which does not require a mathematical model of the network, is therefore particularly well suited to high-speed networks. It obtains Nash Q-values through trial-and-error interaction with the network environment and uses them to improve its behavior policy. Through this learning procedure, MFCs learn to take the best actions to regulate source flow, achieving high throughput and a low packet loss ratio. Simulation results show that the proposed method improves network performance and effectively avoids congestion.
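The paper itself is not reproduced here, but the core mechanism the abstract describes, each agent updating its Q-values toward the reward plus the discounted value of a Nash equilibrium of the next-state stage game, can be sketched as follows. This is a minimal illustrative sketch under assumed toy dimensions (two agents, small discrete state and action sets, a random placeholder environment); the state/reward definitions are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical toy setup: two flow-controller agents, each choosing a
# discrete source-rate action. Sizes and rewards are illustrative only.
N_STATES, N_ACTIONS = 4, 3
GAMMA, ALPHA = 0.9, 0.1

rng = np.random.default_rng(0)
# One Q-table per agent, indexed by (state, action of agent 1, action of agent 2).
Q = [np.zeros((N_STATES, N_ACTIONS, N_ACTIONS)) for _ in range(2)]

def pure_nash(q1, q2):
    """Find a pure-strategy Nash equilibrium (a1, a2) of the stage game with
    payoff matrices q1, q2 by enumeration; if none exists, fall back to the
    joint action maximizing the payoff sum (a common practical shortcut --
    the paper's equilibrium selection may differ)."""
    for a1 in range(N_ACTIONS):
        for a2 in range(N_ACTIONS):
            if q1[a1, a2] >= q1[:, a2].max() and q2[a1, a2] >= q2[a1, :].max():
                return a1, a2
    return np.unravel_index(np.argmax(q1 + q2), q1.shape)

def nash_q_update(s, a1, a2, rewards, s_next):
    """One Nash Q-learning step: each agent moves Q_i(s, a1, a2) toward
    r_i + gamma * Q_i(s', nash equilibrium joint action at s')."""
    e1, e2 = pure_nash(Q[0][s_next], Q[1][s_next])
    for i, r_i in enumerate(rewards):
        nash_value = Q[i][s_next, e1, e2]
        Q[i][s, a1, a2] += ALPHA * (r_i + GAMMA * nash_value - Q[i][s, a1, a2])

# Illustrative trial-and-error loop against a random placeholder environment.
s = 0
for _ in range(500):
    a1 = int(rng.integers(N_ACTIONS))
    a2 = int(rng.integers(N_ACTIONS))
    # Toy reward: higher rate earns more, but a large joint rate is
    # penalized, standing in for congestion-induced packet loss.
    penalty = 2.0 if a1 + a2 >= 4 else 0.0
    rewards = (a1 - penalty, a2 - penalty)
    s_next = int(rng.integers(N_STATES))
    nash_q_update(s, a1, a2, rewards, s_next)
    s = s_next
```

In the actual controller, the environment step would come from the network (queue lengths, loss events) rather than a random generator, and the equilibrium computation would follow the paper's formulation.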
Date of Conference: 10-12 June 2009
Date Added to IEEE Xplore: 10 July 2009
Conference Location: St. Louis, MO, USA
