
An adaptive backoff selection scheme based on Q-learning for CSMA/CA


Abstract

In dynamic wireless networks, nodes move across large-scale spaces and encounter diverse communication scenarios, including varying network traffic and unpredictable link-state changes. Optimizing the multi-user access mechanism across these scenarios to maximize aggregate throughput remains a practically important and challenging problem, and calls for an efficient method that predicts channel conditions and adapts to different communication environments in real time. In this paper, we propose a novel Q-learning based MAC protocol that uses an intelligent backoff selection scheme to make adaptive decisions by evaluating rewards and variable learning parameters. We further propose an efficient channel observation scheme that assesses channel states more accurately in different communication environments and thereby improves real-time decision-making. Simulations on two typical wireless networks, i.e., wireless local area networks with dense users (infrastructure networks) and mobile ad hoc networks with changing topologies (infrastructureless networks), show that the proposed protocol achieves significant improvements in both aggregate throughput and packet loss rate, with strong environmental adaptability.
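The protocol details are not reproduced on this page, but the following is a minimal sketch of how a tabular Q-learning backoff (contention-window) selector of this general kind might be structured. The contention-window candidates, the busy-ratio channel state, the reward convention, and all names used here are illustrative assumptions for exposition, not the authors' actual design.

```python
import random

# Hypothetical candidate contention-window sizes (powers of two, as in 802.11 DCF).
CW_CANDIDATES = [16, 32, 64, 128, 256, 512, 1024]


class QBackoffAgent:
    """Tabular Q-learning agent that picks a contention window per transmission attempt."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        # One Q-value per (channel-state, CW) pair; missing entries default to 0.0.
        self.q = {}

    def select_cw(self, state):
        """Epsilon-greedy selection of a contention window for the observed channel state."""
        if random.random() < self.epsilon:
            return random.choice(CW_CANDIDATES)
        return max(CW_CANDIDATES, key=lambda cw: self.q.get((state, cw), 0.0))

    def update(self, state, cw, reward, next_state):
        """Standard one-step Q-learning update of the chosen (state, CW) pair."""
        old = self.q.get((state, cw), 0.0)
        best_next = max(self.q.get((next_state, c), 0.0) for c in CW_CANDIDATES)
        self.q[(state, cw)] = old + self.alpha * (reward + self.gamma * best_next - old)


def busy_ratio_bucket(busy_slots, total_slots, buckets=4):
    """Coarse channel-observation state: fraction of recently observed slots sensed busy."""
    ratio = busy_slots / max(total_slots, 1)
    return min(int(ratio * buckets), buckets - 1)
```

In a simulation loop, a node would call select_cw(state) before each transmission attempt, draw a uniform backoff from [0, cw), and after observing the outcome call update(state, cw, reward, next_state), for example with a positive reward for a successful transmission and a negative reward for a collision.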


Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.


Acknowledgements

This work was jointly supported by the Innovation Program of Shanghai Municipal Education Commission of China (No. 2021-01-07-00-10-E00121).

Author information


Corresponding author

Correspondence to Zhichao Zheng.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zheng, Z., Jiang, S., Feng, R. et al. An adaptive backoff selection scheme based on Q-learning for CSMA/CA. Wireless Netw 29, 1899–1909 (2023). https://doi.org/10.1007/s11276-023-03257-0
