A Transmission Design via Reinforcement Learning for Delay-Aware V2V Communications

  • Conference paper
Communications and Networking (ChinaCom 2020)

Abstract

We investigate machine-learning-based cross-layer energy-efficient transmission design for vehicular communication systems. A typical vehicle-to-vehicle (V2V) communication scenario is considered, in which the source intends to deliver two types of messages to the destination to support different safety-related applications. The first type is periodically generated heartbeat messages, which should be transmitted immediately with sufficient reliability. The second type is randomly arriving sensing messages, which are expected to be transmitted within a limited latency. Due to node mobility, accurate instantaneous channel knowledge is hard to attain at the transmitter side in practice, so the channel state information at the transmitter (CSIT) often exhibits a certain delay. We propose a transmission strategy based on deep reinforcement learning, such that the unknown channel variation dynamics can be learned and the transmission power and rate can be adaptively chosen according to the message delay status to achieve high energy efficiency. The advantages of our method over several conventional and heuristic approaches are demonstrated through computer simulations.
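The paper itself contains no code, but the idea in the abstract can be made concrete with a minimal sketch. The hypothetical DQN-style agent below (the state features, action values, and reward weights are illustrative assumptions, not taken from the paper) observes the delayed channel gain together with the delay status of the two message types, and selects a discrete (power, rate) pair to trade energy consumption against deadline violations:

```python
# Minimal sketch of a DQN-style transmission agent (illustrative, not the
# authors' exact design). The agent observes delayed CSI plus the delay
# status of both message types and picks a discrete (power, rate) action.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

# Hypothetical discrete action set: transmit power (W) x spectral efficiency.
POWERS = [0.05, 0.1, 0.2, 0.4]
RATES = [1.0, 2.0, 4.0]
ACTIONS = [(p, r) for p in POWERS for r in RATES]

STATE_DIM = 3  # e.g. [delayed channel gain, heartbeat deadline slack, sensing-queue delay]

def make_qnet():
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, len(ACTIONS)),
    )

q, q_target = make_qnet(), make_qnet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s') tuples
GAMMA, EPS = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice over the discrete (power, rate) pairs."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q(torch.as_tensor(state, dtype=torch.float32)).argmax())

def reward(power, delivered_rate, deadline_missed):
    """Hypothetical shaping: delivered bits minus energy cost and a miss penalty."""
    return delivered_rate - 10.0 * power - 50.0 * float(deadline_missed)

def train_step(batch_size=64):
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s = torch.as_tensor(np.array(s), dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.as_tensor(r, dtype=torch.float32)
    s2 = torch.as_tensor(np.array(s2), dtype=torch.float32)
    with torch.no_grad():  # bootstrap target from the slowly-updated copy
        target = r + GAMMA * q_target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q(s).gather(1, a).squeeze(1), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A periodic `q_target.load_state_dict(q.state_dict())` sync and epsilon annealing would complete a standard DQN training loop; the essential point is that feeding the delayed CSI and the per-message delay status into the state lets the agent learn the channel variation dynamics implicitly, as the abstract describes.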



Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grant 61771343 and the Intelligent Connected Vehicle Pilot Demonstration Project under Grant 2019B090912002.

Author information

Correspondence to Chao Wang.


Copyright information

© 2021 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper


Cite this paper

Yu, S., Qu, N., Zhang, Y., Wang, C., Liu, F. (2021). A Transmission Design via Reinforcement Learning for Delay-Aware V2V Communications. In: Gao, H., Fan, P., Wu, J., Xiaoping, X., Yu, J., Wang, Y. (eds) Communications and Networking. ChinaCom 2020. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 352. Springer, Cham. https://doi.org/10.1007/978-3-030-67720-6_42


  • DOI: https://doi.org/10.1007/978-3-030-67720-6_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67719-0

  • Online ISBN: 978-3-030-67720-6

  • eBook Packages: Computer Science (R0)
