Research Article
DOI: 10.1145/3583780.3615072

Target-Oriented Maneuver Decision for Autonomous Vehicle: A Rule-Aided Reinforcement Learning Framework

Published: 21 October 2023

Abstract

Autonomous driving systems (ADSs) have the potential to revolutionize transportation by improving traffic safety and efficiency. As the core component of an ADS, the maneuver decision module makes tactical decisions to accomplish road following, obstacle avoidance, and efficient driving. In this work, we consider a typical but rarely studied task, called Target-Lane-Entering (TLE), in which an autonomous vehicle should enter a target lane before reaching an intersection to ensure a smooth transition to another road. In navigation-assisted autonomous driving, a maneuver decision module chooses the optimal timing to enter the target lane in each road section, thus avoiding rerouting and reducing travel time. To accomplish the TLE task, we propose a ruLe-aided reINforcement lEarning framework, called LINE, which combines the advantages of an RL-based policy and a rule-based strategy, allowing the autonomous vehicle to make target-oriented maneuver decisions. Specifically, an RL-based policy with a hybrid reward function makes safe, efficient, and comfortable decisions while accounting for target-lane factors, and a rule-revision strategy then helps the policy learn from interventions and blocks the risk of missing the target lane. Extensive experiments based on the SUMO simulator confirm the effectiveness of our framework: LINE achieves state-of-the-art driving performance with a task success rate above 95%.
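To make the framework's two ingredients concrete, the following minimal Python sketch shows one way a rule-revision step could override an RL policy's maneuver near an intersection, together with a hybrid reward combining safety, efficiency, comfort, and target-lane terms. Every name and number here (rule_revision, SAFE_DISTANCE_M, the discrete action labels, the reward weights) is an illustrative assumption, not the paper's actual design; returning an intervention flag so the policy can be penalized when the rule overrides it is one plausible reading of "learning from intervention".

# Illustrative sketch only: action names, thresholds, and reward weights
# are assumptions, not the paper's actual design.
# Convention in this sketch: lane index 0 is the leftmost lane.

SAFE_DISTANCE_M = 120.0  # assumed distance at which the rule may intervene

def rule_revision(rl_action, lane_index, target_lane, dist_to_intersection):
    """Revise the RL policy's maneuver to avoid missing the target lane.

    Returns the (possibly overridden) action and a flag marking whether
    the rule intervened, so the policy can be penalized for interventions.
    """
    # Far from the intersection, or already on the target lane: trust the policy.
    if dist_to_intersection > SAFE_DISTANCE_M or lane_index == target_lane:
        return rl_action, False
    # Otherwise force a lane change toward the target lane.
    forced = "change_left" if target_lane < lane_index else "change_right"
    return forced, forced != rl_action

def hybrid_reward(collision, speed, desired_speed, jerk, on_target_lane, intervened):
    """Assumed hybrid reward: safety + efficiency + comfort + target-lane terms."""
    r_safe = -10.0 if collision else 0.0                 # safety: penalize collisions
    r_eff = -abs(speed - desired_speed) / desired_speed  # efficiency: track desired speed
    r_comf = -0.1 * abs(jerk)                            # comfort: penalize jerk
    r_target = 1.0 if on_target_lane else -0.2           # target-lane factor
    r_rule = -1.0 if intervened else 0.0                 # learn from intervention
    return r_safe + r_eff + r_comf + r_target + r_rule

# Usage: the policy keeps its lane 80 m before the intersection while
# two lanes away from the target lane, so the rule intervenes.
action, intervened = rule_revision("keep_lane", lane_index=3, target_lane=1,
                                   dist_to_intersection=80.0)
print(action, intervened)  # -> change_left True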

Cited By

  • (2024) Parameterized Decision-Making with Multi-Modality Perception for Autonomous Driving. 2024 IEEE 40th International Conference on Data Engineering (ICDE), 4463-4476. DOI: 10.1109/ICDE60146.2024.00340. Online publication date: 13-May-2024.
  • (2024) MODUS: An Impact-Aware Decision Framework with Adaptive Fusion for Connected Autonomous Vehicles. Database Systems for Advanced Applications, 53-69. DOI: 10.1007/978-981-97-5575-2_4. Online publication date: 2-Sep-2024.
  • (2024) Imitation Learning Decision with Driving Style Tuning for Personalized Autonomous Driving. Database Systems for Advanced Applications, 220-231. DOI: 10.1007/978-981-97-5575-2_15. Online publication date: 2-Sep-2024.

Published In

CIKM '23: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management
October 2023, 5508 pages
ISBN: 9798400701245
DOI: 10.1145/3583780

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. autonomous driving
2. maneuver decision
3. reinforcement learning

Qualifiers

• Research-article

Conference

CIKM '23

Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
