Abstract
Demand-Responsive Transport (DRT) is an innovative mode of public transportation that focuses on individual passenger needs by offering customized transportation solutions. Most prior research relies on historical passenger flow to generate static schemes and does not optimize optional dynamic demands from the perspectives of passengers and transportation agencies simultaneously. This paper therefore addresses the dynamic scheduling optimization problem of DRT under mixed demand, minimizing overall system costs while ensuring equitable passenger waiting times. We first construct a bi-objective optimization model for DRT dynamic scheduling. We then propose the Action-Refinement Multi-Agent Dueling Double Deep Q-Network (AR-MAD3QN) algorithm to tackle the challenge of simultaneously optimizing routes for a fleet of vehicles that serves both static and optional dynamic passenger demands under dynamic road conditions. The action-refinement module improves the network structure of MAD3QN by preventing the generation of invalid and unstable actions, which in turn improves training efficiency. Experiments are conducted on the Sioux Falls network, comparing AR-MAD3QN against baseline algorithms in different settings. The results show that AR-MAD3QN achieves superior optimization with faster and more stable convergence.
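The abstract gives no implementation details, but the core of the action-refinement idea, keeping each agent from emitting infeasible routing actions, can be illustrated with a small sketch. Below is a minimal PyTorch example of a dueling Q-head combined with invalid-action masking before greedy action selection; the network sizes, the way the validity mask is built, and every identifier are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of invalid-action masking on a dueling Q-network, the general
# idea behind an "action-refinement" step. All sizes and names are illustrative
# assumptions, not the paper's code.
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, num_actions)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)                      # (batch, 1)
        a = self.advantage(h)                  # (batch, num_actions)
        return v + a - a.mean(dim=1, keepdim=True)


def refine_and_select(q_net: DuelingQNet,
                      state: torch.Tensor,
                      valid_mask: torch.Tensor) -> torch.Tensor:
    """Mask out invalid actions (e.g. road links a vehicle cannot take from its
    current node) before the greedy argmax, so no infeasible action is chosen."""
    q_values = q_net(state)
    q_values = q_values.masked_fill(~valid_mask, float("-inf"))
    return q_values.argmax(dim=1)


if __name__ == "__main__":
    net = DuelingQNet(state_dim=10, num_actions=5)
    obs = torch.randn(2, 10)                         # two agents' observations
    mask = torch.tensor([[1, 1, 0, 0, 1],
                         [0, 1, 1, 0, 0]], dtype=torch.bool)
    print(refine_and_select(net, obs, mask))         # indices of valid actions only
```

In a full double-DQN setup, the same mask would presumably also be applied when the online network selects the bootstrap action for the target computation, so that invalid actions never leak into the learning targets.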
Acknowledgements
This work was supported by National Key Research and Development Program of China (2021YFB1714300), National Natural Science Foundation of China (62233005, 62136003), Fundamental Research Funds for the Central Universities (222202417006) and Shanghai AI Lab.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Wang, J., Li, Y., Sun, Q., Tang, Y. (2024). Demand-Responsive Transport Dynamic Scheduling Optimization Based on Multi-agent Reinforcement Learning Under Mixed Demand. In: Wand, M., Malinovská, K., Schmidhuber, J., Tetko, I.V. (eds) Artificial Neural Networks and Machine Learning – ICANN 2024. ICANN 2024. Lecture Notes in Computer Science, vol 15019. Springer, Cham. https://doi.org/10.1007/978-3-031-72341-4_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72340-7
Online ISBN: 978-3-031-72341-4