
Demand-Responsive Transport Dynamic Scheduling Optimization Based on Multi-agent Reinforcement Learning Under Mixed Demand

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2024 (ICANN 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15019)


Abstract

Demand-Responsive Transport (DRT) is an innovative mode of public transportation that serves individual passenger needs by offering customized transportation solutions. Most prior research relies on historical passenger flow to generate static schemes and does not simultaneously optimize optional dynamic demands from the perspectives of both passengers and transportation agencies. This paper therefore addresses the dynamic scheduling optimization problem of DRT under mixed demand, minimizing overall system costs while ensuring equitable passenger waiting times. We first construct a dual-objective optimization model for DRT dynamic scheduling. We then propose the Action-Refinement Multi-Agent Dueling Double Deep Q-Network (AR-MAD3QN) algorithm to tackle the challenge of simultaneously optimizing routes for a fleet of vehicles, considering both static and optional dynamic passenger demands under dynamic road conditions. The action-refinement module improves the network structure of MAD3QN by preventing the generation of invalid and unstable actions, which improves training efficiency. Experiments are conducted on the Sioux Falls network, comparing AR-MAD3QN against baseline algorithms in different settings. The results show that AR-MAD3QN achieves superior optimization with faster and more stable convergence.
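The abstract does not detail how action refinement is implemented; a common way to prevent invalid actions in a dueling Q-network is action masking, in which Q-values of infeasible actions (e.g., unreachable next stops) are suppressed before action selection. The sketch below illustrates this idea only; the function names and the toy numbers are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def dueling_q_values(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    return value + advantages - advantages.mean()

def refine_actions(q_values, valid_mask):
    # Illustrative "action refinement" via masking: infeasible actions
    # receive -inf so they can never be selected by argmax.
    return np.where(valid_mask, q_values, -np.inf)

# Toy example: 4 candidate next stops for one vehicle; stop 2 is
# unreachable under current road conditions (hypothetical numbers).
v = 1.0
adv = np.array([0.5, -0.2, 3.0, 0.1])
q = dueling_q_values(v, adv)

mask = np.array([True, True, False, True])
best = int(np.argmax(refine_actions(q, mask)))
# Without the mask, argmax would pick the invalid stop 2; with it,
# the best feasible stop (index 0) is chosen instead.
```

In a multi-agent setting, each vehicle agent would apply such a mask derived from its own feasibility constraints before selecting an action.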



Acknowledgements

This work was supported by National Key Research and Development Program of China (2021YFB1714300), National Natural Science Foundation of China (62233005, 62136003), Fundamental Research Funds for the Central Universities (222202417006) and Shanghai AI Lab.

Author information


Corresponding author

Correspondence to Yang Tang .


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, J., Li, Y., Sun, Q., Tang, Y. (2024). Demand-Responsive Transport Dynamic Scheduling Optimization Based on Multi-agent Reinforcement Learning Under Mixed Demand. In: Wand, M., Malinovská, K., Schmidhuber, J., Tetko, I.V. (eds) Artificial Neural Networks and Machine Learning – ICANN 2024. ICANN 2024. Lecture Notes in Computer Science, vol 15019. Springer, Cham. https://doi.org/10.1007/978-3-031-72341-4_24


  • DOI: https://doi.org/10.1007/978-3-031-72341-4_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72340-7

  • Online ISBN: 978-3-031-72341-4

  • eBook Packages: Computer Science (R0)
