DOI: 10.1145/3421537.3421547

Research Article

Context-Aware Optimal Charging Distribution using Deep Reinforcement Learning

Published: 5 October 2020

ABSTRACT

The expansion of charging infrastructure and the optimal utilization of existing infrastructure are key factors for the future growth of electric mobility. The main objective of this paper is to present a novel methodology that identifies the necessary stakeholders, processes their contextual information, and meets their optimality criteria using a constraint-satisfaction strategy. A deep reinforcement learning algorithm is used to optimally distribute electric vehicle charging resources in a smart-mobility ecosystem. The algorithm performs context-aware, constrained optimization such that the on-demand requests of each stakeholder, e.g., the vehicle owner as end user, the grid operator, the fleet operator, and the charging-station service operator, are fulfilled. In the proposed methodology, the system learns from the surrounding environment until the optimal charging resource allocation strategy is reached within the limits of the system constraints. We look at the concept of optimality from the perspective of the multiple stakeholders who participate in the smart-mobility ecosystem. A simple use case is presented in detail. Finally, we discuss the potential to develop this concept further to enable more complex digital interactions between the actors of a smart-mobility ecosystem.
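The core idea of the abstract, an agent that learns to allocate charging resources while a constraint-violation penalty steers it toward stakeholder requirements, can be illustrated with a deliberately minimal sketch. This is not the authors' implementation: it replaces the deep network with a tabular, contextual epsilon-greedy learner, and every name and number below (`STATIONS`, `CAPACITY`, `DISTANCE`, the penalty weight) is a hypothetical stand-in.

```python
import random

# Minimal sketch (assumptions, not the paper's system): a contextual bandit
# that learns where to route a charging request. The "context" is which
# stations are currently full; the reward mixes an end-user criterion
# (distance) with a constraint penalty standing in for grid/station limits.

STATIONS = 3
CAPACITY = [2, 1, 2]          # hypothetical free slots per charging station
DISTANCE = [1.0, 0.2, 0.8]    # hypothetical routing cost to each station

def reward(station, load):
    """Stakeholder trade-off: prefer a nearby station (end user), but
    penalize allocations that violate a capacity constraint (grid/CSO)."""
    r = -DISTANCE[station]
    if load[station] >= CAPACITY[station]:
        r -= 5.0              # constraint-violation penalty
    return r

def train(episodes=5000, eps=0.1, alpha=0.2, seed=0):
    rng = random.Random(seed)
    q = {}                    # Q-values per context (which stations are full)
    for _ in range(episodes):
        load = [rng.randint(0, c) for c in CAPACITY]   # observed occupancy
        context = tuple(l >= c for l, c in zip(load, CAPACITY))
        qs = q.setdefault(context, [0.0] * STATIONS)
        if rng.random() < eps:                         # epsilon-greedy explore
            a = rng.randrange(STATIONS)
        else:
            a = max(range(STATIONS), key=qs.__getitem__)
        qs[a] += alpha * (reward(a, load) - qs[a])     # incremental Q update
    return q
```

With these toy numbers, the learned policy routes a vehicle to the nearest station when the context says it has free slots and falls back to the next-best station when it is full; the paper's method replaces this toy table with a deep network over a much richer, multi-stakeholder context.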


Published in: BDIOT '20: Proceedings of the 2020 4th International Conference on Big Data and Internet of Things, August 2020, 108 pages. ISBN: 9781450375504. DOI: 10.1145/3421537. Copyright © 2020 ACM.


Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article (refereed limited)

Overall acceptance rate: 75 of 136 submissions, 55%
