DOI: 10.1145/3520304.3534054

AbstractSwarm multi-agent logistics competition entry: QPlus

Published: 19 July 2022

ABSTRACT

The AbstractSwarm framework [1] was developed to study multi-agent simulation for optimizing logistics scenarios, with a special focus on (but not restricted to) hospital logistics. This paper presents the basics of a solution to the accompanying competition [2] and its benchmark problems. Q-learning [5, 7, 8] is used to determine scores for the possible actions an agent can take, depending on the state of the AbstractSwarm graph representing the environment. Every action receives multiple such scores, which are in turn aggregated; hence the name "QPlus".
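As an illustration of the aggregation idea described above, the sketch below runs one tabular Q-learner [5, 8] per state feature and sums the resulting per-action scores at decision time. This is a minimal sketch under assumptions: the feature encoding of the graph state, the reward signal, the sum as aggregation operator, and all function names are chosen for this example and are not taken from the paper.

    import random
    from collections import defaultdict

    # Hedged sketch of "QPlus"-style score aggregation: one Q-table per
    # state feature; an action's overall score is the sum of its
    # per-feature Q-values ("Q plus Q plus ..."). The features, reward,
    # and sum aggregation are illustrative assumptions, not the paper's
    # actual design.

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    # q[feature_name][(feature_value, action)] -> learned score
    q = defaultdict(lambda: defaultdict(float))

    def score(features, action):
        """Aggregate the per-feature Q-values for one action."""
        return sum(q[name][(value, action)] for name, value in features.items())

    def choose_action(features, actions):
        """Epsilon-greedy selection over the aggregated scores."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: score(features, a))

    def update(features, action, reward, next_features, next_actions):
        """One standard Q-learning backup per feature table."""
        for name, value in features.items():
            best_next = max(
                (q[name][(next_features[name], a)] for a in next_actions),
                default=0.0,
            )
            old = q[name][(value, action)]
            q[name][(value, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

For example, an agent whose graph state is summarized by hypothetical features such as {"queue_length": 3, "station_type": "ward"} would call choose_action over the stations currently reachable in the graph and update after observing the simulator's reward; each feature table learns on its own coarse abstraction of the state, and only the aggregated score drives the decision.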

References

  1. Daan Apeldoorn. 2013. AbstractSwarm – A Generic Graphical Modeling Language for Multi-Agent Systems. In Multiagent System Technologies, Vol. 8076. Springer Berlin Heidelberg, 180–192.
  2. Daan Apeldoorn, Alexander Dockhorn, Lars Hadidi, and Torsten Panholzer. AbstractSwarm Competition. https://abstractswarm.gitlab.io/abstractswarm_competition/ Accessed 2022-04-09.
  3. T. Kohonen. 1990. The self-organizing map. Proceedings of the IEEE 78, 9 (1990), 1464–1480.
  4. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–533.
  5. Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction. MIT Press.
  6. Gerald Tesauro. 2004. Extending Q-learning to general adaptive multi-agent systems. In Advances in Neural Information Processing Systems, Vol. 16.
  7. C. J. C. H. Watkins. 1989. Learning from Delayed Rewards. Ph.D. Dissertation. King's College, University of Cambridge. https://www.academia.edu/3294050/Learning_from_delayed_rewards
  8. Christopher J. C. H. Watkins and Peter Dayan. 1992. Q-learning. Machine Learning 8, 3–4 (1992), 279–292.

Published in

GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion, July 2022, 2395 pages
ISBN: 978-1-4503-9268-6
DOI: 10.1145/3520304

      Copyright © 2022 Owner/Author


Publisher

Association for Computing Machinery, New York, NY, United States


Qualifiers: abstract

      Acceptance Rates

Overall Acceptance Rate: 1,669 of 4,410 submissions, 38%
