
The Flisvos-2017 multi-agent system

Annals of Mathematics and Artificial Intelligence

Abstract

This paper presents the workings of the Flisvos-2017 multi-agent system that participated in the Multi-Agent Programming Contest MAPC 2017 of Clausthal TU.




Appendix: Questionnaire by the Organizers


1.1 Participants and their background

What was your motivation to participate in the contest?

I had participated in similar ACM Queue ICPC challenges in the past, which always excited me, and while reading about Clausthal TU on Wikipedia in 2016 I saw a mention of this contest and decided on a whim to participate! Since then I hope to participate every year, as I find this contest interesting.

What is the history of your group? (course project, thesis, ...)

So far the group has been just one person.

What is your field of research? Which work therein is related?

I am not a professional researcher, but a professional computer scientist working in industry who tries new things.

1.2 The cold hard facts

How much time did you invest in the contest (for programming, organizing your group, other)?

A good estimate, made afterwards, is about 120 man-hours.

How many lines of code did you produce for your final agent team?

The final code base is about 4950 true lines (excluding comments and blank lines). A certain portion (about 600 lines) is code for features that were not developed much and were either unused in the end or used in only a few matches. Another portion (about 350 lines) is header code for imports from other modules, kept in order to have a clear view of module dependencies. A large portion (very difficult to estimate) consists of logging statements, most of which take at least 3 code lines each due to statement continuation.


How many people were involved?

One.

When did you start working on your agents?

I started in late April 2017, studying the new scenario and slowly modifying the code from the previous contest. The work was not consistent, with gaps of up to 3 weeks between development days. In August the pace became steadier, and 3 days before the contest there was finally a first working version!

1.3 Strategies and details

What is the main strategy of your agent team?

It is twofold: select the most profitable jobs for which all ingredient items (base or assembled) are immediately available (either on board or buyable by agents already in a shop) and complete them as fast as possible, and then spread any remaining inactive agents over shops so that they are ready for future jobs. Occasionally agents buy items preemptively in order to build a cache of base items or to starve the other team (this was not explored adequately).

How does the team work together? (coordination, information sharing, …)

There is no specific coordination programmed; rather, common code (a common mind) is run that makes decisions for all agents. Coordination is implicit: each agent predicts what the other agents will do (because every agent runs the same algorithm, which plans actions for all agents), and agent work is arbitrated by rank.
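
To make the idea concrete, here is a minimal sketch (written in Python purely for illustration; the function names, the toy decision rule, and the data layout are assumptions of this example, not the actual Flisvos-2017 code) of agents that each run the same deterministic planner over the shared percept and execute only their own slot, with conflicts arbitrated by rank:

    def decide_action(percept, rank):
        # Toy decision rule: agents claim pending tasks in rank order.
        tasks = percept.get("pending_tasks", [])
        return tasks[rank] if rank < len(tasks) else "skip"

    def plan_for_team(percept, agent_names):
        # Deterministic iteration order (sorted names define the ranks) makes
        # every agent compute exactly the same team-wide plan without messaging.
        return {name: decide_action(percept, rank)
                for rank, name in enumerate(sorted(agent_names))}

    def my_next_action(percept, my_name, agent_names):
        # Each agent plans for the whole team but acts only on its own part.
        return plan_for_team(percept, agent_names)[my_name]

    percept = {"pending_tasks": ["deliver job11", "buy item3 at shop2"]}
    print(my_next_action(percept, "agentA2", ["agentA1", "agentA2", "agentA3"]))
    # -> "buy item3 at shop2" (agentA1 takes "deliver job11", agentA3 skips)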

What are critical components of your team?

The correct working of the code that implements the strategy described above.

Can your agents change their behavior during runtime? If so, what triggers the changes?

There is no specific reflex behavior beyond the planned sequences of actions used to explore shops and to assemble and deliver for a job. The job actions are dropped when the job is no longer active, and the agents become available to be considered for new actions.


Did you have to make changes to the team during the contest?

No, but some parameters were changed and a module was enabled in some matches (explore_buys_forced/explore_buys).

How do you organize your agents? Do you use e.g. hierarchies? Is your organization implicit or explicit?

There are no hierarchies. All agents are capable of doing the same tasks.


Is most of your agents’ behavior emergent on an individual or team level?

Most of the behavior is emergent on a team level.

If your agents perform some planning, how many steps do they plan ahead?

Planning is done per task. The longest plan is for an assembly task started inside a shop: a buy sequence, recharge, goto a charging station, charge, goto an assembly workshop, and an assemble/assist_assemble sequence, for a total of 9+ actions (with 4 buys), where each action takes one or more simulation steps.
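
As a concrete illustration of such a plan (a Python sketch with hypothetical helper names; the action names follow the MAPC 2017 scenario, but the data layout is an assumption of this example):

    def assembly_task_plan(items_to_buy, charging_station, workshop, assemblies):
        # Buy sequence while already inside a shop (e.g. 4 buys).
        plan = [("buy", item, qty) for item, qty in items_to_buy]
        # Top up, travel to a charging station, charge, then go to the workshop.
        plan += [("recharge",), ("goto", charging_station), ("charge",),
                 ("goto", workshop)]
        # One agent assembles, the others assist.
        plan += [("assemble", item) if lead else ("assist_assemble", item)
                 for item, lead in assemblies]
        return plan  # 9+ actions for 4 buys; each may span several simulation steps

    print(assembly_task_plan(
        items_to_buy=[("item3", 2), ("item5", 1), ("item7", 1), ("item8", 1)],
        charging_station="chargingStation4",
        workshop="workshop2",
        assemblies=[("item11", True)]))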

If you have a perceive-think-act cycle, how is it synchronized with the server?

Perceive starts with the request-action message and its percept payload; think and act then follow within the allowed time limit (artificially restricted to 1.5 s), and the cycle repeats.
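
A minimal sketch of this cycle (in Python; the message-handling helpers are passed in as parameters and are assumptions of this example, not the actual protocol code):

    import time

    THINK_BUDGET = 1.5  # seconds; self-imposed limit below the server deadline

    def agent_loop(receive_message, send_action, decide):
        while True:
            msg = receive_message()              # perceive: wait for the server
            if msg.get("type") != "request-action":
                continue                          # e.g. sim-start / sim-end / bye
            deadline = time.monotonic() + THINK_BUDGET
            action = decide(msg["percept"], deadline)  # think within the budget
            send_action(msg["id"], action)             # act before the deadline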

1.4 Scenario specifics

How do your agents decide which jobs to fulfill?

The system considers only jobs for which all items necessary to complete them are available (either items delivered directly or items used in assembly to produce job items). From these, the most promising jobs are selected using a few criteria: efficacy (whether the job can be assembled with items the agents have on board or can buy, and delivered before the job expires), benefit (reward minus buy cost), and the number of already assembled items that can be reused for job fulfillment (items that were made but became unused when previous jobs expired before they could be completed). [see procedure new_jobs]
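
A hypothetical sketch of such a ranking (in Python; the exact weighting used in procedure new_jobs is not stated here, so the ordering, field names, and sample values below are purely illustrative):

    def rank_jobs(jobs, current_step):
        def efficacy(job):
            # Can the job be assembled and delivered before it expires?
            return current_step + job["steps_to_finish"] <= job["deadline"]
        def score(job):
            benefit = job["reward"] - job["buy_cost"]
            # Prefer reuse of items assembled for earlier, expired jobs,
            # then higher benefit.
            return (job["reused_assembled_items"], benefit)
        return sorted((j for j in jobs if efficacy(j)), key=score, reverse=True)

    jobs = [
        {"name": "job17", "reward": 900, "buy_cost": 420,
         "reused_assembled_items": 0, "steps_to_finish": 40, "deadline": 260},
        {"name": "job18", "reward": 700, "buy_cost": 300,
         "reused_assembled_items": 2, "steps_to_finish": 35, "deadline": 240},
    ]
    print([j["name"] for j in rank_jobs(jobs, current_step=180)])  # ['job18', 'job17']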


Do you have different strategies for the different roles?

No. All vehicle roles function the same.

Do your agents form ad-hoc teams for each job?

No, there is really only a single team. The fact that at certain times a few agents work on the same job does not constitute an ad-hoc team. This may seem counter-intuitive, since some agents must work together for a job; but the participating agents have a fixed plan of actions made for them and do not communicate among themselves for job management, so they do not form a team.

What do your agents do when they do not pursue any job?

They either stay idle inside a shop or move to a shop whose job potential better matches their load capacity, in a way that spreads all agents evenly over all shops. Optionally (if enabled), some agents preemptively buy items so that there is a cache of items ready to use for new jobs (thus avoiding job starvation, or forcing the other team into starvation).
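
A minimal sketch of such a spreading step (Python; the scores, capacities, and names are purely illustrative assumptions of this example):

    def assign_idle_agents(idle_agents, shops):
        """idle_agents: list of (name, load_capacity); shops: list of (name, job_potential)."""
        # Pair the highest-capacity agents with the highest-potential shops,
        # cycling through the shops so the agents end up evenly spread.
        agents = sorted(idle_agents, key=lambda a: a[1], reverse=True)
        ranked = sorted(shops, key=lambda s: s[1], reverse=True)
        return {name: ranked[i % len(ranked)][0]
                for i, (name, _cap) in enumerate(agents)}

    print(assign_idle_agents(
        [("agentA1", 550), ("agentA2", 300), ("agentA3", 100)],
        [("shop1", 9), ("shop2", 4)]))
    # -> {'agentA1': 'shop1', 'agentA2': 'shop2', 'agentA3': 'shop1'}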

How did you go about debugging your system?

By extensive assertion checking and classic logging. Logging was organized into a small main log view, with a per-step summary of team/agent state and of action/planning decisions, and a more extensive log per agent. A desirable feature for this and future contests, but not possible to implement for this one-person team, would be a way to produce an analyzed log file with a human-like interpretation of the actions and the dynamics of the match (dynamics meaning things like speed of job completion, whether there was unnecessary delay in charging or recharging, critical paths, underused agents, etc.); in short, a kind of information fusion that combines data from many steps and agents and provides insight into the shortcomings and advantages of the team implementation. I think this is a research problem in its own right.
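
A small Python sketch of that two-level layout (file names, logger names, and messages are illustrative only, not the actual Flisvos-2017 logging code):

    import logging

    def make_loggers(agent_names):
        main = logging.getLogger("team")            # compact per-step team summary
        main.setLevel(logging.INFO)
        main.addHandler(logging.FileHandler("main.log"))
        per_agent = {}
        for name in agent_names:                    # verbose log for each agent
            lg = logging.getLogger(f"agent.{name}")
            lg.setLevel(logging.DEBUG)
            lg.addHandler(logging.FileHandler(f"{name}.log"))
            per_agent[name] = lg
        return main, per_agent

    main, agent_logs = make_loggers(["agentA1", "agentA2"])
    main.info("step=42 jobs_active=3 agents_idle=12 decision=start job17")
    agent_logs["agentA1"].debug("step=42 action=buy(item3, 2) shop=shop5 charge=180")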

What were prominent questions you would have asked your system during development? (i.e. “why did you just do X?”)

The most prominent question was how inactive agents were exploring shops (their distribution and mobility level, because a moving agent is not ready for a new buy, so the ability to start a new job immediately is reduced).

1.5 And the moral of it is . . .

What did you learn from participating in the contest?

I verified many software engineering teachings, saw how difficult it is to make a 4000+ line code base work properly, and updated my knowledge of many algorithms. I also confirmed for the nth time that there is always a competitor with a similar or identical strategy.

What are the strong and weak points of your team?

The strong point is that it is efficient and adaptable. The weak point is that some design/configuration decisions depend on the simulation parameters (number and distribution of items/facilities, job payoff, etc.). This is a weakness but not an inadequacy, as an agent system is designed for a specific environment.

How viable were your chosen programming language, methodology, tools, and algorithms?

The chosen language was perfect, as it enabled quick development and code refactoring. The fact that it is slower than other common languages was not a problem: it is still fast enough, and when it got slow it forced a review and improvement of the algorithms. The methodology was good, as it could handle the scenario well.


Did you encounter new problems during the contest?

None.

Did playing against other agent teams bring about new insights on your own agents?

Yes, definitely! It is different when actually playing; the mind somehow perceives strategies and pitfalls more clearly.

What would you improve if you wanted to participate in the same contest a week from now (or next year)?

In a week, I would add the capability to handle auction jobs; next year, I would improve the experimental preemptive buying of items.

Which aspect of your team cost you the most time?

Even though a major amount of code was carried over from the previous contest, considerable time was spent on code refactoring and improvements. The refactoring was not necessary for this contest, but this author did not think strategically and is stubbornly insistent on code quality, clarity, self-documenting code, adaptability, and software construction in general; this cost time that could have been used for developing new features and analyzing the dynamics of the contest scenario (e.g., realizing that auction jobs should be handled).

What can be improved regarding the contest/scenario for next year?

For the simulation: a monitor that provides a better view of the state of the match and of the ongoing activities, in separate tiled windows, without the need to click on scenario entities.

For the contest: friendly matches before and after the contest, and a more specific statement of what is allowed/forbidden for the agent implementations and the matches, e.g. restarts, remote control, use of web services, etc.

For the scenario: a single feature relating to a specific research area should be promoted and be the most difficult aspect of the game, while the rest of the scenario should be rather easy to handle (the idea is to promote specific research without making participation very costly for teams).

Why did your team perform as it did? Why did the other teams perform better/worse than you did?

For this team, the result is the cumulative effect of a robust implementation and a good overall design matched to the simulation environment (a design utilizing not the best, but good and effective, solutions). Fast and efficient completion of jobs gave an advantage, but not handling auction jobs was a major disadvantage, combined with the incomplete development of the explore_buys_forced and explore_buys procedures. I also believe that there is a strong element of randomness/luck.


Cite this article

Sarmas, E.I. The Flisvos-2017 multi-agent system. Ann Math Artif Intell 84, 35–56 (2018). https://doi.org/10.1007/s10472-018-9587-9
