1 Introduction

The RoboCup Logistics League focuses on multi-robot coordination through the application of methods from, e.g., automated reasoning, planning, and scheduling. 2016 was a year of stabilization in terms of the rules of the game. The Carologistics team has made improvements to several software components, especially to the basic behaviors and to the domain and behavior modeling for the incremental reasoning agent. In this paper, we report on the specific aspects we consider key to our repeated success. In particular, the Gazebo-based simulation environment developed over the past few years has helped tremendously to realize these improvements while also working on enhancements for functional components in parallel. We also describe our outreach efforts beyond the RCLL, such as establishing a simulation competition at the ICAPS conference with international partners from the planning community.

Fig. 1. Final round of the RoboCup Logistics League 2016 in Leipzig, Germany. Teams Carologistics (laptop on top) and Solidus (water bottle, background) are competing.

Our team has participated in RoboCup 2012–2016 and the RoboCup German Open (GO) 2013–2015. We were able to win the GO 2014 and 2015 as well as RoboCup 2014, 2015, and 2016 (cf. Fig. 1), in particular demonstrating flexible task coordination, robust collision avoidance, and self-localization. We have publicly released our software stack used in 2016, in particular including our high-level reasoning componentsFootnote 1 [1].

This paper is based on last year’s edition [2], highlighting the specific advances and activities towards RoboCup 2016. For a description of the RCLL we refer to [3,4,5]. In Sect. 2 we give an overview of our hardware and software platform. We introduce some specific improvements to functional components in Sect. 3 before describing our behavior components in more detail in Sect. 4. We highlight our simulation in Sect. 5 before giving an overview of our continued contributions to the RCLL in Sect. 6 and concluding in Sect. 7.

2 The Carologistics Platform

The standard robot platform of this league is the Robotino by Festo Didactic [6]. The Robotino is developed for research and education and features omni-directional locomotion, a gyroscope and webcam, infrared distance sensors, and bumpers. The teams may equip the robot with additional sensors and computation devices as well as a gripper device for product handling [2].

Fig. 2. Carologistics Robotino 2015/2016.

2.1 Hardware System

The robot system currently in use is based on the Robotino 3. The modified Robotino used by the Carologistics RoboCup team is shown in Fig. 2 and features two additional webcams, a RealSense depth camera and a Sick laser range finder. The webcam on top of the robot is used to recognize the machine signal lights, the one attached to the pillar of the robot is used to identify machine markers, and the depth camera below the robot’s gripper is used to recognize the conveyor belt. We use the Sick TiM571 laser scanner for collision avoidance and self-localization. It has a scanning range of 25 m at a resolution of 1/3\(^\circ \). An additional laptop increases the computation power and allows for more elaborate methods for self-localization, computer vision, and navigation.

Several parts were custom-made for our robot platform. Most notably, a gripper based on Festo fin-ray fingers and 3D-printed parts is used for product handling. It is able to compensate for slight lateral and height offsets using stepper motors for high positioning accuracy. The motors are controlled by an additional Arduino board with a motor shield. They smoothly increase and decrease their speed to avoid positioning errors. As no encoders are attached, a micro switch is used to initialize the lateral axis position.
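The smooth ramping of the stepper speed can be illustrated with a trapezoidal velocity profile, a common technique for avoiding missed steps. The following is a minimal sketch of the idea, not the team's Arduino firmware; the function name and parameters are ours.

```python
# Illustrative sketch (not the actual firmware): a trapezoidal speed ramp for a
# stepper axis. Speed rises with bounded acceleration, cruises at v_max, and
# starts braking once the remaining distance equals the stopping distance.
def ramp_profile(total_steps: float, v_max: float, accel: float, dt: float = 0.001):
    """Return the per-tick speeds (steps/s) for a move of total_steps."""
    speeds = []
    v, pos = 0.0, 0.0
    while pos < total_steps:
        remaining = total_steps - pos
        # brake when the distance needed to stop (v^2 / 2a) reaches the target
        if v * v / (2 * accel) >= remaining:
            v = max(v - accel * dt, 0.0)
        else:
            v = min(v + accel * dt, v_max)
        if v == 0.0:
            v = accel * dt  # keep creeping so the final steps are reached
        pos += v * dt
        speeds.append(v)
    return speeds
```

The ramp never exceeds the configured maximum and the integrated speed matches the commanded distance up to one tick of overshoot.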

2.2 Software Frameworks

The software system of the Carologistics robots combines Fawkes [7] and ROS [8], allowing the use of software components from both systems. The overall system, however, is integrated using Fawkes; ROS is used especially for its 3D visualization capabilities. The overall software structure is inspired by the three-layer architecture [9]. It consists of a deliberative layer for high-level reasoning, a reactive execution layer for breaking down high-level commands and monitoring their execution, and a feedback control layer for hardware access and functional components. The communication between the individual components – implemented as plugins – is realized by a hybrid blackboard and messaging approach [7].

The development is split into a core and domain-specific parts. The core framework is developed in public and has just seen its 1.0 stable release after ten years of development.Footnote 2 The RCLL domain-specific parts are developed in private and have been made available over the past three years. This work was recognized with the International Harting Open Source Award [10].

3 Advances to Functional Software Components

Here, we discuss some advancements made in 2016 to the plethora of different software components required to run a multi-robot system for the RCLL.

3.1 Basic Components

For this year, we have developed a module for direct communication with the Robotino microcontroller, fully bypassing and eliminating the need for OpenRobotino. A major issue was that OpenRobotino has no concept of time, so the age of sensor and odometry data could not be determined once they arrived in our system. Furthermore, a new velocity and acceleration controller has been implemented, resulting in smoother driving.
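The core idea of such a controller can be sketched in a few lines: per control cycle, the commanded velocity may change by at most the maximum acceleration times the cycle duration, which removes the jerky velocity jumps. This is a minimal illustration under our own naming, not the team's implementation.

```python
# Hedged sketch of an acceleration-limited velocity controller: each control
# cycle, clamp the change of the commanded velocity to a_max * dt so the
# platform accelerates and decelerates smoothly instead of jumping.
def limit_velocity(v_current: float, v_target: float, a_max: float, dt: float) -> float:
    max_delta = a_max * dt          # largest allowed velocity change this cycle
    delta = v_target - v_current
    if delta > max_delta:
        delta = max_delta
    elif delta < -max_delta:
        delta = -max_delta
    return v_current + delta
```

Applied per axis (vx, vy, omega on a holonomic platform), this yields ramped motion towards any setpoint.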

3.2 Driving

For several years we have been using a stateless path planner with collision avoidance [11], initially developed for the Middle Size League by the AllemaniACs team, which we ported to the Fawkes framework and adapted to exploit the capabilities of a holonomic platform like the Robotino. Since the rule change introducing the MPS machines in 2015, the playing field has lost a great amount of free space, which is furthermore often occluded by the MPS stations. This forced us to cope with new situations in path planning. While we used only a forward-facing laser in previous years, this season we deployed a second, backward-facing laser, which increased our field of view to 360\(^\circ \). Obstacles thus remain visible longer, resulting in more stable path planning and execution. Furthermore, we can now also drive backwards, which decreased the overall driving time, and especially the time needed to leave an MPS after interfacing with it.

3.3 MPS Detection and Approaching

The MPS stations are detected in two ways: using the tag placed on the machines and using a line-fitting algorithm on the laser data. To approach an MPS during a game, the tag detection is first used to validate the correct machine and for a rough initial alignment. In a second step, the laser lines are used for a more precise alignment, especially regarding the rotation. During the exploration phase, both methods are used concurrently while searching for machines.
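The line-fitting step on the laser data can be illustrated with a total least-squares fit: the flat front of an MPS appears as a roughly collinear cluster of scan points, and its orientation is the principal axis of the point scatter. The sketch below shows this idea with our own function names; the actual component likely includes clustering and outlier handling.

```python
import math

# Illustrative sketch: fit a line to 2D laser points via total least squares.
# The line direction is the principal axis of the point scatter matrix, which
# is one standard way to recover the pose of a flat machine front.
def fit_line(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n          # centroid lies on the fitted line
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points)  # scatter matrix entries
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    # orientation of the principal eigenvector of [[sxx, sxy], [sxy, syy]]
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), angle
```

The returned centroid and angle directly give a target pose offset for the rotational alignment.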

Fig. 3. Vision-based light-signal detection during production (post-processed for legibility) [2].

3.4 Light Signal Vision

A multi-modal perception component for robust detection of the light signal state on the field has been developed specifically for this domain [12]. It limits the search within the image by means of the laser-detected position of the machine, as depicted in Fig. 3. This provides us with higher robustness towards ambiguous backgrounds, for example colored shirts in the audience. Even if the machine cannot be detected, the vision component degrades gracefully by using a geometric search heuristic to identify the signal, losing some of the robustness towards the mentioned disturbances.
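Restricting the search region can be sketched as projecting the laser-detected machine position into the image with a pinhole camera model and searching only a rectangle around it. This is a conceptual illustration; the intrinsics and ROI sizes below are hypothetical values, not the team's calibration.

```python
# A minimal sketch, assuming a pinhole camera model with known intrinsics
# (fx, fy, cx, cy and the ROI half-sizes are hypothetical): project the
# laser-detected signal position into the image to bound the color search.
def project_to_roi(p_cam, fx, fy, cx, cy, half_w=40, half_h=80):
    """p_cam: expected signal position (x, y, z) in the camera frame, z forward."""
    x, y, z = p_cam
    u = fx * x / z + cx                      # standard pinhole projection
    v = fy * y / z + cy
    # rectangle around the expected signal; the caller clips it to the image
    return (int(u - half_w), int(v - half_h), int(u + half_w), int(v + half_h))
```

Anything outside this rectangle, such as colored clothing in the audience, is never considered by the color classifier.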

3.5 Conveyor Belt Detection

The conveyor belts are rather narrow compared to the products and thus require precise handling. The tolerable error margin is in the range of about ±3 mm. The marker on a machine allows determining the lateral offset from the gripper to the conveyor belt. It gives a 3D pose of the marker with respect to the camera and thus the robot. However, this requires a precise calibration of the conveyor belt with respect to the marker. While ideally this would be the same for each machine, in practice there is an offset which would need to be calibrated per station [2]. Therefore, we use the approach described in Sect. 3.3 for a pre-alignment, which is then improved by a new depth-based conveyor detection: a point cloud from an Intel RealSense F200 camera is used to detect the conveyor, as shown in Fig. 4. The point cloud is first pruned to a region of interest using the initial guess of the belt position derived from the machine position detected with the laser scanner. Afterwards, a plane search is performed to determine the precise pose of the front plane of the conveyor belt and its normal.

Fig. 4. Depth-based conveyor belt detection. Left: RGB picture; right: point cloud with detected conveyor belt and its normal.
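A typical way to realize such a plane search is RANSAC: repeatedly fit a plane through three random points of the pruned cloud and keep the plane supported by the most points. The sketch below illustrates this general technique in plain Python; the parameter values are illustrative, and the actual system presumably uses an optimized library implementation.

```python
import random

# Hedged sketch of the plane search: basic RANSAC that finds the dominant
# plane (here, the conveyor front face) in a pruned point cloud and returns
# the plane (unit normal n, offset d with n·p + d = 0) and its inliers.
def ransac_plane(points, iters=200, thresh=0.003, seed=0):
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        # plane normal = cross product of two edge vectors
        n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, try again
        n = [c / norm for c in n]
        d = -sum(n[i] * p1[i] for i in range(3))
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```

The recovered normal is exactly what the gripper alignment needs to approach the belt perpendicularly within the ±3 mm margin.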

Fig. 5. Behavior layer separation [13].

4 High-Level Decision Making and Task Coordination

The behavior generating components are separated into three layers, as depicted in Fig. 5: the low-level processing for perception and actuation, a mid-level reactive layer, and a high-level reasoning layer. The layers are combined following an adapted hybrid deliberative-reactive coordination paradigm.

The robot group needs to cooperate on its tasks, that is, the robots communicate information about their current intentions, acquire exclusive control over resources like machines, and share their beliefs about the current state of the environment. Currently, we employ a distributed, local-scope, and incremental reasoning approach [14]. This means that each robot determines only its own action (local scope) to perform next (incremental) and coordinates with the others through communication (distributed), as opposed to a central instance which plans globally for all robots at the same time or for multi-step plans.
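The resource-coordination idea can be made concrete with a small sketch: each robot decides only its own next action and claims exclusive control over a machine through a shared lock set before committing. This is a conceptual illustration of the scheme, with hypothetical names; the real system negotiates locks over the network rather than through a shared object.

```python
# Conceptual sketch (not the actual protocol): exclusive resource control plus
# local-scope, incremental action selection. Each robot picks only its next
# action and must hold the lock on the involved machine before acting.
class LockTable:
    def __init__(self):
        self.locks = {}  # resource name -> id of the robot holding it

    def acquire(self, resource, robot):
        holder = self.locks.setdefault(resource, robot)
        return holder == robot  # True if we now hold (or already held) the lock

    def release(self, resource, robot):
        if self.locks.get(resource) == robot:
            del self.locks[resource]

def next_action(robot, candidate_tasks, locks):
    """Incremental selection: the first candidate whose machine we can claim."""
    for task, machine in candidate_tasks:
        if locks.acquire(machine, robot):
            return task
    return "wait"  # all needed resources are held by other robots
```

Because each decision covers only one action, a robot that loses communication simply keeps its current locks and continues, which matches the fault-tolerance argument above.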

In the following we describe the reactive and deliberative layers of the behavior components. For computational and energy efficiency, the behavior components need also to coordinate activation of the lower level components.

4.1 Lua-Based Behavior Engine

In previous work we have developed the Lua-based Behavior Engine (BE) [15]. It serves as the reactive layer that interfaces between the low- and high-level systems. The BE is based on hybrid state machines (HSMs). They can be depicted as directed graphs with nodes representing states for action execution and/or monitoring of actuation, perception, and internal state. Edges denote jump conditions implemented as Boolean functions. For the active state of a state machine, all outgoing conditions are evaluated, typically at about 15 Hz. If a condition fires, the active state is changed to the target node of the edge. A table of variables holds information like the world model, for example storing numeric values for object positions. It remedies typical problems of state machines, like a fast-growing number of states or passing variable data from one state to another. Skills are implemented using the light-weight, extensible scripting language Lua.
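The HSM mechanics described above can be condensed into a few lines. The following is a toy sketch in the spirit of the BE (the actual BE is written in Lua and far richer); state names and the example skill are ours.

```python
# Minimal sketch of a hybrid state machine: states, jump conditions evaluated
# each loop, and a shared variable table that replaces per-state data passing.
class HSM:
    def __init__(self, start, final):
        self.state, self.final = start, final
        self.transitions = {}  # state -> list of (condition(vars), target)
        self.vars = {}         # shared variable table (world model snippets)

    def add_transition(self, state, condition, target):
        self.transitions.setdefault(state, []).append((condition, target))

    def loop(self):
        """One evaluation cycle (~15 Hz on the robot): fire first true condition."""
        for condition, target in self.transitions.get(self.state, []):
            if condition(self.vars):
                self.state = target
                break
        return self.state == self.final
```

A hypothetical alignment skill would then be declared as transitions, e.g. DRIVE → ALIGN once `vars["at_machine"]` is set by perception, and ALIGN → DONE once `vars["aligned"]` holds.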

Separating the different behaviors of the robot into small parts, or skills, improves maintainability. These small skills can be composed into larger and more complex behaviors while each part stays a small, maintainable component. This also provides the opportunity to tune the behaviors for specific situations.

Fig. 6. Game time assigned to different skills during a (simulated) example game.

In 2016 we analyzed the time spent in each part of our skills to identify the time-consuming behaviors. We then focused our efforts on improving these skills. Doing so, we decreased the time needed to align at a machine while at the same time increasing the robustness of the alignment. In the example in Fig. 6, we eliminated the time spent in global_motor_move after the sub-skill motor_move returned, as well as the sub-calls to local movement via relgoto (red crossed circles).

4.2 Reasoning and Planning

The problem at hand, with its intertwined world model updating and execution, naturally lends itself to a representation as a fact base with update rules that trigger behavior for certain beliefs. We have chosen the CLIPS rules engine [16] because, with incremental reasoning, the robot can take the next best action at any point in time whenever it is idle. This avoids costly re-planning (as with approaches using classical planners) and allows us to cope with incomplete knowledge about the world. Additionally, it is computationally inexpensive. More details about the general agent design and the CLIPS engine can be found in [17].
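The flavor of this fact-base-with-rules approach can be conveyed with a toy sketch (in Python rather than CLIPS syntax, with made-up facts and saliences): rules match on the current facts, and when the robot is idle the highest-priority applicable rule proposes the next action.

```python
# Toy sketch of the incremental-reasoning idea: a rule is a (salience,
# condition, action) triple over a fact base; whenever the robot is idle,
# the highest-salience rule whose condition matches proposes the next action.
def select_action(facts, rules):
    """facts: set of symbols; rules: list of (salience, condition(facts), action)."""
    applicable = [(sal, act) for sal, cond, act in rules if cond(facts)]
    if not applicable:
        return None  # nothing to do: keep waiting for world model updates
    return max(applicable, key=lambda r: r[0])[1]

# Hypothetical example rules (the real domain model is far larger):
rules = [
    (50, lambda f: "holding-base" in f and "cap-ready" in f, "mount-cap"),
    (10, lambda f: "idle" in f, "fetch-base"),
]
```

Because selection is re-run from the current facts each time, newly arriving information (an order, a failed machine) simply changes which rule fires next, with no plan to repair.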

The agent for 2016 is based on the continued development of our CLIPS-based agent [13]. We have finalized generic world model synchronization capabilities that allow marking specific facts in the fact base to be shared with other robots. While each robot acts as an autonomous agent of its own, a central robot, dynamically determined through leader election, is responsible for generating a consistent view and distributing it to all robots. This way, each agent can still work autonomously if the connection to the other robots is interrupted, and no information is lost if a single robot (or even the leader) fails. Furthermore, we increased the sophistication of our domain model. The robot group robustly produced lower-complexity products during the competition and was able to achieve multiple deliveries per game. There were also partial productions of higher-complexity products. The robots were also able to cooperate on MPS usage, and we could minimize the MPS handover times.

We have evaluated several different possibilities for the implementation of agent programs in the RCLL including CLIPS, OpenPRS, and YAGI [17] and are making efforts towards a centralized global planning system.

5 Multi-robot Simulation in Gazebo

The character of the RCLL game emphasizes research on and application of methods for efficient planning, scheduling, and reasoning about the optimal work order of production processes handled by a group of robots. An aspect that distinctly separates this league from others is that the environment itself acts as an agent by posting orders and controlling the machines’ reactions. This is what we call environment agency. In the RCLL, the large playing field and material costs are prohibitive for teams to set up a complete scenario for testing, let alone to field two teams of robots. Additionally, members of related communities like planning and reasoning might not want to deal with the full software and system complexity, yet they often welcome relevant scenarios to test and present their research. Therefore, we have created an open simulation environment [18, 19] based on GazeboFootnote 3 (see Fig. 7).

Fig. 7. Simulation of the RCLL 2015 with MPS stations [14].

The simulation played a vital role in improving production performance in 2016. Especially the development of a new method of approaching an MPS station meant that experiments on the real robots were not possible during several periods. Using the simulation, the domain designers could continuously work on enhancing the production with new capabilities and on ensuring robust overall action selection (through appropriate encoding and prioritization of the situations to evaluate) and coordination for multiple robots. Moving from simulation to the real robots is facilitated by exchanging the simulation components for hardware-accessing software components that acquire sensor data and command actuation. The middle and higher levels of the behavior separation are agnostic to this and remain unchanged. The simulation even models network packet loss, as is to be expected during RoboCup, to avoid overfitting to an environment that behaves nicer than reality. The simulation has also been used for a fully automated tournament of several different task-level executives [17].

5.1 Logistics Robots Planning Competition at ICAPS

As an outcome of the presentation of the RCLL at the workshop on Planning in Robotics at the International Conference on Automated Planning and Scheduling (ICAPS) in 2015 [14], a planning competition in simulation is being prepared. At ICAPS 2016, a tutorial was held to present the idea, gather feedback, and kickstart interested teams [3]. The particular challenges are to plan efficiently in short time with time-bounded dynamic orders and to provide an effective executive for multi-robot plans. Performing the competition in simulation provides a close alignment with the RCLL. Options have been discussed to perform parts of this competition on real robots in the future. One possibility could be for the winning team of the ICAPS simulation competition to participate in the RCLL, or for the finals to be performed with real robots. The general idea is to foster collaboration and exchange among the planning and robotics communities. The first competition will be held at ICAPS 2017 at Carnegie Mellon University. For the Carologistics, one benefit is the possibility to compare the planning layers currently under development in the RCLL scenario. Further information is available at http://www.robocup-logistics.org/sim-comp.

6 League Advancements and Continued Involvement

We have been active members of the Technical, Organizational, and Executive Committees and have proposed various ground-breaking changes for the league, like merging the two playing fields and using physical processing machines in 2015 [4, 5]. Additionally, we introduced and currently maintain the autonomous referee box for the competition [20] and develop the open simulation environment described above. We have also been a driving factor in the establishment of the RoboCup Industrial umbrella league [5]. It serves to coordinate and bring together the efforts of industrially inspired RoboCup leagues. The first steps are the unification towards a common referee box system (Sect. 6.1) and the introduction of a cross-over challenge (Sect. 6.4).

6.1 RCLL Referee Box and MPS Stations

The autonomous referee box (refbox) was introduced in 2013 by the Carologistics team [4]. With the goal of automating the playing field and easing the workload of the referees, the refbox acts as an agent, thereby creating the smart factory aspect of the RCLL scenario. It creates a randomized game layout and schedule and determines appropriate field reactions based on incoming MPS sensor data and robot communication, using an extensible and flexible knowledge-based system. It provides a strong industrial grounding [20].

For the 2016 season the changes were moderate; for example, a first approach for tracking workpieces through barcode scanners was developed, which will be used in 2017. More importantly, the RCLL refbox serves as the foundation of a common refbox for the new RoboCup Industrial (RC-I) umbrella league.Footnote 4 Based on a common framework, the individual scenarios can be modeled, starting with the RCLL, RoboCup@Work, and the common cross-over challenge (cf. Sect. 6.4).

6.2 Public Release of Full Software Stack

Over the past ten years, we have developed the Fawkes Robot Software Framework [7] as a robust foundation to deal with the challenges of robotics applications in general, and in the context of RoboCup in particular. It has been developed and used in several leagues over the past few years [1] as visible in Fig. 8. Recently, the most active example is in the RoboCup Logistics League [19].

The Carologistics is the first team in the RCLL to publicly release its software stack. Teams in other leagues have made similar releases before. What makes ours unique is that it provides a complete and ready-to-run package with the full software (and some additions and fixes) that we used in the 2015 competition. In particular, this includes the complete task-level executive component, that is, the strategic decision-making and behavior-generating software. This component was typically held back, or only released in small parts, in previous software releases by other teams (in any league).

Fig. 8. Robots running (parts of) Fawkes, which were or are used for the development of the framework and its components [1].

Fig. 9. Participants and the Carologistics at the RCLL Winter School in 2015.

6.3 RoboCup Logistics League Winter School

In December 2015, the Carologistics team organized the week-long RoboCup Logistics League Winter School in Aachen (see Fig. 9). During this week, participants from Asia and Europe were introduced to the RoboCup Logistics League and the relevant components of the Fawkes software framework. The winter school was structured into theoretical sessions, in which members of the Carologistics team presented topics like perception, navigation, simulation, and behavior design, followed by hands-on sessions with the Fawkes software framework that deepened the theory and applied it in simulation and in the real environment. This has been made possible through the generous support of Festo Didactic SE and a RoboCup Federation grant. Videos and further information are available at https://www.carologistics.org/winter-school-2015/.

6.4 RoboCup Industrial Cross-Over Challenge

As a first step towards closer cooperation among the industry-inspired leagues under the RoboCup Industrial umbrella, together with stakeholders from the @Work league we have initiated a cross-over challenge [21]. It describes several milestones towards closer cooperation. In the challenge, teams from both leagues need to work together to fulfill a requested order: a human worker requests a product from an @Work robot, the request is transmitted to the RCLL, which handles the logistics part of the production, and the finished product is then handed back to the @Work league, where it is picked up and delivered to the human worker. The workflow of the cross-over scenario is depicted in Fig. 10.

Fig. 10. Workflow of the cross-over scenario between @Work and the RCLL [21].

7 Conclusion

In 2016, we have further adapted to the new game. We upgraded our custom gripper hardware based on the feedback from the 2015 season, and further adapted and extended the behavior and functional components. We have also continued our contributions to the league as a whole through active participation in the league’s committees, publishing papers about the RCLL, and initiating a cross-over challenge under the RoboCup Industrial umbrella. The development of the simulation we initiated has been transferred to a public project, where other teams have joined the effort, and it is used in a spin-off simulation competition. Most notably, however, we have released the complete software stack including all components and configurations as a ready-to-run package.

Due to changes in the RCLL, we expect products to be identified and tracked by means of a barcode in the future. This allows awarding points for intermediate production steps. We plan to integrate a detection component to be run on our robots, based on, e.g., the ZBarFootnote 5 and OpenCVFootnote 6 computer vision libraries. This would allow detecting and overcoming world model inconsistencies.

The website of the Carologistics RoboCup Team with further information and media can be found at https://www.carologistics.org.