ABSTRACT
Robot formation control relies heavily on acquiring the leader robot's state accurately and in real time. Traditional communication-based and vision-based methods introduce large delays and lack robustness to noise. To satisfy both the accuracy and real-time requirements, we propose a cascaded multitask convolutional network that jointly addresses target detection and key point detection. To achieve high flexibility, we run experiments with different model hyperparameters and explore the trade-off between accuracy and real-time performance. The experimental results demonstrate that our method acquires the leader robot's information in real time with high accuracy. Furthermore, the method can be easily adapted to other vision-based tasks, laying a foundation for the design of vision-based robot controllers.
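The core idea of a multitask network — a shared feature extractor feeding separate heads for target detection and key point detection, trained with a weighted joint loss — can be illustrated with a toy sketch. This is not the paper's architecture; the layer sizes, the number of key points, and the weighting factor `alpha` are all illustrative assumptions, and the linear layers stand in for the convolutional backbone.

```python
import numpy as np

def shared_backbone(x, w):
    # Toy shared feature extractor: one linear layer + ReLU
    # (stands in for the shared convolutional trunk).
    return np.maximum(x @ w, 0.0)

def detection_head(feat, w_det):
    # Predicts a 4-vector bounding box (cx, cy, w, h) per sample.
    return feat @ w_det

def keypoint_head(feat, w_kp):
    # Predicts K 2-D key points per sample, flattened to length 2K.
    return feat @ w_kp

def multitask_loss(box_pred, box_true, kp_pred, kp_true, alpha=0.5):
    # Weighted sum of the two task losses (mean squared error here);
    # alpha balances detection against key point accuracy.
    det_loss = np.mean((box_pred - box_true) ** 2)
    kp_loss = np.mean((kp_pred - kp_true) ** 2)
    return alpha * det_loss + (1.0 - alpha) * kp_loss

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))                  # batch of 8 feature vectors
feat = shared_backbone(x, rng.normal(size=(32, 16)))
box = detection_head(feat, rng.normal(size=(16, 4)))
kp = keypoint_head(feat, rng.normal(size=(16, 10)))   # 5 key points
loss = multitask_loss(box, np.zeros_like(box), kp, np.zeros_like(kp))
print(box.shape, kp.shape, float(loss))
```

Because both heads share the backbone, a single forward pass yields both outputs, which is what makes the joint formulation attractive for the real-time requirement.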