Abstract
This paper investigates the potential of deep learning methods to detect and segment objects from vision sensors mounted on autonomous robots, in support of task allocation in unmanned systems. An object instance segmentation framework, Mask R-CNN, is experimentally evaluated and compared with its predecessor architecture, Faster R-CNN. The former adds an object mask prediction branch in parallel with the existing branches for target object localization and class recognition, which represents a significant benefit for autonomous robot navigation. The performance of the two architectures is compared over scenes of varying complexity. While both networks perform well on recognition and bounding box estimation, experimental results show that Mask R-CNN generally outperforms Faster R-CNN, particularly because of the accurate mask predictions it generates. These results align well with the requirements imposed by an automated task allocation mechanism for a group of unmanned vehicles.
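The paper does not include code; the sketch below is a minimal illustration, assuming the off-the-shelf torchvision implementations of the two detectors discussed in the abstract and a random tensor standing in for a robot camera frame. It highlights the architectural difference the abstract describes: Faster R-CNN returns boxes, labels, and scores, while Mask R-CNN additionally returns per-instance masks that downstream task allocation could exploit.

```python
# Minimal sketch (assumes PyTorch/torchvision; the paper does not specify its framework).
import torch
import torchvision

faster_rcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True).eval()
mask_rcnn = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

image = torch.rand(3, 480, 640)  # placeholder for a camera frame, values in [0, 1]

with torch.no_grad():
    det = faster_rcnn([image])[0]  # keys: 'boxes', 'labels', 'scores'
    seg = mask_rcnn([image])[0]    # same keys plus 'masks' (N x 1 x H x W soft masks)

# Keep confident detections and binarize the soft masks for per-pixel instance labels.
keep = seg["scores"] > 0.5
boxes = seg["boxes"][keep]
masks = seg["masks"][keep, 0] > 0.5  # boolean per-pixel instance masks
print(f"Faster R-CNN detections: {len(det['boxes'])}, "
      f"Mask R-CNN instances kept: {int(keep.sum())}")
```

The confidence threshold of 0.5 is an arbitrary illustrative choice, not a value taken from the paper.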
References
Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms. Algorithms and Combinatorics. Springer, Heidelberg (2008). https://doi.org/10.1007/3-540-29297-7
Hall, P.: On representatives of subsets. In: Gessel, I., Rota, G.C. (eds.) Classic Papers in Combinatorics. Modern Birkhäuser Classics, pp. 58–62. Birkhäuser, Boston (2009). https://doi.org/10.1007/978-0-8176-4842-8_4
Smith, S.L., Bullo, F.: Target assignment for robotic networks: asymptotic performance under limited communication. In: Proceedings American Control Conference, New York, NY, pp. 1155–1160 (2007)
Jones, C., Mataric, M.J.: Adaptive division of labor in large-scale minimalist multi-robot systems. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, vol. 2, pp. 1969–1974 (2003)
Lang, T., Toussaint, M.: Relevance grounding for planning in relational domains. In: Buntine, W., Grobelnik, M., Mladenić, D., Shawe-Taylor, J. (eds.) ECML PKDD 2009. LNCS (LNAI), vol. 5781, pp. 736–751. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04180-8_65
Toussaint, M., Plath, N., Lang, T., Jetchev, N.: Integrated motor control, planning, grasping and high-level reasoning in a blocks world using probabilistic inference. In: Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, pp. 385–391 (2010)
Wu, H., Li, H., Xiao, R., Liu, J.: Modeling and simulation of dynamic ant colony’s labor division for task allocation of UAV swarm. Phys. A 491, 127–141 (2018)
Al-Buraiki, O., Payeur, P.: Agent-Task assignation based on target characteristics for a swarm of specialized agents. In: 13th Annual IEEE International Systems Conference, Orlando, FL, pp. 268–275, April 2019
Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Proceedings of the 28th International Conference on Neural Information Processing Systems, vol. 1, pp. 91–99, December 2015
He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2017)
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, June 2015
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Lin, T.-Y., Dollar, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 936–944 (2017)
Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes (VOC) challenge. Int. J. Comput. Vision 88(2), 303–338 (2010)
Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
Acknowledgements
The authors wish to acknowledge the support from the Department of National Defence of Canada toward this research under the Innovation for Defence Excellence and Security (IDEaS) program.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Wu, W., Payeur, P., Al-Buraiki, O., Ross, M. (2019). Vision-Based Target Objects Recognition and Segmentation for Unmanned Systems Task Allocation. In: Karray, F., Campilho, A., Yu, A. (eds) Image Analysis and Recognition. ICIAR 2019. Lecture Notes in Computer Science, vol 11662. Springer, Cham. https://doi.org/10.1007/978-3-030-27202-9_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-27201-2
Online ISBN: 978-3-030-27202-9