Abstract
Recognizing the capture position of non-cooperative targets is an important component of on-orbit servicing. Traditional machine-learning approaches cannot satisfy the requirements of space missions, which demand generality, accuracy, and real-time performance. To meet these requirements, a deep-learning method based on the Faster Region-based Convolutional Neural Network (Faster R-CNN) is introduced for recognizing the capture position for a space robot. Based on the principle of similar training, a minimal-dataset construction method is proposed to address the scarcity of training samples from the space environment. First, the deep neural network is pre-trained on the ImageNet training set. Then, using the trained weights as the network's initial weights, the network is fine-tuned on 1000 training samples from the space environment. Finally, a simulation experiment is designed, and the experimental results indicate that the similar-training principle can solve the capture-position recognition problem for non-cooperative targets.
Cite this article
Hu, X., Huang, X., Hu, T. et al. A Minimal Dataset Construction Method Based on Similar Training for Capture Position Recognition of Space Robot. Wireless Pers Commun 102, 1935–1948 (2018). https://doi.org/10.1007/s11277-018-5247-y