
A Minimal Dataset Construction Method Based on Similar Training for Capture Position Recognition of Space Robot

Published in: Wireless Personal Communications

Abstract

Recognizing the capture position of a non-cooperative target is an important component of on-orbit servicing. Traditional machine-learning methods cannot satisfy the requirements of space missions, which demand universality, accuracy, and real-time performance. To meet these requirements, a deep-learning detector, the Faster Region-based Convolutional Neural Network (Faster R-CNN), is introduced for space-robot capture-position recognition. Based on the principle of similar training, a minimal dataset construction method is proposed to address the scarcity of training samples from the space environment. First, the deep neural network is pre-trained on the ImageNet training set. Then, using the trained weights to initialize the network, it is fine-tuned with 1000 training samples from the space environment. Finally, a simulation experiment is designed, and the results indicate that the similar-training principle can solve the problem of capture-position recognition for non-cooperative targets.
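The pretrain-then-fine-tune workflow the abstract describes can be illustrated with a deliberately small sketch. This is not the paper's Faster R-CNN pipeline; it is a toy linear classifier in numpy, where a large "similar-domain" dataset stands in for ImageNet and a handful of shifted samples stands in for the ~1000 space-environment images. All function names and data parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes; `shift` moves both class means to mimic a
    # related but different ("similar") domain, e.g. ground vs. on-orbit images.
    X0 = rng.normal(-1.0 + shift, 1.0, size=(n, 2))
    X1 = rng.normal(+1.0 + shift, 1.0, size=(n, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * n + [1] * n)
    return X, y

def train_logreg(X, y, w=None, epochs=200, lr=0.1):
    # Batch gradient descent on the logistic loss. Passing `w` warm-starts
    # training from pretrained weights -- the transfer-learning step.
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0) == y))

# Step 1: "pre-train" on a large similar-domain dataset (stands in for ImageNet).
X_big, y_big = make_data(500)
w_pre = train_logreg(X_big, y_big)

# Step 2: fine-tune with only a few target-domain samples, initialized from
# the pretrained weights (stands in for the 1000 space-environment images).
X_small, y_small = make_data(10, shift=0.5)
w_tuned = train_logreg(X_small, y_small, w=w_pre.copy(), epochs=20)

# Evaluate on held-out target-domain data.
X_test, y_test = make_data(200, shift=0.5)
print("fine-tuned accuracy:", accuracy(w_tuned, X_test, y_test))
```

Because the pretrained weights already encode the decision direction shared by the two domains, the fine-tuning stage only has to make small corrections, which is why it can succeed with far fewer samples than training from scratch.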






Corresponding author

Correspondence to Xiaodong Hu.


Cite this article

Hu, X., Huang, X., Hu, T. et al. A Minimal Dataset Construction Method Based on Similar Training for Capture Position Recognition of Space Robot. Wireless Pers Commun 102, 1935–1948 (2018). https://doi.org/10.1007/s11277-018-5247-y
