
RoboCloud: augmenting robotic visions for open environment modeling using Internet knowledge

  • Research Paper
  • Published in: Science China Information Sciences

Abstract

Modeling an open environment that contains unpredictable objects is a challenging problem in robotics. In traditional approaches, when a robot encounters an unknown object, an error is inevitably introduced into the robot’s environmental model, severely constraining the robot’s autonomy and possibly leading to disastrous consequences in certain settings. The abundant knowledge accumulated on the Internet has the potential to remedy the uncertainties that result from encountering unknown objects. However, robotic applications generally place considerable emphasis on quality of service (QoS); for this reason, directly accessing the Internet, whose latency and availability can be unpredictable, is generally not acceptable. RoboCloud is proposed as a novel approach to environment modeling that takes advantage of the Internet without sacrificing critical QoS properties. RoboCloud is a “mission cloud–public cloud” layered cloud organization model in which the mission cloud provides QoS-guaranteed environment modeling capability backed by built-in prior knowledge, while the public cloud comprises the existing services provided by the Internet. The “cloud phase transition” mechanism seeks help from the public cloud only when a request falls outside the knowledge of the mission cloud and the associated QoS cost is acceptable. We adopt semantic mapping, a typical robotic environment modeling task, to illustrate and substantiate our approach and its key mechanism. Experiments on open 2D and 3D datasets with real robots demonstrate that RoboCloud is able to augment robotic vision for open environment modeling.
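The “cloud phase transition” decision described in the abstract can be sketched as follows. This is a minimal illustration, not the paper’s actual implementation: the function names, the confidence threshold, and the latency budget are all hypothetical assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

# Illustrative stand-ins for the mission cloud's built-in prior knowledge
# and an assumed QoS budget for an Internet round trip.
MISSION_KNOWLEDGE = {"chair", "table", "door"}
QOS_BUDGET_MS = 200.0

def recognize(obj_features, mission_detect, public_detect, estimate_latency_ms):
    """Cloud phase transition: prefer the QoS-guaranteed mission cloud;
    escalate to the public cloud only when the object falls outside
    mission knowledge AND the estimated QoS cost is acceptable."""
    det = mission_detect(obj_features)            # local, QoS-guaranteed inference
    if det.label in MISSION_KNOWLEDGE and det.confidence >= 0.7:
        return det                                # mission cloud suffices
    if estimate_latency_ms() <= QOS_BUDGET_MS:    # is Internet access affordable?
        return public_detect(obj_features)        # seek help from the public cloud
    return Detection("unknown", 0.0)              # degrade gracefully, stay within QoS
```

The key design point the sketch captures is that the public cloud is consulted conditionally, so the unpredictable Internet path never sits on the critical QoS path by default.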



Acknowledgements

This work was partially supported by National Natural Science Foundation of China (Grant Nos. 91118008, 61202117, 61772030), Special Program for the Applied Basic Research of National University of Defense Technology (Grant No. ZDYYJCYJ20140601), and Jiangsu Future Networks Innovation Institute Prospective Research Project on Future Networks (Grant No. BY2013095-2-08).

Author information


Corresponding author

Correspondence to Huaimin Wang.


About this article


Cite this article

Li, Y., Wang, H., Ding, B. et al. RoboCloud: augmenting robotic visions for open environment modeling using Internet knowledge. Sci. China Inf. Sci. 61, 050102 (2018). https://doi.org/10.1007/s11432-017-9380-5

