DOI: 10.1145/3478586.3478639
Research Article

Learning-based Approach for Estimation of Axis of Rotation for Markerless Visual Servoing to Tumbling Object

Published: 28 December 2021

ABSTRACT

The growing number of satellite launches has made debris capture and on-orbit servicing of orbiting satellites essential. In space, uncontrolled objects tend to tumble about their major inertial axis. In this paper, we propose a featureless approach to visual servo control of a robotic system toward an uncooperative tumbling object. In contrast to previously studied approaches that require a 3D CAD model of the object or its reconstruction, our solution also forgoes the need for special markers. To this end, we leverage a deep convolutional neural network to automatically estimate the axis-of-rotation vector of a tumbling object from its video and motion characteristics. A Position-Based Visual Servoing (PBVS) algorithm can then use the estimated axis for control. The effectiveness of the proposed framework is demonstrated in a V-REP simulation of the Reachy robotic arm.
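
Note: the paper's actual network architecture is not reproduced on this page. The following is a minimal, hypothetical PyTorch sketch of the kind of model the abstract describes, i.e., a CNN that maps a short clip of the tumbling object to a unit axis-of-rotation vector. The layer sizes and the sign-invariant cosine loss are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AxisOfRotationNet(nn.Module):
    """Hypothetical CNN regressing a unit axis-of-rotation vector
    from a short grayscale clip of a tumbling object."""
    def __init__(self, num_frames: int = 16):
        super().__init__()
        # Treat the frame stack as input channels; 3D convolutions
        # over (time, height, width) would be an equally plausible choice.
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 3)  # raw (x, y, z) axis estimate

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, num_frames, H, W), intensities in [0, 1]
        x = self.features(clip).flatten(1)
        # Normalize so the output is a direction on the unit sphere.
        return F.normalize(self.head(x), dim=1)

def axis_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Cosine-distance loss; the absolute value makes it sign-invariant,
    # since an axis and its negation describe the same rotation.
    return (1.0 - (pred * target).sum(dim=1).abs()).mean()

In a full pipeline of the kind the abstract outlines, the normalized prediction would feed the PBVS controller, which aligns the end-effector with the estimated spin axis before approaching the object.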


Published in

AIR '21: Proceedings of the 2021 5th International Conference on Advances in Robotics
June 2021, 348 pages
ISBN: 9781450389716
DOI: 10.1145/3478586
Copyright © 2021 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 28 December 2021


Qualifiers

research-article · Research · Refereed limited

Acceptance Rates

Overall acceptance rate: 69 of 140 submissions, 49%
