ABSTRACT
The increasing number of satellite launches has made the capture of debris and the on-orbit servicing of orbiting satellites essential. In space, uncontrolled objects exhibit a tumbling motion about their major inertia axis. In this paper, we propose a featureless approach to visual servo control for a robotic system facing an uncooperative tumbling object. In contrast to previously studied approaches that require a 3D CAD model of the object or its reconstruction, we propose a novel solution that also forgoes the need for special markers. For this purpose, we leverage a deep convolutional neural network to automatically estimate the rotation-axis vector of a tumbling object from its video and motion characteristics. A Position-Based Visual Servoing (PBVS) algorithm can then use the extracted data for control. The effectiveness of the proposed framework is demonstrated through a V-REP simulation on the Reachy robotic arm.
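To make the control step concrete, the following is a minimal sketch (not the paper's implementation) of how a classical PBVS proportional law could consume the estimated quantities: given a translation error to the grasp pose and the network's unit rotation-axis estimate with an axis-angle orientation error, it produces linear and angular velocity commands for the end-effector. The function names and the gain `lam` are illustrative assumptions.

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def pbvs_command(t_err, rot_axis, rot_angle, lam=0.5):
    """Proportional PBVS law (illustrative):
    linear velocity  v = -lam * translation error
    angular velocity w = -lam * theta * u, with u the unit
    rotation axis and theta the axis-angle orientation error."""
    u = normalize(rot_axis)
    v = [-lam * e for e in t_err]
    w = [-lam * rot_angle * a for a in u]
    return v, w

# Example: grasp pose 0.2 m ahead along x, estimated tumble
# axis along z, 0.3 rad of orientation error.
v, w = pbvs_command(t_err=[0.2, 0.0, 0.0],
                    rot_axis=[0.0, 0.0, 1.0],
                    rot_angle=0.3)
```

The proportional structure drives both errors to zero exponentially; in practice the commands would be mapped through the arm's Jacobian to joint velocities.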