
Target-tools recognition method based on an image feature library for space station cabin service robots

Published online by Cambridge University Press:  28 July 2014

Lingbo Cheng
Affiliation:
IRI, School of Mechatronic Engineering, Beijing Institute of Technology, Beijing, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, China; Key Laboratory of Intelligent Control and Decision of Complex System, China
Zhihong Jiang
Affiliation:
IRI, School of Mechatronic Engineering, Beijing Institute of Technology, Beijing, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, China; Key Laboratory of Intelligent Control and Decision of Complex System, China
Hui Li*
Affiliation:
IRI, School of Mechatronic Engineering, Beijing Institute of Technology, Beijing, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, China; Key Laboratory of Intelligent Control and Decision of Complex System, China
Bo Wei
Affiliation:
IRI, School of Mechatronic Engineering, Beijing Institute of Technology, Beijing, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, China; Key Laboratory of Intelligent Control and Decision of Complex System, China
Qiang Huang
Affiliation:
IRI, School of Mechatronic Engineering, Beijing Institute of Technology, Beijing, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, China; Key Laboratory of Intelligent Control and Decision of Complex System, China
*Corresponding author. E-mail: lihui2011@bit.edu.cn

Summary

This paper presents a method to improve the speed and accuracy of visual target recognition for space robots, based on illumination-invariant and affine-invariant feature extraction. The method accounts for illumination changes, strong nonlinear lighting caused by refraction and reflection, affine transformations of the target, and occlusion, all of which occur on cabin target surfaces and seriously degrade recognition accuracy. The same target is captured from multiple viewpoints to build a feature library that supports fast, accurate recognition from any viewpoint. An analysis of light intensity combined with a gray-level transformation yields a corrected image that reduces the influence of illumination change. The affine moment invariant features of the corrected images at the multiple viewpoints are then extracted, and their averages are stored in the library. To verify the method, a robot vision system supplied images that were processed by image preprocessing, dynamic local threshold segmentation, and feature extraction. These methods were validated on a space-robot target recognition system built for this research, and the experimental results show that they are feasible and effective.
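The pipeline summarized above (illumination correction, dynamic local threshold segmentation, affine moment invariant extraction, and averaging over viewpoints) can be illustrated with the following minimal Python/OpenCV sketch. It is not the authors' implementation: histogram equalization stands in for the paper's gray-level transformation, adaptive thresholding stands in for the dynamic local threshold step, only the first Flusser-Suk affine moment invariant is computed, and all function names and parameter values are illustrative assumptions.

import cv2
import numpy as np

def correct_illumination(gray):
    # Stand-in gray-level transform: histogram equalization to reduce
    # the effect of illumination change (the paper's exact correction
    # is not reproduced here).
    return cv2.equalizeHist(gray)

def segment_target(gray):
    # Dynamic local (adaptive) threshold segmentation; block size and
    # offset are assumed values.
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 5)

def first_affine_moment_invariant(binary):
    # First Flusser-Suk affine moment invariant:
    # I1 = (mu20 * mu02 - mu11^2) / mu00^4
    m = cv2.moments(binary, binaryImage=True)
    if m["m00"] == 0:
        return 0.0
    return (m["mu20"] * m["mu02"] - m["mu11"] ** 2) / m["m00"] ** 4

def build_feature_entry(view_images):
    # Average the invariant over several viewpoints of the same target
    # to form one entry of the feature library.
    feats = []
    for img in view_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        corrected = correct_illumination(gray)
        binary = segment_target(corrected)
        feats.append(first_affine_moment_invariant(binary))
    return float(np.mean(feats))

In practice, the library would hold a vector of several affine moment invariants per target rather than the single value shown here, and recognition would match the invariants of a query image against the stored averages.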

Type
Articles
Copyright
Copyright © Cambridge University Press 2014 

