Abstract
Some precision assembly procedures on industrial lines are still operated manually. Precision assembly imposes the most stringent accuracy requirements, characterized by a small range of 6-D motion, tight tolerances between parts, and rich contact. It is also difficult to automate because of unintended occlusions, varying illumination, and cumulative motion errors. This paper therefore proposes a cross-modal image prediction network for precision assembly to address these problems. The network predicts the representation vectors of the actual grayscale images of the end-effector. During training, a self-supervised learning method is used to obtain the ground-truth representation vectors of the reference and actual images. These vectors are then predicted by combining the reference image representation, robot force/torque feedback, and the position/pose of the end-effector. To visualize prediction performance, a decoder trained with the above self-supervised network deconvolves the predicted representation vectors into predicted images, which can be compared with the originals. Finally, USB-C insertion experiments verify the algorithm's performance, with hybrid force/position control used for compliant assembly. The algorithm achieves a 96% assembly success rate, an average of 5 assembly steps, and an average assembly time of about 5.8 s.
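For readers who want a concrete picture of the pipeline summarized above, the sketch below shows one plausible realization of its three components: a self-supervised image encoder, a cross-modal predictor that fuses the reference-image representation with force/torque and pose signals, and a deconvolutional decoder for visualization. This is a minimal sketch, not the authors' implementation; the input resolution (64x64 grayscale), latent dimension, layer widths, and all module names are illustrative assumptions.

# Minimal sketch (not the authors' code) of the cross-modal prediction
# pipeline described in the abstract. Input resolution, latent size,
# layer widths, and module names are all assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    # Self-supervised CNN encoder: 64x64 grayscale image -> representation vector.
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, img):   # img: (B, 1, 64, 64)
        return self.net(img)  # -> (B, latent_dim)

class ImageDecoder(nn.Module):
    # Deconvolutional decoder: representation vector -> reconstructed image,
    # used to visualize and compare predicted images with the originals.
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))  # -> (B, 1, 64, 64)

class CrossModalPredictor(nn.Module):
    # Fuses the reference-image representation with 6-axis force/torque
    # feedback and the 6-D end-effector pose to predict the representation
    # vector of the actual image.
    def __init__(self, latent_dim=128, ft_dim=6, pose_dim=6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + ft_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z_ref, ft, pose):
        return self.mlp(torch.cat([z_ref, ft, pose], dim=-1))

# Assumed training signal: the encoder/decoder pair is first trained as a
# self-supervised autoencoder; the predictor is then regressed onto the
# encoder's output for the actual image.
encoder, decoder, predictor = ImageEncoder(), ImageDecoder(), CrossModalPredictor()
ref_img, actual_img = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
ft, pose = torch.rand(1, 6), torch.rand(1, 6)
z_pred = predictor(encoder(ref_img), ft, pose)
loss = F.mse_loss(z_pred, encoder(actual_img))  # representation-matching loss (assumed)
img_pred = decoder(z_pred)                      # decoded image for visual comparison

Under these assumptions, the decoded prediction img_pred can be compared side by side with the actual camera image, which is how the abstract describes visualizing prediction performance.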
This work was supported partially by the NSFC-Shenzhen Robotics Basic Research Center Program (No. U1913208) and partially by the Shenzhen Science and Technology Program (No. JSGG20210420091602008).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, R., Li, A., Yang, X., Lou, Y. (2022). Precision Peg-In-Hole Assembly Based on Multiple Sensations and Cross-Modal Prediction. In: Liu, H., et al. (eds.) Intelligent Robotics and Applications. ICIRA 2022. Lecture Notes in Computer Science, vol. 13458. Springer, Cham. https://doi.org/10.1007/978-3-031-13841-6_49
DOI: https://doi.org/10.1007/978-3-031-13841-6_49
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-13840-9
Online ISBN: 978-3-031-13841-6
eBook Packages: Computer Science (R0)