Abstract
This research aimed to develop a method usable both for self-localization and for correcting dead reckoning from photographed images. To this end, two approaches were applied: estimating the position from the surrounding environment, and estimating it from the distances between the robot's own position and known targets. A convolutional neural network (CNN) and a convolutional long short-term memory (CLSTM) network were used for self-localization, with panorama images and general images as input data. The most accurate configuration was the one that "uses a CNN with the pooling layers partially eliminated and a panorama image as input, calculates circle intersections from the distances between the own position and the targets, adopts the three points with the closest intersections, and does not estimate the position when the closest intersection has a large error." The overall accuracy was 0.217 [m] in the x- and y-coordinates. Given that the room measured about 12 [m] by 12 [m] and only about 3,000 training samples were available, this error is considered small.
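The geometric step described above can be sketched as follows. This is a minimal illustration of position estimation from circle intersections, not the authors' implementation: `circle_intersections`, `estimate_position`, and the `max_spread` rejection threshold are illustrative names and assumptions, and three targets with known positions and measured ranges are assumed.

```python
import itertools
import math

def circle_intersections(p0, r0, p1, r1):
    """Return the (up to two) intersection points of circles
    centered at p0 and p1 with radii r0 and r1."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    d = math.hypot(dx, dy)
    # No intersection: concentric, too far apart, or one inside the other.
    if d == 0 or d > r0 + r1 or d < abs(r0 - r1):
        return []
    a = (r0 * r0 - r1 * r1 + d * d) / (2 * d)   # distance along the center line
    h = math.sqrt(max(r0 * r0 - a * a, 0.0))    # half chord length
    mx, my = p0[0] + a * dx / d, p0[1] + a * dy / d
    ox, oy = -dy / d * h, dx / d * h            # offset perpendicular to center line
    return [(mx + ox, my + oy), (mx - ox, my - oy)]

def estimate_position(targets, dists, max_spread=0.5):
    """targets: three known (x, y) positions; dists: measured ranges to each.
    Each pair of range circles yields up to two intersection points; choose
    one point per pair so the three chosen points lie closest together and
    return their mean. Return None (no estimate) when the spread of the
    closest points exceeds max_spread, mirroring the paper's rejection of
    estimates whose closest intersections have a large error."""
    pair_pts = []
    for i, j in itertools.combinations(range(3), 2):
        pts = circle_intersections(targets[i], dists[i], targets[j], dists[j])
        if not pts:
            return None
        pair_pts.append(pts)
    best_spread, best_combo = None, None
    for combo in itertools.product(*pair_pts):
        spread = max(math.dist(a, b)
                     for a, b in itertools.combinations(combo, 2))
        if best_spread is None or spread < best_spread:
            best_spread, best_combo = spread, combo
    if best_spread > max_spread:
        return None
    return (sum(p[0] for p in best_combo) / 3,
            sum(p[1] for p in best_combo) / 3)
```

For example, with targets at (0, 0), (10, 0), and (0, 10) and exact ranges measured from (3, 4), the three chosen intersection points coincide and the estimate is (3, 4); perturbing one range enough makes the closest intersections spread apart and the estimate is rejected.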
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Hashimoto, S., Namihira, K. (2020). Self-localization from a 360-Degree Camera Based on the Deep Neural Network. In: Arai, K., Kapoor, S. (eds) Advances in Computer Vision. CVC 2019. Advances in Intelligent Systems and Computing, vol 943. Springer, Cham. https://doi.org/10.1007/978-3-030-17795-9_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-17794-2
Online ISBN: 978-3-030-17795-9
eBook Packages: Intelligent Technologies and Robotics (R0)