
Paper: Interactive Indoor Localization Based on Image Retrieval and Question Response

Authors: Xinyun Li 1; Ryosuke Furuta 2; Go Irie 1; Yota Yamamoto 1 and Yukinobu Taniguchi 1

Affiliations: 1 Department of Information and Computer Technology, Tokyo University of Science, Tokyo, Japan; 2 Institute of Industrial Science, The University of Tokyo, Tokyo, Japan

Keyword(s): Indoor Localization, Image Recognition, Similarity Image Search, Scene Text Information.

Abstract: Due to the increasing complexity of indoor facilities such as shopping malls and train stations, there is a need for technology that can determine the current location of a smartphone or other device user, since such facilities block the reception of GPS signals. Although many methods have been proposed for location estimation based on image search, their accuracy is unreliable because many indoor areas are architecturally similar and few features are distinctive enough to offer unequivocal localization. Some methods improve the accuracy of location estimation by increasing the number of query images, but this adds to the user's burden of image capture. In this paper, we propose a method for accurately estimating the current indoor location based on question-response interaction with the user, without imposing greater image capture loads. Specifically, the proposal (i) generates questions using object detection and scene text detection, (ii) sequences the questions by minimizing conditional entropy, and (iii) filters candidate locations to find the current location based on the user's responses.
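The core of step (ii) can be illustrated as a greedy selection that, at each turn, asks the question whose answer is expected to leave the least uncertainty about the user's location. The sketch below is not the paper's implementation: it assumes a uniform prior over the remaining candidate locations and models each question as a function returning the answer a user would give at a given location; the names conditional_entropy, next_question, localize, and ask_user are hypothetical.

import math
from collections import defaultdict

def conditional_entropy(candidates, question):
    """H(location | answer) for one question, under a uniform prior
    over the remaining candidate locations."""
    groups = defaultdict(int)
    for loc in candidates:
        groups[question(loc)] += 1      # answer this question would receive at loc
    n = len(candidates)
    # Sum over answers: P(answer) * H(location | answer) = (count/n) * log2(count)
    return sum((count / n) * math.log2(count) for count in groups.values())

def next_question(candidates, questions):
    """Greedily pick the question minimizing conditional entropy (step ii)."""
    return min(questions, key=lambda q: conditional_entropy(candidates, q))

def localize(candidates, questions, ask_user):
    """Filter candidate locations using the user's responses (step iii)."""
    questions = list(questions)
    while len(candidates) > 1 and questions:
        q = next_question(candidates, questions)
        questions.remove(q)
        answer = ask_user(q)            # e.g. "yes" / "no" from the user
        candidates = [loc for loc in candidates if q(loc) == answer]
    return candidates

# Toy usage with hypothetical candidate locations and scene-text questions:
# candidates = ["gate_A", "gate_B", "food_court"]
# q_near_exit = lambda loc: "yes" if loc.startswith("gate") else "no"
# q_sees_text_A = lambda loc: "yes" if loc == "gate_A" else "no"
# localize(candidates, [q_near_exit, q_sees_text_A], ask_user=lambda q: "yes")

With yes/no questions, this greedy rule behaves like a decision tree that splits the candidate set as evenly as possible at each step, so the number of questions needed grows roughly logarithmically with the number of candidate locations.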

CC BY-NC-ND 4.0

Paper citation in several formats:
Li, X.; Furuta, R.; Irie, G.; Yamamoto, Y. and Taniguchi, Y. (2023). Interactive Indoor Localization Based on Image Retrieval and Question Response. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP; ISBN 978-989-758-634-7; ISSN 2184-4321, SciTePress, pages 796-803. DOI: 10.5220/0011624300003417

@conference{visapp23,
author={Xinyun Li and Ryosuke Furuta and Go Irie and Yota Yamamoto and Yukinobu Taniguchi},
title={Interactive Indoor Localization Based on Image Retrieval and Question Response},
booktitle={Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP},
year={2023},
pages={796--803},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011624300003417},
isbn={978-989-758-634-7},
issn={2184-4321},
}

TY - CONF
JO - Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023) - Volume 4: VISAPP
TI - Interactive Indoor Localization Based on Image Retrieval and Question Response
SN - 978-989-758-634-7
IS - 2184-4321
AU - Li, X.
AU - Furuta, R.
AU - Irie, G.
AU - Yamamoto, Y.
AU - Taniguchi, Y.
PY - 2023
SP - 796
EP - 803
DO - 10.5220/0011624300003417
PB - SciTePress
ER -