Zebra Crossing Spotter: Automatic Population of Spatial Databases for Increased Safety of Blind Travelers

Published: 26 October 2015

Abstract

In this paper we propose a computer vision-based technique that mines existing spatial image databases for discovery of zebra crosswalks in urban settings. Knowing the location of crosswalks is critical for a blind person planning a trip that includes street crossing. By augmenting existing spatial databases (such as Google Maps or OpenStreetMap) with this information, a blind traveler may make more informed routing decisions, resulting in greater safety during independent travel. Our algorithm first searches for zebra crosswalks in satellite images; all candidates thus found are validated against spatially registered Google Street View images. This cascaded approach enables fast and reliable discovery and localization of zebra crosswalks in large image datasets. While fully automatic, our algorithm could also be complemented by a final crowdsourcing validation stage for increased accuracy.
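
The cascade is only outlined at a high level in the abstract. As a rough, hypothetical illustration of how such a two-stage pipeline could be organized, the Python sketch below scans an overhead tile for stripe-like candidates and then re-checks each candidate against spatially registered street-level crops. The OpenCV-based stripe detector, the thresholds, and all function names are assumptions made for illustration, not the authors' published algorithm.

# Hypothetical sketch of the cascaded idea: stage 1 scans an overhead
# (satellite) tile for stripe-like candidates; stage 2 re-checks each
# candidate in registered street-level images. All details are illustrative
# assumptions, not the paper's actual method.
import cv2
import numpy as np

def find_stripe_candidates(tile_bgr: np.ndarray) -> list[tuple[int, int]]:
    """Return rough (x, y) pixel locations of bright, stripe-like segments."""
    gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
    # Zebra markings appear as bright, high-contrast bands: threshold the
    # bright pixels, then look for short line segments among their edges.
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                               minLineLength=20, maxLineGap=5)
    if segments is None:
        return []
    return [((x1 + x2) // 2, (y1 + y2) // 2) for x1, y1, x2, y2 in segments[:, 0]]

def confirm_in_street_view(street_view_crops: list[np.ndarray]) -> bool:
    # Stage 2 placeholder: accept a satellite candidate only if at least one
    # registered street-level crop also shows stripe-like structure.
    return any(find_stripe_candidates(crop) for crop in street_view_crops)

def spot_crosswalks(satellite_tile, street_views_at):
    # Cascade: cheap overhead scan first, costlier ground-level check second.
    # `street_views_at` is assumed to map a candidate pixel location to the
    # registered street-level crops covering that spot.
    return [c for c in find_stripe_candidates(satellite_tile)
            if confirm_in_street_view(street_views_at(c))]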




Published In

ASSETS '15: Proceedings of the 17th International ACM SIGACCESS Conference on Computers & Accessibility
October 2015
466 pages
ISBN: 9781450334006
DOI: 10.1145/2700648

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. autonomous navigation
  2. crowdsourcing
  3. orientation and mobility
  4. satellite and street-level imagery
  5. visual impairments and blindness

Qualifiers

  • Research-article

Conference

ASSETS '15

Acceptance Rates

ASSETS '15 Paper Acceptance Rate: 30 of 127 submissions, 24%
Overall Acceptance Rate: 436 of 1,556 submissions, 28%


Article Metrics

  • Downloads (Last 12 months): 38
  • Downloads (Last 6 weeks): 2
Reflects downloads up to 16 Feb 2025


Cited By

  • (2024) Human–AI Collaboration for Remote Sighted Assistance: Perspectives from the LLM Era. Future Internet, 16(7), 254. DOI: 10.3390/fi16070254. Online publication date: 18-Jul-2024.
  • (2024) Unblind Text Inputs: Predicting Hint-text of Text Input in Mobile Apps via LLM. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3613904.3642939. Online publication date: 11-May-2024.
  • (2024) How to Detect Occluded Crosswalks in Overview Images? Comparing Three Methods in a Heavily Occluded Area. International Journal of Transportation Science and Technology. DOI: 10.1016/j.ijtst.2024.04.001. Online publication date: Apr-2024.
  • (2024) GMC: A general framework of multi-stage context learning and utilization for visual detection tasks. Computer Vision and Image Understanding, 241, 103944. DOI: 10.1016/j.cviu.2024.103944. Online publication date: Apr-2024.
  • (2023) A11yFutures: Envisioning the Future of Accessibility Research. Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility, 1-4. DOI: 10.1145/3597638.3615652. Online publication date: 22-Oct-2023.
  • (2023) Case Study: In-the-Field Accessibility Information Collection Using Gamification. Proceedings of the 20th International Web for All Conference, 66-74. DOI: 10.1145/3587281.3587288. Online publication date: 30-Apr-2023.
  • (2023) Automating intersection marking data collection and condition assessment at scale with an artificial intelligence-powered system. Computational Urban Science, 3(1). DOI: 10.1007/s43762-023-00098-7. Online publication date: 13-Jul-2023.
  • (2022) MultiCLU: Multi-stage Context Learning and Utilization for Storefront Accessibility Detection and Evaluation. Proceedings of the 2022 International Conference on Multimedia Retrieval, 304-312. DOI: 10.1145/3512527.3531361. Online publication date: 27-Jun-2022.
  • (2022) Maptimizer: Using Optimization to Tailor Tactile Maps to Users Needs. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3491102.3517436. Online publication date: 29-Apr-2022.
  • (2022) Gamification strategies to improve the motivation and performance in accessibility information collection. Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, 1-7. DOI: 10.1145/3491101.3519783. Online publication date: 27-Apr-2022.
