
Vision-Based Text Segmentation System for Generic Display Units

  • Conference paper
Bioinspired Applications in Artificial and Natural Computation (IWINAC 2009)

Abstract

The increasing use of display units in avionics motivates the need for vision-based text recognition systems to assist humans. The system for generic displays proposed in this paper includes some of the usual text recognition steps, namely localization, extraction and enhancement, and optical character recognition. The proposal has been fully developed and tested on a multi-display simulator. The commercial OCR module from the Matrox Imaging Library has been used to validate the proposed segmentation of textual displays.
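To make the pipeline named in the abstract (localization, extraction and enhancement, optical character recognition) concrete, the following is a minimal sketch in Python. It uses OpenCV and Tesseract purely as stand-ins: the morphological localization, Otsu thresholding, and the pytesseract call are illustrative assumptions, not the authors' implementation, which relied on the Matrox Imaging Library OCR module and a multi-display simulator.

# Hypothetical sketch of a display-text pipeline: localize candidate regions,
# enhance them by binarization, then run OCR. Not the paper's actual method.
import cv2
import pytesseract

def recognize_display_text(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Localization: highlight character strokes with a morphological gradient,
    # binarize, and merge nearby strokes into text-line blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
    _, mask = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    closed = cv2.morphologyEx(
        mask, cv2.MORPH_CLOSE,
        cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1)))
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    results = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 15 or h < 8:  # discard blobs too small to be text
            continue
        roi = gray[y:y + h, x:x + w]

        # Extraction/enhancement: Otsu binarization of the candidate region.
        _, bin_roi = cv2.threshold(roi, 0, 255,
                                   cv2.THRESH_BINARY | cv2.THRESH_OTSU)

        # Recognition: Tesseract in single-text-line mode.
        text = pytesseract.image_to_string(bin_roi, config="--psm 7").strip()
        if text:
            results.append(((x, y, w, h), text))
    return results

In practice each stage (region filtering rules, threshold choice, OCR engine) would be tuned to the display hardware; the sketch only mirrors the three-step structure described in the abstract.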




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Castillo, J.C., López, M.T., Fernández-Caballero, A. (2009). Vision-Based Text Segmentation System for Generic Display Units. In: Mira, J., Ferrández, J.M., Álvarez, J.R., de la Paz, F., Toledo, F.J. (eds) Bioinspired Applications in Artificial and Natural Computation. IWINAC 2009. Lecture Notes in Computer Science, vol 5602. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02267-8_25


  • DOI: https://doi.org/10.1007/978-3-642-02267-8_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02266-1

  • Online ISBN: 978-3-642-02267-8
