
Accessible images (AIMS): a model to build self-describing images for assisting screen reader users


Abstract

Non-visual web access depends on textual descriptions of the non-text elements of web pages. Existing methods of describing images for non-visual access do not provide a strong coupling between an image and its description: if an image is reused multiple times, either within a single Web site or across several Web sites, the description has to be maintained separately at every instance. This paper presents a tightly coupled model, termed accessible images (AIMS), which uses a steganography-based approach to embed the description in the image on the server side and updates the alt text of the web page with the description extracted by a browser extension. The proposed AIMS model is built toward a web image description ecosystem in which images become self-describing. The primary advantage of the AIMS model is the elimination of redundant descriptions of the same image resource at multiple instances. Experiments conducted on a dataset confirm that the AIMS model is capable of embedding and extracting descriptions with an accuracy of 99.6%.
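
The embed-and-extract mechanism the abstract describes can be illustrated with a minimal least-significant-bit (LSB) sketch in Python, assuming the PIL library mentioned in footnote 1. The function names, the red-channel LSB scheme, the null-byte terminator, and the PNG output are illustrative assumptions, not the authors' AIMS implementation.

```python
# Minimal LSB-steganography sketch of the embed/extract idea (an assumption,
# not the AIMS implementation): hide a UTF-8 description in the red-channel
# LSBs on the server side and recover it on the client side.
from PIL import Image

TERMINATOR = b"\x00"  # assumed end-of-description marker

def embed_description(image_path, description, out_path):
    """Hide a UTF-8 description in the red-channel LSBs of a copy of the image."""
    img = Image.open(image_path).convert("RGB")
    pixels = list(img.getdata())
    payload = description.encode("utf-8") + TERMINATOR
    bits = "".join(f"{byte:08b}" for byte in payload)
    if len(bits) > len(pixels):
        raise ValueError("description too long for this image")
    stego = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])   # overwrite the red LSB with one payload bit
        stego.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(stego)
    out.save(out_path, "PNG")             # lossless format, so the LSBs survive

def extract_description(image_path):
    """Recover the hidden description from the red-channel LSBs."""
    img = Image.open(image_path).convert("RGB")
    bits = "".join(str(r & 1) for r, g, b in img.getdata())
    data = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = int(bits[i:i + 8], 2)
        if byte == 0:                      # terminator reached
            break
        data.append(byte)
    return data.decode("utf-8", errors="replace")
```

In the workflow the abstract outlines, something like embed_description would run once on the server, while a browser extension would perform the equivalent of extract_description and write the result into the image's alt attribute; a lossless format is assumed here because lossy re-encoding would destroy an LSB payload.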


Notes

  1. https://pypi.python.org/pypi/PIL.

  2. https://github.com/shakoorst/AIMS-Dataset.

  3. The machine used for the test was an Intel Pentium (Core 2 Duo) with 2 GB of RAM and an Internet connection of 512 Kbps.

  4. http://github.com/jamesturk/jellyfish.

  5. https://github.com/jterrace/pyssim.
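
Footnotes 4 and 5 name the tools (jellyfish and pyssim) presumably used to score the 99.6% extraction accuracy reported in the abstract and the visual fidelity of the stego images. A hedged sketch of the string-level part follows, using jellyfish's Jaro-Winkler similarity; the helper name and the version-compatibility shim are assumptions, and an analogous SSIM comparison of the original and stego images could be run with pyssim.

```python
# Hedged sketch (assumed helper, not the paper's evaluation code): score how
# closely an extracted description matches the original using Jaro-Winkler
# similarity from the jellyfish library (footnote 4).
import jellyfish

def extraction_accuracy(original: str, extracted: str) -> float:
    """Return a Jaro-Winkler similarity in [0, 1] between the two texts."""
    # Newer jellyfish releases expose jaro_winkler_similarity; older ones jaro_winkler.
    jw = getattr(jellyfish, "jaro_winkler_similarity", None) or jellyfish.jaro_winkler
    return jw(original, extracted)

if __name__ == "__main__":
    # A perfectly recovered description scores 1.0; small corruptions score slightly less.
    print(extraction_accuracy("A red bus outside the station",
                              "A red bus outside the station"))
```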


Author information


Corresponding author

Correspondence to K. S. Kuppusamy.


Cite this article

Nengroo, A.S., Kuppusamy, K.S. Accessible images (AIMS): a model to build self-describing images for assisting screen reader users. Univ Access Inf Soc 17, 607–619 (2018). https://doi.org/10.1007/s10209-017-0607-z
