DOI: 10.1145/3491102.3502081

Cocomix: Utilizing Comments to Improve Non-Visual Webtoon Accessibility

Published: 29 April 2022

ABSTRACT

Webtoons are a type of digital comic read online, where readers can leave comments to share their thoughts on the story. While the medium has surged in popularity internationally, people with visual impairments cannot enjoy webtoons due to the lack of an accessible format. Although traditional image description practices can be adopted, the resulting descriptions cannot preserve webtoons’ unique values, such as control over the reading pace and social engagement through comments. To improve the webtoon reading experience for blind and low vision (BLV) users, we propose Cocomix, an interactive webtoon reader that incorporates comments into the design of novel webtoon interactions. Since comments can identify story highlights and provide additional context, we designed a system that provides 1) comment-based adaptive descriptions with selective access to details and 2) panel-anchored comments for easy access to relevant descriptive comments. Our evaluation (N=12) showed that Cocomix users could adapt descriptions to various needs and make better use of comments.
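The two interactions named in the abstract can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors' implementation: Cocomix presumably matches comments to panels with richer NLP (e.g. sentence embeddings), whereas this sketch uses naive word overlap. The panel data, function names, and matching rule are all invented for illustration.

```python
# Hypothetical sketch (not the paper's code) of:
# 1) layered panel descriptions (summary vs. detail), and
# 2) anchoring free-text comments to the panel they most likely describe,
#    using naive word overlap in place of a real semantic-similarity model.

def anchor_comments(panels, comments):
    """Map each comment to the index of the panel whose detailed
    description shares the most words with it (ties go to the
    earliest panel)."""
    anchored = {i: [] for i in range(len(panels))}
    for comment in comments:
        cwords = set(comment.lower().split())
        best = max(
            range(len(panels)),
            key=lambda i: len(cwords & set(panels[i]["detail"].lower().split())),
        )
        anchored[best].append(comment)
    return anchored

def describe(panel, level="summary"):
    """Return the requested level of description for a panel,
    modeling selective access to detail."""
    return panel[level]

panels = [
    {"summary": "A knight arrives.",
     "detail": "A knight in silver armor arrives at the castle gate."},
    {"summary": "The dragon wakes.",
     "detail": "Deep below a red dragon wakes and spreads its wings."},
]
comments = ["that dragon design is amazing", "love the shiny silver armor"]

anchored = anchor_comments(panels, comments)
```

A real system would also need to rank comments by descriptiveness (the paper cites work on identifying useful comments) before anchoring them.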


Supplemental Material

• 3491102.3502081-talk-video.mp4 (mp4, 62.3 MB)
• 3491102.3502081-video-preview.mp4 (mp4, 5 MB)


      • Published in

        cover image ACM Conferences
        CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
        April 2022
        10459 pages
        ISBN:9781450391573
        DOI:10.1145/3491102

        Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 29 April 2022


        Qualifiers

        • research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall acceptance rate: 6,199 of 26,314 submissions, 24%
