research-article
Public Access

Expanding a Large Inclusive Study of Human Listening Rates

Published: 21 July 2021

Abstract

As conversational agents and digital assistants become increasingly pervasive, understanding their synthetic speech becomes increasingly important. Simultaneously, speech synthesis is becoming more sophisticated and manipulable, providing the opportunity to optimize speech rate to save users time. However, little is known about people’s abilities to understand fast speech. In this work, we provide an extension of the first large-scale study on human listening rates, enlarging the prior study run with 453 participants to 1,409 participants and adding new analyses on this larger group. Run on LabintheWild, it used volunteer participants, was screen reader accessible, and measured listening rate by accuracy at answering questions spoken by a screen reader at various rates. Our results show that people who are visually impaired, who often rely on audio cues and access text aurally, generally have higher listening rates than sighted people. The findings also suggest a need to expand the range of rates available on personal devices. These results demonstrate the potential for users to learn to listen to faster rates, expanding the possibilities for human-conversational agent interaction.


Published In

ACM Transactions on Accessible Computing, Volume 14, Issue 3
September 2021, 199 pages
ISSN: 1936-7228
EISSN: 1936-7236
DOI: 10.1145/3477232

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 21 July 2021
Accepted: 01 April 2021
Revised: 01 April 2021
Received: 01 December 2020
Published in TACCESS Volume 14, Issue 3


Author Tags

  1. Synthetic speech
  2. accessibility
  3. blind
  4. crowdsourcing
  5. human abilities
  6. listening rate
  7. low-vision
  8. visually impaired

Qualifiers

  • Research-article
  • Research
  • Refereed

Funding Sources

  • NSF
