Research Article | Open Access

When to Explain? Exploring the Effects of Explanation Timing on User Perceptions and Trust in AI systems

Published: 16 September 2024

Abstract

Explanations are believed to aid understanding of AI models, but do they affect users’ perceptions of and trust in AI, especially in the presence of algorithmic bias? If so, when should explanations be provided to optimally balance explainability and usability? To answer these questions, we conducted a user study (N = 303) exploring how explanation timing influences users’ perceived trust calibration, understanding of the AI system, user experience, and user interface satisfaction under both biased and unbiased AI performance conditions. We found that pre-explanations seem most valuable when the AI shows bias in its performance, whereas post-explanations appear more favorable when the system is bias-free. Showing both pre- and post-explanations tends to result in higher perceived trust calibration regardless of bias, despite concerns about content redundancy. Implications for designing socially responsible, explainable, and trustworthy AI interfaces are discussed.


Published In

TAS '24: Proceedings of the Second International Symposium on Trustworthy Autonomous Systems
September 2024, 335 pages
ISBN: 9798400709890
DOI: 10.1145/3686038
License: Creative Commons Attribution 4.0 International

Publisher

Association for Computing Machinery, New York, NY, United States

            Publication History

            Published: 16 September 2024

            Check for updates

            Qualifiers

            • Research-article
            • Research
            • Refereed limited

            Conference

            TAS '24

            Contributors

            Other Metrics

            Bibliometrics & Citations

            Bibliometrics

            Article Metrics

            • 0
              Total Citations
            • 455
              Total Downloads
            • Downloads (Last 12 months)455
            • Downloads (Last 6 weeks)112
            Reflects downloads up to 01 Mar 2025

            Other Metrics

            Citations

            View Options

            View options

            PDF

            View or Download as a PDF file.

            PDF

            eReader

            View online with eReader.

            eReader

            HTML Format

            View this article in HTML Format.

            HTML Format

            Login options

            Figures

            Tables

            Media

            Share

            Share

            Share this Publication link

            Share on social media