Do research articles with more readable abstracts receive higher online attention? Evidence from Science


Abstract

The value of scientific research is manifested in its impact on the scientific community as well as among the general public. Given the importance of abstracts in determining whether research articles (RAs) are retrieved and read, recent research has paid increasing attention to the effect of abstract readability on the scientific impact of RAs. However, little research to date has examined the effect of abstract readability on the impact of RAs among the general public. To address this gap, this study investigates the relationship between abstract readability and the online attention received by RAs. Our dataset consisted of the abstracts of 550 RAs from 11 disciplines published in Science in 2012 and 2018. Thirty-nine lexical and syntactic complexity indices were employed to measure the readability of the abstracts, and the Altmetric attention scores of the RAs were used to measure the online attention they received. Results showed that abstract readability is significantly related to the online attention RAs receive, and that this relationship is significantly affected by discipline and publication time. Our findings have useful implications for making RA abstracts more accessible to the general public.
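The core analytical step can be illustrated with a brief sketch. The Python code below is not the authors' actual pipeline: it approximates a single classic readability index (Flesch Reading Ease) with a rough regex-based syllable counter, pairs it with hypothetical abstract texts and made-up Altmetric attention scores, and computes a Spearman correlation between the two. The sample data, the syllable heuristic, and the helper names are assumptions for illustration only; the study itself uses thirty-nine lexical and syntactic complexity indices and accounts for discipline and publication year.

import re
from scipy.stats import spearmanr

def count_syllables(word):
    # Rough heuristic: count contiguous vowel groups; published readability
    # studies typically rely on dictionary-based syllable counts instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Flesch (1948): 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

# Hypothetical data: three toy "abstracts" and invented Altmetric attention scores.
abstracts = [
    "We measured gene expression in mice. Expression levels rose after treatment.",
    "A survey of two hundred firms suggests that remote work can increase output.",
    "Quantum dots emit light whose color depends on the size of the particle.",
]
altmetric_scores = [35, 120, 18]

readability = [flesch_reading_ease(a) for a in abstracts]
rho, p_value = spearmanr(readability, altmetric_scores)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")

With real data, one would compute the full set of readability indices for each abstract and examine how the readability-attention relationship varies across disciplines and publication years, but the basic step of relating readability scores to attention scores is the same.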


Notes

  1. http://www.personal.psu.edu/xxl13/downloads.


Funding

This research was supported by a grant from the National Social Science Fund of China (18BYY110) to the first author.

Author information

Contributions

TJ: Conceptualization, Methodology, Writing—review & editing. HD: Data curation, Investigation, Writing—review & editing. XL: Conceptualization, Methodology, Writing—review & editing. JN: Methodology, Investigation, Writing—review & editing. KG: Methodology, Investigation, Writing—review & editing.

Corresponding author

Correspondence to Kai Guo.


About this article

Cite this article

Jin, T., Duan, H., Lu, X. et al. Do research articles with more readable abstracts receive higher online attention? Evidence from Science. Scientometrics 126, 8471–8490 (2021). https://doi.org/10.1007/s11192-021-04112-9
