Abstract
Monetary rewards granted on a per-publication basis to individual authors are an important policy instrument for stimulating scientific research. An inconsistent feature of many article reward schemes is that they allocate rewards using journal-level citation metrics. In this paper we assess the actual article-level citation impact of about 10,000 articles whose authors received financial rewards within the Romanian Program for Rewarding Research Results (PR3), an exemplary money-per-publication program that uses journal metrics to allocate rewards. We present PR3 and offer both a comprehensive empirical analysis of its results and a scientometric critique of its methodology. We first use a reference dataset of 1.9 million articles to compare the impact of each rewarded article from five consecutive PR3 editions with the impact of all other articles published in the same journal and year. To determine the wider global impact of PR3 papers, we then benchmark their citation performance against the worldwide field baselines and percentile rank classes from the Clarivate Analytics Essential Science Indicators. We find that, within their journals, PR3 articles span the full range of citation impact almost uniformly. In the larger context of global broad fields of science, almost two thirds of the rewarded papers are below the world average in their field and more than a third lie below the world median. Although policymakers intended them to exemplify excellence, many PR3 articles exhibit a rather commonplace individual citation performance and have not achieved the impact that was presumed, and rewarded, on the basis of journal metrics after publication. Furthermore, identical rewards have been offered for articles with markedly different impact. Direct monetary incentives for articles may support productivity, but they cannot guarantee impact.
Notes
Note also that the per-publication nature of PR3 removes the need to translate publications into any intermediate system of points that would then determine rewards.
We considered whether this might also be due to financial constraints in the respective years, but cross-referencing the budget specifications from the information packages with figures on the total amounts disbursed for the articles indicates that the initially allocated budget was in fact not exhausted in any of the three years. For 2011 and 2012, the exclusion of proceedings papers from eligible documents (except in the social sciences, which are, however, poorly represented) might also explain the decrease.
As WoS users know, some documents can have multiple attributions. Those listed as “article; proceedings paper”, “article; book chapter”, etc., are not eligible submissions in PR3.
These figures are based on calculations considering the annual exchange rate from the National Bank of Romania (listed in Table 1) and net income data from the Romanian National Institute of Statistics (https://insse.ro/cms/ro/content/c%C3%A2%C8%99tiguri-salariale-din-1938-serie-anual%C4%83-0).
Knowledgeable readers of the journal are undoubtedly familiar with at least some of these, so we relegate their exposition to the part of the paper where we discuss our results.
Note that, together with impact, these are exactly the aspects PR3 states that it rewards.
It also has the benefit of avoiding the problems associated with the classification of journals in the WoS categories, especially the issue of multiple assignment of papers to more than one category, with which we engage in the discussion section.
These mostly include proceedings papers and reviews, but also two retracted publications, both included in the red reward class, published in 2014 (one in the International Journal of Obesity, the other in Diabetes) and rewarded in the 2015 PR3. The PR3 information packages do not mention the unlikely eventuality of article retraction or any steps that might be taken in such cases.
Similarly, our investigation does not include articles published in 2015 but rewarded in the 2016 PR3.
The information was published (and is still published for recent PR3 editions) in multiple PDF files made public as submissions are processed. In total, 36 such files contain the information for the 2011–2015 competitions considered in our investigation. They are available (under the heading for each year) at the following link: http://old.uefiscdi.ro/articole/1722/Articole.html. There is a notable inconsistency in the way information was published in the lists from one year to another, and sometimes even between lists belonging to the same year. We therefore opted to keep only the minimal set of variables that were reported consistently across the entire five-year window, and only for the articles that were accepted for a reward.
Note that this superset does not include all WoS-indexed publications from 2011–2015 since many of the thousands of indexed journals did not publish any PR3 article or were not eligible for submission in PR3.
Our assignment of PR3 papers to ESI categories was based on the mapping of their publishing journals to the broad categories most recently updated in the June 2020 ESI master journal list. A limitation we must acknowledge is that papers assigned to the multidisciplinary category in our datasets are not necessarily assigned to this particular category in the ESI where information on the cited references is used to attribute the papers to the other categories. Of the 10,281 PR3 papers 229 have been attributed to the multidisciplinary category.
Bornmann and Williams also propose the alternative CP-IN indicator, which has a different interpretation: it represents the cumulative percentage of papers having a citation impact lower than or equivalent to that of a focal paper. For the three examples above, the CP-IN values for the three focal papers would be exactly 10, 50 and 90. A disadvantage of CP-IN is that in reference sets with uncited items the papers with no citations would appear to have some sort of citation impact as a consequence of the “or equivalent to” clause. In CP-EX the percentile corresponding to the first empirical citation count in the reference distribution is 0, and this value can be expected to be overrepresented compared to the others.
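The difference between the two percentile variants can be sketched as follows. This is a minimal illustration of the exclusive (CP-EX) and inclusive (CP-IN) definitions described above, not the authors' or Bornmann and Williams' implementation, and the reference set of citation counts is invented:

```python
def cp_ex(focal, reference):
    """CP-EX: percentage of papers in the reference set with
    strictly fewer citations than the focal paper."""
    return 100 * sum(c < focal for c in reference) / len(reference)

def cp_in(focal, reference):
    """CP-IN: percentage of papers with fewer citations than,
    or the same number of citations as, the focal paper."""
    return 100 * sum(c <= focal for c in reference) / len(reference)

# Invented reference distribution containing uncited items:
reference = [0, 0, 0, 1, 2, 3, 5, 8, 13, 21]

# Under CP-EX an uncited paper sits at percentile 0 ...
print(cp_ex(0, reference))  # 0.0
# ... whereas under CP-IN it appears to have some impact:
print(cp_in(0, reference))  # 30.0
```

The contrast at zero citations is exactly the disadvantage noted above: the inclusive clause credits uncited papers with a nonzero percentile whenever the reference set contains other uncited items.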
The averages for broad fields—and the percentile rank class thresholds—are calculated from all articles and reviews indexed in WoS and therefore reflect the complete citation performance in a broad field. Our journal superset data capture only a part of this performance.
A routine statistical analysis of the citation counts of all articles in all 3971 journal-year combinations from the superset corresponding to the 9812 PR3 papers shows skewness coefficients above 1 for all but 28 cases, above 2 for all but 784, and above 3 for no fewer than 1978.
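The skewness coefficient referred to above can be computed as the Fisher–Pearson moment coefficient, a standard choice for such routine analyses (the paper does not specify which variant was used, so this is an assumption, and the citation counts below are invented for illustration):

```python
def skewness(xs):
    """Fisher-Pearson moment coefficient of skewness, g1 = m3 / m2**1.5,
    where m2 and m3 are the second and third central moments."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # variance (population form)
    m3 = sum((x - mean) ** 3 for x in xs) / n  # third central moment
    return m3 / m2 ** 1.5

# A typical right-skewed citation distribution (invented counts):
# most papers gather few citations, one gathers many.
citations = [0, 0, 1, 1, 2, 2, 3, 5, 9, 40]
print(skewness(citations))  # well above 1, i.e. strongly right-skewed
```

A coefficient above 1 on almost every journal-year set is what makes journal averages such as the impact factor unrepresentative of the typical article, which is the methodological point at issue.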
References
Aagaard, K. (2015). How incentives trickle down: Local use of a national bibliometric indicator system. Science and Public Policy, 42(5), 725–737. https://doi.org/10.1093/scipol/scu087
Aboal, D., & Tacsir, E. (2017). The impact of subsidies on researcher’s productivity: Evidence from a developing country. Research Evaluation, 26(4), 269–283. https://doi.org/10.1093/reseval/rvx031
Abritis, A., & McCook, A. (2017). Cash incentives for papers go global. Science, 357(6351), 541. https://doi.org/10.1126/science.357.6351.541
Albarrán, P., Crespo, J. A., Ortuño, I., & Ruiz-Castillo, J. (2011). The skewness of science in 219 sub-fields and a number of aggregates. Scientometrics, 88(2), 385–397. https://doi.org/10.1007/s11192-011-0407-9
Bak, H.-J., & Kim, D. H. (2019). The unintended consequences of performance-based incentives on inequality in scientists’ research performance. Science and Public Policy, 46(2), 219–231. https://doi.org/10.1093/scipol/scy052
Bergstrom, C. (2007). Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5), 314–316. https://doi.org/10.5860/crln.68.5.7804
Bergstrom, C., West, J., & Wiseman, M. (2008). The Eigenfactor™ metrics. Journal of Neuroscience, 28(45), 11433–11434. https://doi.org/10.1523/JNEUROSCI.0003-08.2008
Bornmann, L., & Pudovkin, A. I. (2017). The journal impact factor should not be discarded. Journal of Korean Medical Science, 32(2), 180–182. https://doi.org/10.3346/jkms.2017.32.2.180
Bornmann, L., & Williams, R. (2020). An evaluation of percentile measures of citation impact, and a proposal for making them better. Scientometrics, 124(2), 1457–1478. https://doi.org/10.1007/s11192-020-03512-7
Bornmann, L., Leydesdorff, L., & Mutz, R. (2013). The use of percentiles and percentile rank classes in the analysis of bibliometric data: Opportunities and limits. Journal of Informetrics, 7(1), 158–165. https://doi.org/10.1016/j.joi.2012.10.001
Braun, T., & Glänzel, W. (1996). International collaboration: Will it be keeping alive East European research? Scientometrics, 36(2), 247–254. https://doi.org/10.1007/BF02017317
Clarivate Analytics. (2018). InCites indicators handbook. http://help.incites.clarivate.com/inCites2Live/8980-TRS/version/default/part/AttachmentData/data/InCites-Indicators-Handbook-June2018.pdf
Cleere, L., & Ma, L. (2018). A local adaptation in an output-based research support scheme (OBRSS) at University College Dublin. Journal of Data and Information Science, 3(4), 74–84. https://doi.org/10.2478/jdis-2018-0022
Curry, S. (2018). Let’s move beyond the rhetoric: It’s time to change how we judge research. Nature, 554(7691), 147. https://doi.org/10.1038/d41586-018-01642-w
Cutas, D., & Shaw, D. (2015). Writers blocked: On the wrongs of research co-authorship and some possible strategies for improvement. Science and Engineering Ethics, 21(5), 1315–1329. https://doi.org/10.1007/s11948-014-9606-0
Demir, S. B. (2018). Pros and cons of the new financial support policy for Turkish researchers. Scientometrics, 116(3), 2053–2068. https://doi.org/10.1007/s11192-018-2833-4
European Commission. (2018). Science, research and innovation performance of the EU 2018: Strengthening the foundations for Europe’s future. https://doi.org/10.2777/14136
Franceschet, M. (2010). Ten good reasons to use the Eigenfactor™ metrics. Information Processing & Management, 46(5), 555–558. https://doi.org/10.1016/j.ipm.2010.01.001
Franzoni, C., Scellato, G., & Stephan, P. (2011). Changing incentives to publish. Science, 333(6043), 702–703. https://doi.org/10.1126/science.1197286
Geuna, A., & Martin, B. (2003). University research evaluation and funding: an international comparison. Minerva, 41, 277–304.
Gingras, Y. (2016). Bibliometrics and research evaluation: Uses and abuses. The MIT Press.
Good, B., Vermeulen, N., Tiefenthaler, B., & Arnold, E. (2015). Counting quality? The Czech performance-based research funding system. Research Evaluation, 24(2), 91–105. https://doi.org/10.1093/reseval/rvu035
Hammarfelt, B., & de Rijcke, S. (2015). Accountability in context: Effects of research evaluation systems on publication practices, disciplinary norms, and individual working routines in the Faculty of Arts at Uppsala University. Research Evaluation, 24(1), 63–77. https://doi.org/10.1093/reseval/rvu029
Hedding, D. W. (2019). Payouts push professors towards predatory journals. Nature, 565(7739), 267. https://doi.org/10.1038/d41586-019-00120-1
Heywood, J. S., Wei, X., & Ye, G. (2011). Piece rates for professors. Economics Letters, 113(3), 285–287. https://doi.org/10.1016/j.econlet.2011.08.005
Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41(2), 251–261. https://doi.org/10.1016/j.respol.2011.09.007
Hicks, D., Wouters, P., Waltman, L., de Rijcke, S., & Rafols, I. (2015). The Leiden Manifesto for research metrics. Nature, 520(7548), 9–11. https://doi.org/10.1038/520429a
Ioannidis, J. P. A., Klavans, R., & Boyack, K. W. (2018). Thousands of scientists publish a paper every five days. Nature, 561(7722), 167–169. https://doi.org/10.1038/d41586-018-06185-8
Jiménez-Contreras, E., de Moya Anegón, F., & López-Cózar, E. D. (2003). The evolution of research activity in Spain. Research Policy, 32(1), 123–142. https://doi.org/10.1016/S0048-7333(02)00008-2
Kim, D. H., & Bak, H.-J. (2016). How do scientists respond to performance-based incentives? Evidence from South Korea. International Public Management Journal, 19(1), 31–52. https://doi.org/10.1080/10967494.2015.1032460
Korytkowski, P., & Kulczycki, E. (2019). Examining how country-level science policy shapes publication patterns: The case of Poland. Scientometrics, 119(3), 1519–1543. https://doi.org/10.1007/s11192-019-03092-1
Kozak, M., Bornmann, L., & Leydesdorff, L. (2014). How have the Eastern European countries of the former Warsaw Pact developed since 1990? A bibliometric study. Scientometrics, 102(2), 1101–1117. https://doi.org/10.1007/s11192-014-1439-8
Kozlowski, J., Radosevic, S., & Ircha, D. (1999). History matters: The inherited disciplinary structure of the post-communist science in countries of central and eastern Europe and its restructuring. Scientometrics, 45(1), 137–166. https://doi.org/10.1007/BF02458473
Kulczycki, E. (2017). Assessing publications through a bibliometric indicator: The case of comprehensive evaluation of scientific units in Poland. Research Evaluation, 26(1), 1–12. https://doi.org/10.1093/reseval/rvw023
Larivière, V., & Sugimoto, C. R. (2019). The journal impact factor: A brief history, critique, and discussion of adverse effects. In W. Glänzel, H. F. Moed, U. Schmoch, & M. Thelwall (Eds.), Springer handbook of science and technology indicators (pp. 3–24). Cham: Springer. https://doi.org/10.1007/978-3-030-02511-3_1
Larivière, V., Kiermer, V., MacCallum, C. J., McNutt, M., Patterson, M., Pulverer, B., et al. (2016). A simple proposal for the publication of journal citation distributions. bioRxiv. https://doi.org/10.1101/062109
Leydesdorff, L., Bornmann, L., & Adams, J. (2019). The integrated impact indicator revisited (I3*): A non-parametric alternative to the journal impact factor. Scientometrics, 119(3), 1669–1694. https://doi.org/10.1007/s11192-019-03099-8
Leydesdorff, L., Wouters, P., & Bornmann, L. (2016). Professional and citizen bibliometrics: Complementarities and ambivalences in the development and use. Scientometrics, 109(3), 2129–2150. https://doi.org/10.1007/s11192-016-2150-8
Liu, W., Hu, G., & Gu, M. (2016). The probability of publishing in first-quartile journals. Scientometrics, 106(3), 1273–1276. https://doi.org/10.1007/s11192-015-1821-1
Lozano, G. A., Larivière, V., & Gingras, Y. (2012). The weakening relationship between the impact factor and papers’ citations in the digital age. Journal of the American Society for Information Science and Technology, 63(11), 2140–2145. https://doi.org/10.1002/asi.22731
Ma, L. (2019). Money, morale, and motivation: A study of the output-based research support scheme in University College Dublin. Research Evaluation, 28(4), 304–312. https://doi.org/10.1093/reseval/rvz017
MacRoberts, M. H., & MacRoberts, B. R. (1996). Problems of citation analysis. Scientometrics, 36(3), 435–444. https://doi.org/10.1007/BF02129604
Milojević, S. (2020). Practical method to reclassify Web of Science articles into unique subject categories and broad disciplines. Quantitative Science Studies, 1(1), 183–206. https://doi.org/10.1162/qss_a_00014
Miranda, R., & Garcia-Carpintero, E. (2019). Comparison of the share of documents and citations from different quartile journals in 25 research areas. Scientometrics, 121(1), 479–501. https://doi.org/10.1007/s11192-019-03210-z
Miroiu, A., & Vlăsceanu, L. (2012). Relating quality and funding: The Romanian case. In A. Curaj, P. Scott, L. Vlăsceanu, & L. Wilson (Eds.), European higher education at the crossroads (pp. 791–807). Dordrecht: Springer, Netherlands. https://doi.org/10.1007/978-94-007-3937-6_41
Moed, H. F. (2007). The future of research evaluation rests with an intelligent combination of advanced metrics and transparent peer review. Science and Public Policy, 34(8), 575–583. https://doi.org/10.3152/030234207X255179
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277. https://doi.org/10.1016/j.joi.2010.01.002
Müller, R., & de Rijcke, S. (2017). Thinking with indicators. Exploring the epistemic impacts of academic performance indicators in the life sciences. Research Evaluation, 26(3), 157–168. https://doi.org/10.1093/reseval/rvx023
Must, Ü. (2006). “New” countries in Europe—Research, development and innovation strategies versus bibliometric data. Scientometrics, 66(2), 241–248. https://doi.org/10.1007/s11192-006-0016-1
Neff, M. W. (2018). Publication incentives undermine the utility of science: Ecological research in Mexico. Science and Public Policy, 45(2), 191–201. https://doi.org/10.1093/scipol/scx054
Osterloh, M., & Frey, B. S. (2020). How to avoid borrowed plumes in academia. Research Policy, 49(1), 103831. https://doi.org/10.1016/j.respol.2019.103831
Osuna, C., Cruz-Castro, L., & Sanz-Menéndez, L. (2011). Overturning some assumptions about the effects of evaluation systems on publication performance. Scientometrics, 86(3), 575–592. https://doi.org/10.1007/s11192-010-0312-7
Pajić, D. (2015). Globalization of the social sciences in Eastern Europe: Genuine breakthrough or a slippery slope of the research evaluation practice? Scientometrics, 102(3), 2131–2150. https://doi.org/10.1007/s11192-014-1510-5
Pendlebury, D. A. (2009). The use and misuse of journal metrics and other citation indicators. Archivum Immunologiae et Therapiae Experimentalis, 57(1), 1–11. https://doi.org/10.1007/s00005-009-0008-y
Perianes-Rodriguez, A., & Ruiz-Castillo, J. (2017). A comparison of the web of science and publication-level classification systems of science. Journal of Informetrics, 11(1), 32–45. https://doi.org/10.1016/j.joi.2016.10.007
Pisár, P., & Šipikal, M. (2017). Negative effects of performance based funding of universities: The case of Slovakia. NISPAcee Journal of Public Administration and Policy, 10(2), 171–189. https://doi.org/10.1515/nispa-2017-0017
Pisár, P., Šipikal, M., Jahoda, R., & Špaček, D. (2019). Performance based funding of universities: Czech Republic and Slovakia. In M. S. de Vries, J. Nemec, & D. Špaček (Eds.), Performance-based budgeting in the public sector (pp. 237–254). Cham: Palgrave Macmillan. https://doi.org/10.1007/978-3-030-02077-4_13
Pudovkin, A. I., & Garfield, E. (2002). Algorithmic procedure for finding semantically related journals. Journal of the American Society for Information Science and Technology, 53(13), 1113–1119. https://doi.org/10.1002/asi.10153
Quan, W., Chen, B., & Shu, F. (2017). Publish or impoverish. Aslib Journal of Information Management, 69(5), 486–502. https://doi.org/10.1108/AJIM-01-2017-0014
R Core Team. (2020). R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing. https://www.r-project.org/.
Ruiz-Castillo, J., & Costas, R. (2018). Individual and field citation distributions in 29 broad scientific fields. Journal of Informetrics, 12(3), 868–892. https://doi.org/10.1016/j.joi.2018.07.002
Ruiz-Castillo, J., & Waltman, L. (2015). Field-normalized citation impact indicators using algorithmically constructed classification systems of science. Journal of Informetrics, 9(1), 102–117. https://doi.org/10.1016/j.joi.2014.11.010
Sandoval-Romero, V., & Larivière, V. (2020). The national system of researchers in Mexico: Implications of publication incentives for researchers in social sciences. Scientometrics, 122(1), 99–126. https://doi.org/10.1007/s11192-019-03285-8
Sandström, U., & Van den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12(1), 365–384. https://doi.org/10.1016/j.joi.2018.01.007
Schneider, J. W. (2009). An outline of the bibliometric indicator used for performance-based funding of research institutions in Norway. European Political Science, 8(3), 364–378. https://doi.org/10.1057/eps.2009.19
Schubert, A., & Braun, T. (1996). Cross-field normalization of scientometric indicators. Scientometrics, 36(3), 311–324. https://doi.org/10.1007/BF02129597
Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43(9), 628–638. https://doi.org/10.1002/(SICI)1097-4571(199210)43:9%3c628::AID-ASI5%3e3.0.CO;2-0
Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating research. BMJ, 314(7079), 497. https://doi.org/10.1136/bmj.314.7079.497
Shu, F., Quan, W., Chen, B., Qiu, J., Sugimoto, C. R., & Larivière, V. (2020). The role of Web of Science publications in China’s tenure system. Scientometrics, 122(3), 1683–1695. https://doi.org/10.1007/s11192-019-03339-x
Sivertsen, G. (2016). Publication-based funding: The Norwegian model. In M. Ochsner, S. E. Hug, & H.-D. Daniel (Eds.), Research assessment in the humanities (pp. 79–90). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-29016-4_7
Teodorescu, D., & Andrei, T. (2011). The growth of international collaboration in East European scholarly communities: A bibliometric analysis of journal articles published between 1989 and 2009. Scientometrics, 89(2), 711–722. https://doi.org/10.1007/s11192-011-0466-y
Thelwall, M. (2020). Large publishing consortia produce higher citation impact research but coauthor contributions are hard to evaluate. Quantitative Science Studies, 1(1), 290–302. https://doi.org/10.1162/qss_a_00003
Tonta, Y. (2018). Does monetary support increase the number of scientific papers? An interrupted time series analysis. Journal of Data and Information Science, 3(1), 19–39. https://doi.org/10.2478/jdis-2018-0002
Tonta, Y., & Akbulut, M. (2020). Does monetary support increase citation impact of scholarly papers? Scientometrics, 125(2), 1617–1641. https://doi.org/10.1007/s11192-020-03688-y
Trow, M. (1994). Managerialism and the academic profession: The case of England. Higher Education Policy, 7(2), 11–18. https://doi.org/10.1057/hep.1994.13
Vanecek, J. (2014). The effect of performance-based research funding on output of R & D results in the Czech Republic. Scientometrics, 98(1), 657–681. https://doi.org/10.1007/s11192-013-1061-1
Vîiu, G.-A., Păunescu, M., & Miroiu, A. (2016). Research-driven classification and ranking in higher education: An empirical appraisal of a Romanian policy experience. Scientometrics, 107(2), 785–805. https://doi.org/10.1007/s11192-016-1860-2
Vîiu, G.-A., & Păunescu, M. (2021). The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation. Scientometrics. https://doi.org/10.1007/s11192-020-03801-1
Vinkler, P. (2008). Correlation between the structure of scientific research, scientometric indicators and GDP in EU and non-EU countries. Scientometrics, 74(2), 237–254. https://doi.org/10.1007/s11192-008-0215-z
Vlăsceanu, L., & Hâncean, M.-G. (2015). Policy incentives and research productivity in the Romanian higher education. An institutional approach. In A. Curaj, L. Matei, R. Pricopie, J. Salmi, & P. Scott (Eds.), The European higher education area (pp. 185–203). Springer International Publishing. https://doi.org/10.1007/978-3-319-20877-0_13
Waltman, L., & van Eck, N. J. (2010). The relation between Eigenfactor, audience factor, and influence weight. Journal of the American Society for Information Science and Technology, 61(7), 1476–1486. https://doi.org/10.1002/asi.21354
Wang, Q., & Waltman, L. (2016). Large-scale analysis of the accuracy of the journal classification systems of Web of Science and Scopus. Journal of Informetrics, 10(2), 347–364. https://doi.org/10.1016/j.joi.2016.02.003
Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. Springer-Verlag.
Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., et al. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management. https://doi.org/10.13140/RG.2.1.4929.1363
Acknowledgements
Anonymous reviewer comments that helped to clarify and improve aspects of the initial manuscript are gratefully acknowledged by the authors. This paper was financially supported by the Human Capital Operational Program 2014-2020, co-financed by the European Social Fund, under the project POCU/380/6/13/124708 No. 37141/23.05.2019 with the title “Researcher-Entrepreneur on Labour Market in the Fields of Intelligent Specialization (CERT-ANTREP)”, coordinated by the National University of Political Studies and Public Administration.
Vîiu, GA., Păunescu, M. The citation impact of articles from which authors gained monetary rewards based on journal metrics. Scientometrics 126, 4941–4974 (2021). https://doi.org/10.1007/s11192-021-03944-9