
The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation

Published in: Scientometrics

Abstract

Journal impact factor (JIF) quartiles are often used as a convenient means of conducting research evaluation, abstracting away from the underlying JIF values. We highlight and investigate an intrinsic problem with this approach: the differences between JIF values at the quartile boundaries are usually very small, and often so small that journals in different quartiles cannot be considered meaningfully different with respect to impact. By systematically investigating JIF values in recent editions of the Journal Citation Reports (JCR) we determine that it is typical for between 10% and 30% of the journals in a JCR category to be poorly differentiated; social science categories are more affected than science categories. However, this global result conceals important variation, and we therefore also provide a detailed account of poor quartile boundary differentiation by constructing in-depth local quartile similarity profiles for each JCR category. Further systematic analyses show that poor quartile boundary differentiation tends to accompany poor overall differentiation, which naturally varies by field. In addition, in most categories the journals that experience a quartile shift are the same journals that are poorly differentiated. Our work provides sui generis documentation of the continuing phenomenon of impact factor inflation; it also explains and reinforces some recent findings on the ranking stability of journals and on the JIF-based comparison of papers. Conceptually, there is a fundamental problem in the fact that JIF quartile classes artificially magnify underlying differences that can be insignificant; we argue that the singular use of JIF quartiles is in fact a second-order ecological fallacy. We recommend abandoning the reification of quartiles as an independent method for the research assessment of individual scholars.
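The boundary problem described above can be illustrated with a minimal sketch (not the authors' code: the rank-based quartile-assignment rule and the example JIF values are simplifying assumptions; the δ = 0.1 threshold follows the one used in note 7 below):

```python
# Illustrative sketch, not the authors' code: assign rank-based quartile
# classes to hypothetical JIF values and inspect the gaps at the
# quartile boundaries. delta = 0.1 follows the threshold used in note 7.

def quartile_classes(jifs):
    """Rank journals by JIF (descending) and split the ranks into four quarters."""
    ranked = sorted(jifs, reverse=True)
    n = len(ranked)
    classes = {1: [], 2: [], 3: [], 4: []}
    for i, jif in enumerate(ranked):
        classes[min(4, i * 4 // n + 1)].append(jif)
    return classes

def boundary_gaps(classes):
    """Gap between the last journal of one quartile and the first of the next."""
    return {f"Q{q}/Q{q + 1}": classes[q][-1] - classes[q + 1][0] for q in (1, 2, 3)}

jifs = [5.1, 2.9, 2.85, 2.2, 2.1, 2.05, 1.4, 0.9]  # hypothetical category
delta = 0.1
gaps = boundary_gaps(quartile_classes(jifs))
poorly_differentiated = {b: g for b, g in gaps.items() if g < delta}
```

With these values the last-ranked Q1 journal (2.9) and the first-ranked Q2 journal (2.85) differ by only 0.05, so the quartile labels suggest a distinction the underlying JIFs do not support.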


Figures 1–8 (available in the full article).


Notes

  1. Since 2018 China has surpassed the United States of America and is the top-ranking country with respect to research articles and reviews indexed in the Science Citation Index Expanded (Liu 2020). The above-mentioned policies have no doubt contributed to this performance. However, Zhu (2020) has recently reported that 2020 is a year of major reform for the evaluation of research in China: recent policy documents jointly issued by the Ministry of Education and the Ministry of Science and Technology explicitly de-emphasize Science Citation Index publications.

  2. It is worth noting in passing that Elsevier launched in late 2016 a direct rival to the JIF in the form of the “CiteScore” (Zijlstra and McCullough 2016) together with an accompanying set of CiteScore metrics. The use of a compromise three-year citation window (recently changed to four years) was the main argument offered in support of the alternative metric’s “robust approach”.

  3. From the current JCR scope notes for the category box plot, this specific article appears to be the inspiration for the current implementation of the JIF quartile classes as a way to compare journal performance across categories.

  4. The account provided in this paragraph follows the description provided in the InCites Indicators Handbook (Clarivate Analytics 2018, p. 10).

  5. Hypothetical JIF values were generated in the R language and environment for statistical computing (R Core Team 2020) and follow a standard lognormal distribution; the lognormal distribution is known to adequately approximate various scientometric quantities, especially citations (Brito and Rodríguez-Navarro 2018; Thelwall 2016; Vîiu 2018).
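The simulation described in this note can be sketched as follows (the authors worked in R with `rlnorm`; this Python equivalent uses the standard library's lognormal generator, and the category size of 26 is an assumption echoing the fictitious category of Table 1):

```python
# Sketch of the simulation described in note 5, in Python rather than the
# authors' R: draw hypothetical JIF values from a standard lognormal
# distribution (meanlog = 0, sdlog = 1) and sort them as a journal ranking.

import random

random.seed(2020)  # fixed seed so the illustration is reproducible
hypothetical_jifs = sorted(
    (round(random.lognormvariate(0.0, 1.0), 3) for _ in range(26)),
    reverse=True,
)
```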

  6. See (Hintze and Nelson 1998) for details. All figures in the paper were created in R with the ggplot2 package (Wickham 2016).

  7. For the fictitious category from Table 1 there are (26² − 26) / 2 = 325 JIF differences. Only 6.77% of these are not meaningful with δ = 0.1. Some of these non-meaningful differences are precisely those at the quartile boundaries (Q1, Q2) and (Q3, Q4), which lead to the 23% poorly differentiated journals in the category.
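The counting in this note can be checked mechanically with a small sketch (the five JIF values below are hypothetical stand-ins, since Table 1 is not reproduced here):

```python
# Count all pairwise JIF differences in a category and the share below
# delta. For n journals there are (n**2 - n) / 2 such differences, the
# quantity used in note 7; the JIF values below are hypothetical.

from itertools import combinations

def pairwise_difference_stats(jifs, delta=0.1):
    """Number of pairwise JIF differences and the share smaller than delta."""
    diffs = [abs(a - b) for a, b in combinations(jifs, 2)]
    return len(diffs), sum(d < delta for d in diffs) / len(diffs)

count, share = pairwise_difference_stats([2.90, 2.85, 2.10, 2.05, 1.40])
# 5 journals give (5**2 - 5) / 2 = 10 differences, 2 of them below 0.1
```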

  8. This can also be seen in the LQS profiles of the subject categories in Electronic Supplementary Material 2.

  9. Although not our primary line of inquiry, we also looked at the evolution of mean JIF values in each JCR category between 2015 and 2018: all except 8 of the 234 categories that could be compared experienced an increase in the mean JIF, typically ranging between 12 and 43%.


Acknowledgements

The authors express their gratitude to the anonymous reviewers whose comments helped to improve significant aspects of the initial manuscript. This paper was financially supported by the Human Capital Operational Program 2014-2020, co-financed by the European Social Fund, under the project POCU/380/6/13/124708 no. 37141/23.05.2019 with the title “Researcher-Entrepreneur on Labour Market in the Fields of Intelligent Specialization (CERT-ANTREP)”, coordinated by the National University of Political Studies and Public Administration.

Author information


Corresponding author

Correspondence to Gabriel-Alexandru Vȋiu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file1 (CSV 327 kb)

Supplementary file2 (PDF 181 kb)


About this article


Cite this article

Vȋiu, GA., Păunescu, M. The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation. Scientometrics 126, 1495–1525 (2021). https://doi.org/10.1007/s11192-020-03801-1

