
The journal impact factor: angel, devil, or scapegoat? A comment on J.K. Vanclay’s article 2011

Published in Scientometrics.

Abstract

J.K. Vanclay’s article is a bold attempt to review recent work on the journal impact factor (JIF) and to call for alternative certifications of journals. Its overly broad scope did not allow the author to fulfill all his purposes. Attempting, after many others, to organize the various forms of criticism, whose targets are often broader than the JIF itself, we comment on a few points, hoping to infer in which cases the JIF is an angel, a devil, or a scapegoat. We also expand on a crucial question that Vanclay could not really develop in the short article format: field normalization. After a brief review of classical cited-side (ex post) normalization and of the powerful influence measures, we devote some attention to the novel approach of citing-side (ex ante) normalization, not only for its own interest, but because it proceeds directly from disassembling the JIF clockwork.


Notes

  1. JKV in what follows.

  2. For example, in the category "distribution", one might infer that non-normality is a recent finding, whereas it is the basis of one of the most cited papers on the JIF (Seglen), which is filed in another category. Fortunately, some older references survived the filtering.

  3. http://scientific.thomson.com/free/essays/citationindexing/history/.

  4. Individual-level productivity modeling is now partly directed towards the predictive capability of the h-index (Hirsch 2007), a matter of controversy (Hönekopp and Khan 2011). In certain areas, the presence of star scientists is held as a predictor not only of future scientific success but also of industrial success and radical change (in biology: Zucker and Darby 1996).

  5. Four conjectures on impact factors (Jacques Ninio). Each statistical indicator entails biases which need to be identified and corrected. However, in the evaluation of scientific research, popularity tests are used as substitutes for quality tests, a practice which penalizes our most original productions.

    Conjecture 1: The impact factor of a journal is directly correlated with the incompetence of those who cite it [this argument targets multidisciplinary journals]. Conjecture 2: Impact factors are directly correlated with lack of originality. Conjecture 3: The impact factor of a journal is inversely correlated with the longevity of the articles it publishes. Conjecture 4: The impact factor of a journal is directly correlated with its rate of fraudulent articles.

    Source: www.lps.ens.fr/~ninio/impact-factor.htm.

  6. In Adler, Ewing and Taylor, Report of the International Mathematical Union (Adler et al. 2008; see also Adler et al. 2009): “While numbers appear to be ‘objective’, their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. The sole reliance on citation data provides at best an incomplete and often shallow understanding of research—an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.”

    Source: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf.

  7. A classical way to express this is to see an actor’s real impact as the product of the “expected impact” (the actor’s impact factor, depending only on the journals of publication) and the “relative citation rate” (RCR, Schubert and Braun 1986), a particular form of journal-level normalized indicator with neutral value 1 (the decomposition is not unique). An RCR-type indicator normalized at another level (fields) is the CWTS “crown indicator” in its original form.
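    As a minimal sketch of this decomposition (function and variable names are ours, not from the cited works):

```python
def decompose_impact(citations_per_paper, journal_if_per_paper):
    """Split an actor's observed impact into 'expected impact' x RCR.

    citations_per_paper: citations received by each of the actor's papers
    journal_if_per_paper: impact factor of the journal each paper appeared in
    """
    observed = sum(citations_per_paper) / len(citations_per_paper)
    expected = sum(journal_if_per_paper) / len(journal_if_per_paper)
    rcr = observed / expected  # neutral value: 1.0
    return expected, rcr
```

    For instance, an actor with citations per paper [4, 2, 6], published entirely in journals of impact factor 2, has expected impact 2 and RCR 2: cited twice as much as its journals of publication would predict.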

  8. At the very least, they discourage copy-paste and blunt forms of plagiarism. Sophisticated forms may trigger a race, in computational-linguistics applications, between anti-plagiarism tools and text-transformation tools.

  9. Quite difficult to interpret, however, if only because of authors' anticipations.

  10. OST Paris made this choice, for example, in 1992, after other producers.

  11. On a systematic treatment of these differences, see for example Ingwersen et al. (2001).

  12. The statistical relation of the h-index to JIF and size is studied in Glänzel and Schubert (2007).

  13. Bibliometric usage tends to include notes and letters, depending on the database, as citable documents. Especially for borderline types, the qualification of a given document may differ between the journal publisher and the databases, and among databases.

  14. The selection process used to be criticized for a bias against non-English-language journals, European journals, and emerging journals. The situation seems to have improved, perhaps owing to competition, and also to the fact that good-quality European and non-mainstream journals turned to English and international openness. Another limitation, quite difficult to cope with, is the possible lack of coverage of small fields in applied science, which are often importers of knowledge and poorly visible from other specialties.

  15. For an actor or a journal i in a field J, the quotient of the actor’s impact to the reference’s (e.g., world) impact.
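    In its simplest per-paper form, this cited-side (ex post) normalization can be sketched as follows (names are ours; the reference set here is the whole field):

```python
def relative_impact(actor_citations, actor_papers, field_citations, field_papers):
    # Actor's citations per paper divided by the reference's (e.g., the
    # world's) citations per paper in the same field J. Neutral value: 1.0.
    actor_cpp = actor_citations / actor_papers
    field_cpp = field_citations / field_papers
    return actor_cpp / field_cpp
```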

  16. This solution, in addition to the relative impacts calculated at each level, was implemented by S. Ramanana-Rahary at OST.

  17. "Fractional citations" were applied by Small and Sweeney (1985) to the metrics of co-citation mapping; this is not to be confused with the fractional counting of citations among the co-authors of a paper. To the best of our knowledge, the first mention of citing-side normalization for impact calculation, not embedded in influence flows, appears in Zitt et al. (2005, op. cit.).

  18. The only one cited by JKV. This is all the more regrettable because the papers by Leydesdorff on the topic, so far, convey a slightly subjective view of the history of the research front.

  19. Although the terms citing-side, fractional, source-level, etc., are equivalent in principle, they were often coined along with particular methodological choices.

  20. In the original Audience Factor, the journal level is used on both the citing and the cited side. Citations emitted by a citing journal are weighted in inverse proportion to the average length of the bibliographies in that journal’s articles. Most other developments use a finer granularity on the citing side.
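    A simplified sketch of this citing-side (ex ante) weighting, in the spirit of the Audience Factor (variable names and the choice of the propensity constant are ours; the published indicator involves further methodological choices):

```python
def audience_factor(cites_from, avg_biblio_len, n_cited_articles):
    """Citing-side weighted impact of one cited journal.

    cites_from: {citing_journal: citations emitted to the cited journal}
    avg_biblio_len: {citing_journal: average bibliography length of its articles}
    n_cited_articles: number of citable articles in the cited journal
    """
    # Reference propensity constant: here, the mean bibliography length
    # over the citing journals (one of several possible choices).
    m = sum(avg_biblio_len.values()) / len(avg_biblio_len)
    # Each citation is weighted in inverse proportion to the average
    # bibliography length of its journal of origin.
    weighted = sum(c * m / avg_biblio_len[j] for j, c in cites_from.items())
    return weighted / n_cited_articles
```

    A citation from a journal with short bibliographies thus counts for more than one from a journal with long bibliographies, damping field differences in citation propensity at the source.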


References

  • Adams, J., Gurney, K., & Marshall, S. (2007). Profiling citation impact: a new methodology. Scientometrics, 72(2), 325–344.

  • Adler, R., Ewing, J., & Taylor, P. (2008). Citation statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council for Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Summarized in (2009). Statistical Science, 24(1), 1–14.

  • Bar-Ilan, J. (2008). Which h-index? – A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257–271.

  • Bergstrom, C. (2007). Eigenfactor: measuring the value and prestige of scholarly journals. College & Research Libraries News, 68, 5, www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may2007/eigenfactor.cfm.

  • Bollen, J., Van de Sompel, H., Hagberg, A., & Chute, R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE, 4(6), e6022. doi:10.1371/journal.pone.0006022.

  • Bourdieu, P. (1975). The specificity of the scientific field and the social conditions of the progress of reason. Social Science Information, 14(6), 19–47.

  • Bouyssou, D., & Marchant, T. (2011). Bibliometric rankings of journals based on impact factors: an axiomatic approach. Journal of Informetrics, 5(1), 75–86. doi:10.1016/j.joi.2010.09.001.

  • Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics, 69, 169–173.

  • Callon, M., & Latour, B. (1981). Unscrewing the big leviathan: how actors macrostructure reality and how sociologists help them to do so. In K. D. Knorr-Cetina & A. V. Cicourel (Eds.), Advances in social theory and methodology: toward an integration of micro- and macro-sociologies (pp. 277–303). Boston: Routledge and Kegan Paul.

  • Cronin, B. (1984). The citation process: the role and significance of citations in scientific communication. London: Taylor Graham.

  • Czapski, G. (1997). The use of deciles of the citation impact to evaluate different fields of research in Israel. Scientometrics, 40(3), 437–443.

  • de Moya-Anegon, F. (2007). SCImago. SJR — SCImago Journal & Country Rank.

  • de Solla Price, D. J. (1963). Little science, big science. New York: Columbia University Press.

  • Garfield, E. (1955). Citation Indexes for Science. A new dimension in documentation through association of ideas. Science, 122, 108–111.

  • Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479.

  • Garfield, E. (2006). The history and meaning of the journal impact factor. Journal of the American Medical Association, 295, 90–93.

  • Garfield, E., & Sher, I. H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14(3), 195–201.

  • Geller, N. L. (1978). Citation influence methodology of Pinski and Narin. Information Processing and Management, 14, 93–95.

  • Glänzel, W. (2008). On some new bibliometric applications of statistics related to the h-index. Scientometrics, 77(1), 187–196.

  • Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.

  • Glänzel, W., Schubert, A., Thijs, B., & Debackere, K. (2011). A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. Scientometrics, 87(2), 415–424.

  • Hagström, W. O. (1965). The scientific community. New York: Basic Books.

  • Hicks, D. (2004). The four literatures of social science. In H. Moed, W. Glanzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. New York: Kluwer Academic.

  • Hirsch, J. E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences, 104(49), 19193–19198.

  • Hoeffel, C. (1998). Journal impact factors. Allergy, 53, 1225.

  • Hönekopp, J., & Khan, J. (2011). Future publication success in science is better predicted by traditional measures than by the h index. Scientometrics, 90(3), 843–853.

  • Ingwersen, P., Larsen, B., Rousseau, R., & Russell, J. (2001). The publication-citation matrix and its derived quantities. Chinese Science Bulletin, 46(6), 524–528.

  • Katz, S. J. (1999). The self-similar science system. Research policy, 28(5), 501–517.

  • Leydesdorff, L., & Opthof, T. (2010). Scopus’s source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, 61(11), 2365–2396.

  • Lundberg, J. (2007). Lifting the crown: citation z-score. Journal of Informetrics, 1(2), 145–154.

  • Luukkonen, T. (1997). Why has Latour’s theory of citations been ignored by the bibliometric community? Discussion of sociological interpretations of citation analysis. Scientometrics, 38(1), 27–37.

  • Marchant, T. (2009). An axiomatic characterization of the ranking based on the h-index and some other bibliometric rankings of authors. Scientometrics, 80(2), 325–342.

  • Marshakova-Shaikevich, I. (1996). The standard impact factor as an evaluation tool of science fields and scientific journals. Scientometrics, 35(2), 283–290.

  • Merton, R.K. (1942). Science and technology in a democratic order. Journal of legal and political sociology, 1, 115–126 (reprint: The normative structure of science (1973). In Storer N.W. (ed.), The sociology of science: theoretical and empirical investigations (pp. 1267–1278). Chicago: University of Chicago Press).

  • Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.

  • Moed, H. F., & van Leeuwen, T. N. (1995). Improving the accuracy of institute for scientific information’s journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467.

  • Moed, H. F., & Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15, 95–107.

  • Murugesan, P., & Moravcsik, M. J. (1978). Variation of the nature of citation measures with journal and scientific specialties. Journal of the American Society for Information Science, 29(3), 141–155.

  • Narin, F. (1976). Evaluative bibliometrics: the use of publication and citation analysis in the evaluation of scientific activity (Report prepared for the National Science Foundation, Contract NSF C-627). Cherry Hill: Computer Horizons.

  • Nicolaisen, J., & Frandsen, T. F. (2008). The reference return ratio. Journal of Informetrics, 2(2), 128–135. doi:10.1016/j.joi.2007.12.001.

  • Palacios Huerta, I., & Volij, O. (2004). The Measurement of intellectual influence. Econometrica, 72(3), 963–977.

  • Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, 12, 297–312.

  • Radicchi, F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: towards an objective measure of citation impact. Proceedings of the National Academy of Sciences, 105(45), 17268–17272.

  • Ramanana-Rahary, S., Zitt, M., & Rousseau, R. (2009). Aggregation properties of relative impact and other classical indicators: convexity issues and the Yule-Simpson paradox. Scientometrics, 79(1–2), 311–327.

  • Rousseau, R. (2008). Woeginger’s axiomatisation of the h-index and its relation to the g-index, the h(2)-index and the r2-index. Journal of Informetrics, 2(4), 335–340.

  • Rousseau, R., & Egghe, L. (2003). A general framework for relative impact indicators. Canadian Journal of Information and Library Science, 27(1), 29–48.

  • Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5–6), 281–291.

  • Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628–638.

  • Sen, B. K. (1992). Documentation note: normalized impact factor. Journal of Documentation, 48(3), 318–325.

  • Small, H., & Sweeney, E. (1985). Clustering the science citation index using co-citations: 1. A comparison of methods. Scientometrics, 7(3–6), 391–409.

  • Van Raan, A. F. J. (2000). On growth, ageing, and fractal differentiation of science. Scientometrics, 47(2), 347–362.

  • van Raan, A. F. J. (2001). Competition amongst scientists for publication status: toward a model of scientific publication and citation distributions. Scientometrics, 51(1), 347–357.

  • Vanclay, J. K. (2009). Bias in the journal impact factor. Scientometrics, 78(1), 3–12.

  • Vanclay, J. K. (2011). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. doi:10.1007/s11192-011-0561-0.

  • Vieira, E. S., & Gomez, J. A. N. F. (2011). The journal relative impact: an indicator for journal assessment. Scientometrics, 89(2), 631–651.

  • Vinkler, P. (2002). Subfield problems in applying the Garfield (impact) factors in practice. Scientometrics, 53(2), 267–279.

  • Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011). Towards a new crown indicator: some theoretical considerations. Journal of Informetrics, 5, 37–47.

  • Waltman, L., & van Eck, N. J. (2009). A taxonomy of bibliometric performance indicators based on the property of consistency. Proceedings of the 12th International Conference on Scientometrics and Informetrics, 1002–1003.

  • Wouters, P. (1997). Citation cycles and peer review cycles. Scientometrics, 38(1), 39–55.

  • Zitt, M. (2010). Citing-side normalization of journal impact: a robust variant of the audience factor. Journal of Informetrics, 4(3), 392–406.

  • Zitt, M. (2011). Behind citing-side normalization of citations: some properties of the journal impact factor. Scientometrics, 89(1), 329–344.

  • Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2003). Correcting glasses help fair comparisons in international science landscape: country indicators as a function of ISI database delineation. Scientometrics, 56(2), 259–282.

  • Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2005). Relativity of citation performance and excellence measures: from cross-field to cross-scale effects of field-normalisation. Scientometrics, 63(2), 373–401.

  • Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: the audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.

  • Zucker, L. G., & Darby, M. R. (1996). Star scientists and institutional transformation: patterns of invention and innovation in the formation of the biotechnology industry. Proceedings of the National Academy of Sciences, 93(23), 12709–12716.

Acknowledgments

The author thanks S. Ramanana-Rahary and E. Bassecoulard for their help.

Corresponding author

Correspondence to Michel Zitt.

Cite this article

Zitt, M. The journal impact factor: angel, devil, or scapegoat? A comment on J.K. Vanclay’s article 2011. Scientometrics 92, 485–503 (2012). https://doi.org/10.1007/s11192-012-0697-6

