Abstract
J.K. Vanclay’s article is a bold attempt to review recent work on the journal impact factor (JIF) and to call for alternative certifications of journals. Its overly broad scope did not allow the author to fulfill all of his aims. Attempting, after many others, to organize the various forms of criticism, whose targets are often broader than the JIF itself, we shall comment on a few points. This will hopefully enable us to infer in which cases the JIF is an angel, a devil, or a scapegoat. We shall also expand on a crucial question that Vanclay could not really develop within the reduced article format: field normalization. After a brief review of classical cited-side (ex post) normalization and of the powerful influence measures, we will devote some attention to the novel citing-side (ex ante) normalization, not only for its own interest but because it proceeds directly from disassembling the JIF clockwork.
Notes
JKV in what follows.
For example, in the category "distribution", one might infer that non-normality is a recent finding, whereas it is the basis of one of the most cited papers on the JIF (Seglen), placed in another category. Fortunately, some older references survived the filtering.
Individual-level productivity modeling is now partly directed towards the predictive capability of the h-index (Hirsch 2007), a matter of controversy (Hönekopp and Khan 2011). In certain areas, the presence of star scientists is held as a predictor not only of future scientific success but also of industrial success and radical change (in biology: Zucker and Darby 1996).
Four conjectures on impact factors (Jacques Ninio). Each statistical indicator entails biases which need to be identified and corrected. However, in the evaluation of scientific research, popularity tests are used as substitutes for quality tests, a practice which penalizes our most original productions.
Conjecture 1: The impact factor of a journal is directly correlated with the incompetence of those who cite it [an argument against multidisciplinary journals].
Conjecture 2: Impact factors are directly correlated with lack of originality.
Conjecture 3: The impact factor of a journal is inversely correlated with the longevity of the articles it publishes.
Conjecture 4: The impact factor of a journal is directly correlated with its rate of fraudulent articles.
Source: www.lps.ens.fr/~ninio/impact-factor.htm.
In Adler, Ewing and Taylor, Report of the International Mathematical Union (Adler et al. 2008; see also Adler et al. 2009): “While numbers appear to be ‘objective’, their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. The sole reliance on citation data provides at best an incomplete and often shallow understanding of research—an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.”
Source: http://www.mathunion.org/fileadmin/IMU/Report/CitationStatistics.pdf.
A classical way to express this is to see an actor’s real impact as the product of the “expected impact” (the actor’s impact factor, depending only on the journals of publication) and the “relative citation rate” (RCR; Schubert and Braun 1986), a particular form of journal-level normalized indicator with neutral value 1 (the decomposition is not unique). An RCR-type indicator, normalized at another level (fields), is the CWTS “crown indicator” in its original form.
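The decomposition above can be sketched numerically. The following is an illustrative sketch, not the published algorithm: it computes an RCR-style ratio of an actor's observed citation rate to the "expected" rate implied by the journals of publication; the function name and data layout are our own.

```python
# Hedged sketch of a relative-citation-rate (RCR) style indicator:
# observed citations per paper divided by the citation rate "expected"
# from the publishing journals. Neutral value is 1.

def rcr(papers):
    """papers: list of (citations_received, journal_expected_citations) pairs.
    Returns observed mean citation rate / expected mean citation rate."""
    observed = sum(c for c, _ in papers) / len(papers)
    expected = sum(e for _, e in papers) / len(papers)
    return observed / expected

# Three papers: actual citations vs. the journal's average citations per paper.
# Here observed and expected rates coincide, so RCR sits at its neutral value.
sample = [(10, 8.0), (4, 5.0), (6, 7.0)]
print(rcr(sample))  # 1.0
```

The actor's "real impact" then factors as expected impact times this ratio, which is why the decomposition is journal-dependent and not unique.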
At the least, they discourage copy-paste and blunt forms of plagiarism. Sophisticated forms may trigger a race, in computational-linguistics applications, between anti-plagiarism tools and text-transformation tools.
Quite difficult to interpret, however, if only because of authors' anticipations.
OST Paris, for example, made this choice in 1992, after other producers.
For a systematic treatment of these differences, see for example Ingwersen et al. (2001).
The statistical relation of the h-index to JIF and size is studied in Glänzel and Schubert (2007).
Bibliometric usage tends to include notes and letters, depending on the database, as citable documents. Especially for borderline types, the qualification of a given document may differ between the journal publisher and the databases, and among databases.
The selection process used to be criticized for a bias against non-English-speaking journals, European journals, and emerging journals. The situation seems to have improved, perhaps due to competition, and also because European and non-mainstream journals of good quality turned to English and international openness. Another limitation, quite difficult to cope with, is the possible lack of coverage of small fields in applied science, which often import knowledge and are not well seen from other specialties.
For an actor or a journal i in a field J, the quotient of the actor's impact to the reference's (e.g., world) impact.
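This quotient can be written out in a minimal sketch, assuming impact is measured as citations per paper; the function name and figures are illustrative, not taken from the article.

```python
# Hedged sketch of a field-normalized relative impact: the actor's citation
# rate in field J divided by the reference (e.g., world) citation rate in J.

def relative_impact(actor_citations, actor_papers, world_citations, world_papers):
    return (actor_citations / actor_papers) / (world_citations / world_papers)

# An actor averaging 6 cites/paper in a field where the world averages 4:
print(relative_impact(60, 10, 4000, 1000))  # 1.5
```

A value above 1 means the actor outperforms the field reference; the choice of reference set (world, database, country) is part of the normalization design.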
This solution, in addition to the relative impacts calculated at each level, was implemented by S. Ramanana-Rahary at OST.
"Fractional citations" were applied by Small and Sweeney (1985) to the metrics of co-citation mapping not to be confused with the fractional count of citations to multiple co-authors in a paper. To our best knowledge, the mention of citing-side normalization for impact calculation, not embedded in influence flows, appears in Zitt et al. (2005 op. cit).
The only one cited by JKV. This is all the more regrettable as the papers by Leydesdorff on the topic have, so far, shown a slightly subjective view of the history of the research front.
While the terms citing-side, fractional, source-level, etc., are equivalent in principle, they were often coined along with particular methodological choices.
In the original Audience Factor, the journal level is used on both the citing and cited sides. Citations emitted by a citing journal are weighted in inverse proportion to the average length of the bibliographies in that journal's articles. Most other developments use a finer granularity on the citing side.
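The weighting principle can be sketched as follows. This is a simplified illustration of the inverse-bibliography-length idea only, not the full published Audience Factor (which also involves a field-level rescaling); names and numbers are ours.

```python
# Hedged sketch of citing-side weighting: each received citation counts as
# 1 / (average bibliography length of the citing journal), so citations from
# reference-rich fields contribute less than citations from sparse-citing ones.

def weighted_citations_per_paper(citing_bib_lengths, papers_published):
    """citing_bib_lengths: one entry per received citation, giving the average
    bibliography length of the journal that emitted it."""
    weighted = sum(1.0 / bib_len for bib_len in citing_bib_lengths)
    return weighted / papers_published

# Four citations received by a one-paper journal: two from a journal averaging
# 20 references per article, two from one averaging 40.
print(round(weighted_citations_per_paper([20, 20, 40, 40], 1), 3))
```

Under this scheme the two citations from the long-bibliography journal together count only as much as one citation from the short-bibliography journal, which is exactly the propensity-to-cite correction that motivates ex ante normalization.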
References
Adams, J., Gurney, K., & Marshall, S. (2007). Profiling citation impact: a new methodology. Scientometrics, 72(2), 325–344.
Adler, R., Ewing, J., & Taylor, P. (2008). Citation statistics. A report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS). Summarized in Statistical Science, 24(1), 1–14 (2009).
Bar-Ilan, J. (2008). Which h-index? – A comparison of WoS, Scopus and Google Scholar. Scientometrics, 74(2), 257–271.
Bergstrom, C. (2007). Eigenfactor: measuring the value and prestige of scholarly journals. College & Research Libraries News, 68(5), www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may2007/eigenfactor.cfm.
Bollen, J., Van de Sompel, H., Hagberg, A., & Chute, R. (2009). A principal component analysis of 39 scientific impact measures. PLoS ONE, 4(6), e6022. doi:10.1371/journal.pone.0006022.
Bourdieu, P. (1975). The specificity of the scientific field and the social conditions of the progress of reason. Social Science Information, 14(6), 19–47.
Bouyssou, D., & Marchant, T. (2011). Bibliometric rankings of journals based on impact factors: an axiomatic approach. Journal of Informetrics, 5(1), 75–86. doi:10.1016/j.joi.2010.09.001.
Braun, T., Glänzel, W., & Schubert, A. (2006). A Hirsch-type index for journals. Scientometrics, 69, 169–173.
Callon, M., & Latour, B. (1981). Unscrewing the big leviathan: how actors macrostructure reality and how sociologists help them to do so. In K. D. Knorr-Cetina & A. V. Cicourel (Eds.), Advances in social theory and methodology: toward an integration of micro- and macro-sociologies (pp. 277–303). Boston: Routledge and Kegan Paul.
Cronin, B. (1984). The citation process: the role and significance of citations in scientific communication. London: Taylor Graham.
Czapski, G. (1997). The use of deciles of the citation impact to evaluate different fields of research in Israel. Scientometrics, 40(3), 437–443.
de Moya-Anegon, F. (2007). SCImago. SJR — SCImago Journal & Country Rank.
de Solla Price, D. J. (1963). Little science, big science. New York: Columbia University Press.
Garfield, E. (1955). Citation Indexes for Science. A new dimension in documentation through association of ideas. Science, 122, 108–111.
Garfield, E. (1972). Citation analysis as a tool in journal evaluation. Science, 178(4060), 471–479.
Garfield, E. (2006). The history and meaning of the journal impact factor. Journal of the American Medical Association, 295, 90–93.
Garfield, E., & Sher, I. H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14(3), 195–201.
Geller, N. L. (1978). Citation influence methodology of Pinski and Narin. Information Processing and Management, 14, 93–95.
Glänzel, W. (2008). On some new bibliometric applications of statistics related to the h-index. Scientometrics, 77(1), 187–196.
Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.
Glänzel, W., Schubert, A., Thijs, B., & Debackere, K. (2011). A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. Scientometrics, 87(2), 415–424.
Hagström, W. O. (1965). The scientific community. New York: Basic Books.
Hicks, D. (2004). The four literatures of social science. In H. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research. New York: Kluwer Academic.
Hirsch, J. E. (2007). Does the h index have predictive power? Proceedings of the National Academy of Sciences, 104(49), 19193–19198.
Hoeffel, C. (1998). Journal impact factors. Allergy, 53, 1225.
Hönekopp, J., & Khan, J. (2011). Future publication success in science is better predicted by traditional measures than by the h index. Scientometrics, 90(3), 843–853.
Ingwersen, P., Larsen, B., Rousseau, R., & Russell, J. (2001). The publication-citation matrix and its derived quantities. Chinese Science Bulletin, 46(6), 524–528.
Katz, S. J. (1999). The self-similar science system. Research Policy, 28(5), 501–517.
Leydesdorff, L., & Opthof, T. (2010). Scopus’s source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, 61(11), 2365–2396.
Lundberg, J. (2007). Lifting the crown: citation z-score. Journal of Informetrics, 1(2), 145–154.
Luukkonen, T. (1997). Why has Latour’s theory of citations been ignored by the bibliometric community? Discussion of sociological interpretations of citation analysis. Scientometrics, 38(1), 27–37.
Marchant, T. (2009). An axiomatic characterization of the ranking based on the h-index and some other bibliometric rankings of authors. Scientometrics, 80(2), 325–342.
Marshakova-Shaikevich, I. (1996). The standard impact factor as an evaluation tool of science fields and scientific journals. Scientometrics, 35(2), 283–290.
Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126 (reprinted as: The normative structure of science. In N. W. Storer (Ed.) (1973), The sociology of science: theoretical and empirical investigations (pp. 267–278). Chicago: University of Chicago Press).
Moed, H. F. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.
Moed, H. F., & van Leeuwen, T. N. (1995). Improving the accuracy of institute for scientific information’s journal impact factors. Journal of the American Society for Information Science, 46(6), 461–467.
Moed, H. F., & Vriens, M. (1989). Possible inaccuracies occurring in citation analysis. Journal of Information Science, 15, 95–107.
Murugesan, P., & Moravcsik, M. J. (1978). Variation of the nature of citation measures with journal and scientific specialties. Journal of the American Society for Information Science, 29(3), 141–155.
Narin, F. (1976). Evaluative bibliometrics: the use of publication and citation analysis in the evaluation of scientific activity (Report prepared for the National Science Foundation, Contract NSF C-627). Cherry Hill: Computer Horizons.
Nicolaisen, J., & Frandsen, T. F. (2008). The reference return ratio. Journal of Informetrics, 2(2), 128–135. doi:10.1016/j.joi.2007.12.001.
Palacios-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963–977.
Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, 12, 297–312.
Radicchi, F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: towards an objective measure of citation impact. Proceedings of the National Academy of Sciences, 105(45), 17268–17272.
Ramanana-Rahary, S., Zitt, M., & Rousseau, R. (2009). Aggregation properties of relative impact and other classical indicators: convexity issues and the Yule-Simpson paradox. Scientometrics, 79(1–2), 311–327.
Rousseau, R. (2008). Woeginger’s axiomatisation of the h-index and its relation to the g-index, the h(2)-index and the r2-index. Journal of Informetrics, 2(4), 335–340.
Rousseau, R., & Egghe, L. (2003). A general framework for relative impact indicators. Canadian Journal of Information and Library Science, 27(1), 29–48.
Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5–6), 281–291.
Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628–638.
Sen, B. K. (1992). Documentation note: normalized impact factor. Journal of Documentation, 48(3), 318–325.
Small, H., & Sweeney, E. (1985). Clustering the science citation index using co-citations: 1. A comparison of methods. Scientometrics, 7(3–6), 391–409.
van Raan, A. F. J. (2000). On growth, ageing, and fractal differentiation of science. Scientometrics, 47(2), 347–362.
van Raan, A. F. J. (2001). Competition amongst scientists for publication status: toward a model of scientific publication and citation distributions. Scientometrics, 51(1), 347–357.
Vanclay, J. K. (2009). Bias in the journal impact factor. Scientometrics, 78(1), 3–12.
Vanclay, J. K. (2011). Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. doi:10.1007/s11192-011-0561-0.
Vieira, E. S., & Gomez, J. A. N. F. (2011). The journal relative impact: an indicator for journal assessment. Scientometrics, 89(2), 631–651.
Vinkler, P. (2002). Subfield problems in applying the Garfield (impact) factors in practice. Scientometrics, 53(2), 267–279.
Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011). Towards a new crown indicator: some theoretical considerations. Journal of Informetrics, 5, 37–47.
Waltman, L., & van Eck, N. J. (2009). A taxonomy of bibliometric performance indicators based on the property of consistency. Proceedings of the 12th International Conference on Scientometrics and Informetrics, 1002–1003.
Wouters, P. (1997). Citation cycles and peer review cycles. Scientometrics, 38(1), 39–55.
Zitt, M. (2010). Citing-side normalization of journal impact: a robust variant of the audience factor. Journal of Informetrics, 4(3), 392–406.
Zitt, M. (2011). Behind citing-side normalization of citations: some properties of the journal impact factor. Scientometrics, 89(1), 329–344.
Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2003). Correcting glasses help fair comparisons in international science landscape: country indicators as a function of ISI database delineation. Scientometrics, 56(2), 259–282.
Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2005). Relativity of citation performance and excellence measures: from cross-field to cross-scale effects of field-normalisation. Scientometrics, 63(2), 373–401.
Zitt, M., & Small, H. (2008). Modifying the journal impact factor by fractional citation weighting: the audience factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.
Zucker, L. G., & Darby, M. R. (1996). Star scientists and institutional transformation: patterns of invention and innovation in the formation of the biotechnology industry. Proceedings of the National Academy of Sciences, 93(23), 12709–12716.
Acknowledgments
The author thanks S. Ramanana-Rahary and E. Bassecoulard for their help.
Zitt, M. The journal impact factor: angel, devil, or scapegoat? A comment on J.K. Vanclay’s article 2011. Scientometrics 92, 485–503 (2012). https://doi.org/10.1007/s11192-012-0697-6