
Behind citing-side normalization of citations: some properties of the journal impact factor


Abstract

A new family of citation normalization methods has recently appeared, alongside the classical methods of “cited-side” normalization and the iterative measures of intellectual influence in the wake of Pinski and Narin's influence weights. These methods have a fairly global scope in citation analysis but were first applied to journal impact, in the experimental Audience Factor (AF) and the Scopus Source-Normalized Impact per Paper (SNIP). Analyzing some properties of Garfield's Journal Impact Factor, this note highlights the rationale of citing-side (or source-level, fractional citation, ex ante) normalization.
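To make the reversal of perspective concrete, the sketch below contrasts the two families on toy data: cited-side normalization divides a journal's citations per paper by the expected citation rate of its field, while citing-side normalization weights each citation at the source by the inverse of the citing paper's number of active references. This is a minimal illustration of the general principle only; the field labels, toy figures, and function names are assumptions, not the AF or SNIP protocols.

```python
# Minimal sketch contrasting cited-side and citing-side normalization of journal
# impact. Toy data and names are illustrative; this is not the AF or SNIP protocol.

# Each citation: (citing field, active references of the citing paper, cited journal).
citations = [
    ("cell_biology", 45, "J_BIO"),   # citation-dense field, long reference lists
    ("cell_biology", 50, "J_BIO"),
    ("mathematics",  12, "J_MATH"),  # citation-sparse field, short reference lists
    ("mathematics",  10, "J_MATH"),
]
papers_published = {"J_BIO": 2, "J_MATH": 2}

def cited_side_impact(journal, field_mean_citation_rate):
    """Classical cited-side normalization: citations per paper divided by the
    expected citation rate of the cited journal's field (an ex post correction)."""
    received = sum(1 for _, _, cited in citations if cited == journal)
    return (received / papers_published[journal]) / field_mean_citation_rate

def citing_side_impact(journal):
    """Citing-side (fractional) weighting: each citation counts as
    1 / (active references of the citing paper), so citations issued in
    reference-dense fields weigh less (an ex ante correction)."""
    weighted = sum(1.0 / n_refs for _, n_refs, cited in citations if cited == journal)
    return weighted / papers_published[journal]

# Cited-side needs an external field expectation (here assumed to be 1.0 citation
# per paper in each field); citing-side needs only the citing papers' reference counts.
print(cited_side_impact("J_BIO", 1.0), cited_side_impact("J_MATH", 1.0))
print(round(citing_side_impact("J_BIO"), 3), round(citing_side_impact("J_MATH"), 3))
```

The AF and SNIP refine this principle at the journal and field level: the AF weights citations by the citing journal's relative propensity to cite, and SNIP divides impact per paper by the citation potential of the journal's subject field; in both cases the correction is applied on the citing side.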

Notes

  1. The reader may refer to classical reviews on impact such as Glänzel and Moed (2002). Among the first references on classical normalization one can find Murugesan and Moravcsik (1978), Schubert and Braun (1986), Sen (1992), and Czapski (1997) on rank approaches. See also Vinkler (2004).

  2. Preliminary results were presented at the 11th International Conference on Science and Technology Indicators, Leiden, September 2010.

  3. The original ISI-Thomson Reuters Journal Impact Factor calculation has a consistency flaw: the literature considered in the numerator includes types of documents that are absent from the denominator (Moed et al. 1996); a standard formulation of the two-year impact factor is recalled after these notes. This is distinct from the case where citing and cited literature are each consistently defined, but on different filters.

  4. The assumption that reference lists are on average shorter in proceedings (such as the fictitious WCP here) than in the previous literature, which is likely to be dominated by articles, could easily be expressed by a correcting factor on q.

  5. Another case arises from editorial constraints on the number of references, which lead to very short lists that nevertheless result from an academic selection process, in contrast to trade journals.

  6. Strictly with respect to journal-based classification (the old Thomson SCI classification, Thomson ESI). The current SCI classification goes beyond the journal level.

  7. A point of terminology: the new approach was introduced under the name of citing-side or fractional citation normalization, referring to a general principle, the reversal of perspective on field normalization, with examples of particular applications. In other works cited above one finds source-level, a priori, or fractionated citation normalization. Promoters of methods may wish either to treat these terms as synonymous with the principle above, or to reserve them for a particular methodology. Clearly, terms such as Audience Factor, SNIP, MCNSC… are associated with specific calculation protocols.

  8. In the case of the Audience Factor, the critique first bore on an inessential aspect, the final scale correction of the AF by the "all science" value. The purpose of this single coefficient is to make the values of the Audience Factor and the Impact Factor directly comparable, and it does not introduce mediant fractions (see the sketch after these notes). The critique then applies a rationale of random samples that is inappropriate for the WoS population, which, to a first approximation, proceeds from a Bradfordian selection, except for the low tail. A real problem, however, is the arbitrary length of this low tail, set by the database producer's decisions, which particularly affects publication and impact measures (Zitt et al. 2003), whatever the “side” of normalization. Influence measures may be less sensitive to this issue.
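As a reminder for note 3, the standard two-year impact factor of journal j in census year y can be written as follows (a textbook formulation, not reproduced from the paper):

\[
\mathrm{JIF}_j(y) \;=\; \frac{\sum_{t=y-2}^{y-1} C_j(y,t)}{\sum_{t=y-2}^{y-1} N_j(t)}
\]

where \(C_j(y,t)\) counts citations received in year \(y\) by anything published in journal \(j\) in year \(t\), while \(N_j(t)\) counts only the "citable" items (articles and reviews) of year \(t\); the inconsistency discussed in note 3 arises because the numerator credits document types that the denominator excludes.

For note 8, a minimal sketch of an Audience-Factor-style weighting shows why the "all science" scale coefficient is inessential: it multiplies every citing-journal weight uniformly and only rescales the result to the Impact Factor's range. The toy data and names below are illustrative assumptions, not the published AF protocol.

```python
# Audience-Factor-style weighting (illustrative sketch, not the exact AF protocol).
# Citations from each citing journal are weighted by the inverse of that journal's
# mean reference length; m0, an assumed database-wide ("all science") mean, is the
# single scale coefficient discussed in note 8.

citing_profile = {              # citing journal -> (citations to J, mean refs per article)
    "CITING_A": (120, 40.0),    # citation-dense field
    "CITING_B": (30, 12.0),     # citation-sparse field
}
papers_of_J = 50
m0 = 26.0                       # assumed "all science" mean reference length

def audience_factor_like(profile, n_papers, scale):
    """Weight each citing journal's citations by scale / (its mean reference length),
    then divide by the cited journal's publication count."""
    weighted = sum(c * (scale / m) for c, m in profile.values())
    return weighted / n_papers

# Changing the scale coefficient rescales every journal's value by the same factor,
# so rankings and ratios between journals are unaffected.
print(audience_factor_like(citing_profile, papers_of_J, m0))
print(audience_factor_like(citing_profile, papers_of_J, 1.0) * m0)   # identical value
```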

References

  • Bar-Ilan, J. (2009). A closer look at the sources of informetric research. Cybermetrics: International Journal of Scientometrics, Informetrics and Bibliometrics, 13(1), paper 4. http://www.cindoc.csic.es/cybermetrics/articles/v13i11p14.html.

  • Bergstrom, C. (2007). Eigenfactor: measuring the value and prestige of scholarly journals. College and Research Libraries News, 68(5). http://www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2007/may2007/eigenfactor.cfm.

  • Cronin, B. (1984). The citation process: The role and significance of citations in scientific communication. London: Taylor Graham.

  • Czapski, G. (1997). The use of deciles of the citation impact to evaluate different fields of research in Israel. Scientometrics, 40(3), 437–443.

  • Garfield, E. (1972). Citation analysis as a tool in Journal Evaluation. Science, 178(4060), 471–479.

  • Garfield, E. (1976). Is the ratio between number of citations and publications cited a true constant? Current Contents, 8(6), 5–7.

  • Garfield, E. (1979). Citation indexing: Its theory and application in science, technology and humanities. New York: Wiley.

  • Garfield, E. (1998). Random thoughts on citationology. Its theory and practice—Comments on theories of citation? Scientometrics, 43(1), 69–76.

  • Glänzel, W. (2004). Towards a model for diachronous and synchronous citation analyses. Scientometrics, 60(3), 511–522.

  • Glänzel, W., & Moed, H. F. (2002). Journal impact measures in bibliometric research. Scientometrics, 53(2), 171–193.

  • Glänzel, W., Schubert, A., Thijs, B., & Debackere, K. (2011). A priori vs. a posteriori normalisation of citation indicators. The case of journal ranking. Scientometrics, 87(2), 415–424.

  • Ingwersen, P., Larsen, B., Rousseau, R., & Russell, J. (2001). The publication-citation matrix and its derived quantities. Chinese Science Bulletin, 46(6), 524–528.

  • Irvine, J., & Martin, B. R. (1989). International comparisons of scientific performance revisited. Scientometrics, 15(5–6), 369–392.

  • Leydesdorff, L., & Opthof, T. (2010). Scopus’s source normalized impact per paper (SNIP) versus a journal impact factor based on fractional counting of citations. Journal of the American Society for Information Science and Technology, 61(11), 2365–2396.

  • Leydesdorff, L., & Shin, J. C. (2011). How to evaluate universities in terms of their relative citation impacts: Fractional counting of citations and the normalization of differences among disciplines. Journal of the American Society for Information Science and Technology, 62(6), 1146–1155.

  • Luukkonen, T. (1997). Why has Latour’s theory of citations been ignored by the bibliometric community? Discussion of sociological interpretations of citation analysis. Scientometrics, 38(1), 27–37.

  • Moed, H. F., de Bruin, R. E., & van Leeuwen, T. (2010). Measuring contextual citation impact of scientific journals. Journal of Informetrics, 4(3), 265–277.

  • Moed, H. F., Van Leeuwen, T. N., & Reedijk, J. (1996). A critical analysis of the journal impact factors of Angewandte Chemie and the Journal of the American Chemical Society—Inaccuracies in published impact factors based on overall citations only. Scientometrics, 37(1), 105–116.

  • Moya-Anegon, F. (2007). SCImago. SJR—SCImago Journal and Country Rank. Retrieved Feb 1, 2010 from http://www.scimagojr.com.

  • Murugesan, P., & Moravcsik, M. J. (1978). Variation of the nature of citation measures with journal and scientific specialties. Journal of the American Society for Information Science, 29, 141–155.

  • Nicolaisen, J., & Frandsen, T. F. (2008). The reference return ratio. Journal of Informetrics, 2(2), 128–135.

  • Palacio-Huerta, I., & Volij, O. (2004). The measurement of intellectual influence. Econometrica, 72(3), 963–977.

  • Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with application to the literature of physics. Information Processing and Management, 12, 297–312.

  • Ramanana-Rahary, S., Zitt, M., & Rousseau, R. (2009). Aggregation properties of relative impact and other classical indicators: Convexity issues and the Yule-Simpson paradox. Scientometrics, 79(1–2), 311–327.

  • Rousseau, R. (1997). The proceedings of the first and second international conferences on bibliometrics, scientometrics and informetrics: A data analysis. In: B. Peritz & L. Egghe (Eds.), Proceedings of the sixth conference of the International Society for Scientometrics and Informetrics, (pp. 371–380). Jerusalem: Hebrew University of Jerusalem.

  • Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9(5–6), 281–291.

  • Sen, B. K. (1992). Normalized impact factor. Journal of Documentation, 48(3), 318–325.

  • Small, H., & Sweeney, E. (1985). Clustering the Science Citation Index using co-citations. I. A comparison of methods. Scientometrics, 7(3–6), 391–409.

  • Vinkler, P. (2004). Characterization of the impact of sets of scientific papers: The Garfield (Impact) Factor. Journal of the American Society for Information Science and Technology, 55(5), 431–435.

  • Waltman, L., & van Eck, N. J. (2010). A general source-normalized approach to bibliometric research performance assessment. Paper presented at the 11th International Conference on Science and Technology Indicators. Leiden.

  • Zitt, M. (2010). Citing-side normalization of journal impact: a robust variant of the Audience Factor. Journal of Informetrics, 4(3), 392–406.

  • Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2003). Correcting glasses help fair comparisons in international landscape: Country indicators as a function of ISI database delineation. Scientometrics, 56(2), 259–282.

  • Zitt, M., Ramanana-Rahary, S., & Bassecoulard, E. (2005). Relativity of citation performance and excellence measures: From cross-field to cross-scale effects of field-normalisation. Scientometrics, 63(2), 373–401.

  • Zitt, M., & Small, H. (2008). Modifying the Journal Impact Factor by Fractional Citation Weighting: The Audience Factor. Journal of the American Society for Information Science and Technology, 59(11), 1856–1860.

Acknowledgments

The author is indebted to Wolfgang Glänzel, Henry Small, Ronald Rousseau, and Pierre-André Zitt for helpful discussions and suggestions. Errors remain the responsibility of the author.

Author information

Correspondence to M. Zitt.

About this article

Cite this article

Zitt, M. Behind citing-side normalization of citations: some properties of the journal impact factor. Scientometrics 89, 329–344 (2011). https://doi.org/10.1007/s11192-011-0441-7
