Abstract
In the almost 40 years since we wrote Evaluative Bibliometrics, enormous advances have been made in data availability and analytic technique. The journal impact factor of the 1960s has clearly not kept up with the state of the art. However, for both old and new indicators, basic validity and relevance issues remain, such as by what standard we can validate our results, and what external use can appropriately be made of them. As funding support becomes more difficult, we should not lose sight of the necessity to again demonstrate the importance of our research, and must keep in mind that it is the relevance of our results that counts, not the elegance of our mathematics.
References
Derman, E. (2011). Models behaving badly. New York: Free Press.
Garfield, E. (1972, November 3). Citation analysis as a tool in journal evaluation. Science, 178, 471–479.
Kochen, M. (1974). Principles of information retrieval. New York: Wiley.
Narin, F. (1976). Evaluative bibliometrics. New Jersey: Computer Horizons, Inc.
Pinski, G., & Narin, F. (1976). Citation influence for journal aggregates of scientific publications: theory, with applications to the literature of physics. Information Processing and Management, 12, 297.
Vanclay, J. K. (in press). Impact factor: Outdated artifact or stepping stone to journal certification? Scientometrics.
Weinberg, A. M. (1963). Criteria for scientific choice. Minerva, 1, 159–171.
Narin, F. Decades of progress, or the progress of decades?. Scientometrics 92, 391–393 (2012). https://doi.org/10.1007/s11192-012-0678-9