Abstract
We examine the relationships between four citation metrics (impact factor, the numerator of the impact factor, article influence score, and eigenfactor) and the library journal selection decisions made by Manhattan College faculty as part of a large-scale serials review. Our results show that journal selection status (selected or not) is only weakly or moderately related to citation impact. Faculty choosing journals for their universities do consider the citation data provided to them, although they place less emphasis on citation impact than do faculty responding to journal ranking surveys. While previous research suggests that subjective journal ratings are more closely related to size-independent metrics (those that represent the average impact of an article rather than the impact of the journal as a whole) and weighted metrics (those that give more credit for citations in high-impact journals), our current results provide no support for the first assertion and only limited support for the second.
Notes
For the relevant report year (2016), NIF is the number of times the articles published in 2014 and 2015 were cited in 2016.
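For context, NIF serves as the numerator of the standard two-year impact factor. A minimal worked illustration, using hypothetical citation and article counts:

$$
\mathrm{IF}_{2016} = \frac{\mathrm{NIF}_{2016}}{\text{citable items published in 2014--2015}} = \frac{450}{300} = 1.5
$$

NIF thus grows with both the citedness and the size of a journal, while IF is size-independent.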
Quite a few of these authors mention the impact factor by name, but none mentions any citation metric other than IF. It is not clear whether they are referring specifically to IF as presented in JCR or whether they are using "impact factor" as a generic term for any citation metric.
Citation impact accounts for roughly half of the CDL-WVA score.
To arrive at the value of 0.314, we evaluated each of the JCR journals in the four subject areas most familiar to us, making a yes or no decision for each in order to estimate the proportion of all JCR journals that would be essential for the MC library collection.
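As a hedged illustration only (the journal counts here are hypothetical, chosen simply to reproduce the reported value):

$$
\text{estimated threshold} = \frac{\text{journals judged essential}}{\text{JCR journals evaluated}} = \frac{157}{500} = 0.314
$$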
In a cover letter, we explained that EF "represents the citedness of the journal as a whole—all the articles published over the past 5 years." We noted that EF is especially useful for library collection development, since it accounts for both the citedness of an "average" article and the number of articles published in the journal. IF, an indicator more often used by authors deciding where to publish, was described as "the average citedness of a single article in the journal."
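The distinction drawn in the cover letter can be stated more formally. As we understand the definitions published at eigenfactor.org, the article influence score (AI) is a per-article rescaling of EF:

$$
\mathrm{AI}_j = 0.01 \times \frac{\mathrm{EF}_j}{X_j}
$$

where $X_j$ is journal $j$'s share of all articles published over the five-year window. EF therefore reflects the journal as a whole and scales with its size, while AI approximates the influence of a typical article.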
Point-biserial correlation is appropriate for evaluating the relationship between a continuous variable (here, each of the four citation metrics in turn) and a binary or dichotomous variable (the faculty's yes/no selection decisions). Except as noted, all correlations and significance tests were undertaken in SPSS.
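A minimal sketch of the same calculation outside SPSS, using hypothetical data (an illustration, not the authors' workflow):

```python
# Point-biserial correlation between a yes/no selection decision and a
# continuous citation metric. All data values below are hypothetical.
from scipy import stats

selected = [1, 0, 1, 1, 0, 0, 1, 0]               # 1 = selected, 0 = not
impact_factor = [3.2, 0.8, 2.5, 4.1, 1.0, 0.6, 2.9, 1.4]

r_pb, p = stats.pointbiserialr(selected, impact_factor)
print(f"r_pb = {r_pb:.3f}, p = {p:.3f}")

# Point-biserial r is mathematically identical to Pearson's r computed
# with the dichotomous variable coded 0/1:
r_check, _ = stats.pearsonr(selected, impact_factor)
assert abs(r_pb - r_check) < 1e-12
```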
"Write-in" journals and journals from additional JCR categories were included in the serials review but excluded from this study. A subsequent paper will describe how the departmental journal lists were used in the selection of online databases and collections.
For Business Analytics and CIS, AI and EF are tied for first place; for Psychology, IF and EF are tied for first place. Our discussion of results rests on the assumption that all reported differences in correlation coefficients hold for the set of 5756 JCR journals included in the analysis. The Sig and ns designations shown in Table 5 indicate whether each difference can be generalized to a larger, hypothetical population of journals.
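One common procedure for such comparisons is Williams' t (the Hotelling-Williams test) for two dependent correlations that share a variable. We do not know the exact procedure behind the Sig and ns designations in Table 5, so the sketch below, with hypothetical inputs, is illustrative only:

```python
# Williams' t for H0: r12 == r13, where correlations r12 and r13 share
# variable 1 (e.g., r(selection, IF) vs. r(selection, EF), given r(IF, EF)).
# All numeric inputs below are hypothetical.
import math
from scipy import stats

def williams_t(r12: float, r13: float, r23: float, n: int):
    """Two-sided test of the difference between two overlapping correlations."""
    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # det of corr matrix
    r_bar = (r12 + r13) / 2
    t = (r12 - r13) * math.sqrt(
        (n - 1) * (1 + r23)
        / (2 * K * (n - 1) / (n - 3) + r_bar**2 * (1 - r23) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), df=n - 3)
    return t, p

t, p = williams_t(r12=0.25, r13=0.31, r23=0.80, n=5756)
print(f"t = {t:.2f}, p = {p:.4f}")
```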
At Manhattan College, preliminary discussions with faculty suggested that some departments might have difficulty evaluating journals if more than one or two citation metrics were provided in the departmental spreadsheets—that the inclusion of additional metrics would be distracting rather than helpful. In hindsight, however, we feel that presenting up to four citation metrics is unlikely to pose problems.