
Do faculty journal selections correspond to objective indicators of citation impact? Results for 20 academic departments at Manhattan College

Published in Scientometrics

Abstract

We examine the relationships between four citation metrics (impact factor, the numerator of the impact factor, article influence score, and eigenfactor) and the library journal selection decisions made by Manhattan College faculty as part of a large-scale serials review. Our results show that journal selection status (selected or not) is only weakly or moderately related to citation impact. Faculty choosing journals for their universities do consider the citation data provided to them, although they place less emphasis on citation impact than do faculty responding to journal ranking surveys. While previous research suggests that subjective journal ratings are more closely related to size-independent metrics (those that represent the average impact of an article rather than the impact of the journal as a whole) and weighted metrics (those that give more credit for citations in high-impact journals), our current results provide no support for the first assertion and only limited support for the second.

Notes

  1. For the relevant report year (2016), NIF (the numerator of the impact factor) is the number of times the articles published in 2014 and 2015 were cited in 2016.

  2. Quite a few of these authors mention the impact factor by name, but none mention particular citation metrics other than IF. It is not clear whether they are referring specifically to IF, as presented in JCR, or if they are using "impact factor" as a generic term for any citation metric.

  3. Citation impact accounts for roughly half of the CDL-WVA score.

  4. For more in-depth discussions of size-independent/dependent and weighted/unweighted metrics, see Rousseau et al. (2018), Walters (2017a), and Waltman (2016, section 8.3).

  5. To arrive at the value of 0.314, we evaluated each of the JCR journals in the four subject areas most familiar to us, making yes or no decisions in order to estimate how many journals would be essential for the MC library collection (as a proportion of the total number included in JCR).

  6. In a cover letter, we explained that EF "represents the citedness of the journal as a whole—all the articles published over the past 5 years." We noted that EF is especially useful for library collection development, since it accounts for both the citedness of an "average" article and the number of articles published in the journal. IF, an indicator more often used by authors deciding where to publish, was described as "the average citedness of a single article in the journal."

  7. Point-biserial correlation is appropriate for evaluating the relationship between a continuous variable (the four citation metrics) and a binary or dichotomous variable (the faculty's yes/no selection decisions). Except as noted, all correlations and significance tests were undertaken in SPSS.
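The point-biserial calculation described above can be sketched briefly. The selection decisions and impact factors below are made-up illustrative values, not data from the study:

```python
from scipy.stats import pointbiserialr

# Hypothetical data: 1 = journal selected by faculty, 0 = not selected,
# paired with each journal's impact factor. Values are illustrative only.
selected = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
impact_factor = [3.2, 2.8, 0.9, 1.1, 2.5, 0.7, 1.4, 3.0, 0.8, 1.2]

# Point-biserial r is mathematically equivalent to Pearson's r computed
# with a 0/1 variable; pointbiserialr also returns a two-tailed p value.
r, p = pointbiserialr(selected, impact_factor)
print(f"r_pb = {r:.3f} (p = {p:.4f})")
```

Because point-biserial r reduces to Pearson's r, it can be interpreted on the same -1 to +1 scale used elsewhere in the analysis.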

  8. "Write-in" journals and journals from additional JCR categories were included in the serials review but excluded from this study. A subsequent paper will describe how the departmental journal lists were used in the selection of online databases and collections.

  9. For Business Analytics and CIS, AI and EF are tied for first place; for Psychology, IF and EF are tied for first place. Our discussion of results is based on the assumption that all reported differences in correlation coefficients are valid for the set of 5756 JCR journals included in the analysis. The Sig and ns designations shown in Table 5 indicate whether each comparison is valid for a larger, hypothetical population of journals.
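The comparisons behind the Sig/ns designations involve two dependent correlations that share one variable (each metric's correlation with the same yes/no selection decisions). One standard procedure for this comparison is Williams' t test (as implemented in, e.g., the R psych package and online calculators such as Lenhard and Lenhard's). A minimal sketch, with illustrative r values and sample size rather than figures from Table 5:

```python
import math
from scipy import stats

def williams_t(r12, r13, r23, n):
    """Williams' t test for the difference between two dependent
    correlations (r12 vs. r13) that share variable 1, given the
    correlation r23 between the two competing metrics and sample
    size n. Returns (t, two-tailed p) with n - 3 degrees of freedom."""
    # Determinant of the 3x3 correlation matrix.
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    rbar = (r12 + r13) / 2
    t = (r12 - r13) * math.sqrt(
        ((n - 1) * (1 + r23))
        / (2 * ((n - 1) / (n - 3)) * det + rbar**2 * (1 - r23) ** 3)
    )
    p = 2 * stats.t.sf(abs(t), n - 3)
    return t, p

# Illustrative values: selection-vs-EF r = 0.50, selection-vs-IF r = 0.30,
# EF-vs-IF r = 0.60, for n = 100 journals.
t, p = williams_t(0.50, 0.30, 0.60, 100)
print(f"t = {t:.2f}, p = {p:.4f}")
```

With these made-up inputs the difference between the two correlations would be judged significant at the 0.05 level; with the full set of 5756 journals the degrees of freedom would of course be far larger.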

  10. At Manhattan College, preliminary discussions with faculty suggested that some departments might have difficulty evaluating journals if more than one or two citation metrics were provided in the departmental spreadsheets—that the inclusion of additional metrics would be distracting rather than helpful. In hindsight, however, we feel that presenting up to four citation metrics is unlikely to pose problems.

References

  • Christenson, J. A., & Sigelman, L. (1985). Accrediting knowledge: Journal stature and citation impact in social science. Social Science Quarterly, 66(4), 964–975.

  • Cooper, D., Daniel, K., Bakker, C., Blanck, J., Childs, C., Gleason, A., et al. (2017). Supporting the changing research practices of public health scholars. New York: Ithaka S + R. https://doi.org/10.18665/sr.305867.

  • Currie, R. R., & Pandher, G. S. (2011). Finance journal rankings and tiers: An active scholar assessment methodology. Journal of Banking & Finance, 35(1), 7–20.

  • Davis, P. M. (2002). The effect of the web on undergraduate citation behavior: A 2000 update. College & Research Libraries, 63(1), 53–60.

  • Davis, P. M. (2003). Effect of the web on undergraduate citation behavior: Guiding student scholarship in a networked age. Portal: Libraries and the Academy, 3(1), 41–51.

  • Davis, P. M., & Cohen, S. A. (2001). The effect of the web on undergraduate citation behavior, 1996–1999. Journal of the American Society for Information Science and Technology, 52(4), 309–314.

  • Dawson, M., & Rascoff, M. (2006). Scholarly communications in the economics discipline. New York: Ithaka S + R. https://doi.org/10.18665/sr.22340.

  • Díaz-Ruíz, A., Orbe-Arteaga, U., Ríos, C., & Roldan-Valadez, E. (2018). Alternative bibliometrics from the web of knowledge surpasses the impact factor in a 2-year ahead annual citation calculation: Linear mixed-design models’ analysis of neuroscience journals. Neurology India, 66(1), 96–104.

  • Ellis, L. V., & Durden, G. C. (1991). Why economists rank their journals the way they do. Journal of Economics and Business, 43(3), 265–270.

  • Haddawy, P., Hassan, S.-U., Asghar, A., & Amin, S. (2016). A comprehensive examination of the relation of three citation-based journal metrics to expert judgment of journal quality. Journal of Informetrics, 10(1), 162–173.

  • Harley, D., Acord, S. K., Earl-Novell, S., Lawrence, S., & King, C. J. (2010). Assessing the future landscape of scholarly communication: An exploration of faculty values and needs in seven disciplines. Berkeley, CA: Center for Studies in Higher Education. Retrieved November 1, 2018, from https://escholarship.org/uc/item/15x7385g.

  • He, C., & Pao, M. L. (1986). A discipline-specific journal selection algorithm. Information Processing and Management, 22(5), 405–416.

  • Knowlton, S. A., Sales, A. C., & Merriman, K. W. (2014). A comparison of faculty and bibliometric valuation of serials subscriptions at an academic research library. Serials Review, 40(1), 28–39.

  • Lenhard, W., & Lenhard, A. (2014). Testing the significance of correlations. Bibergau: Psychometrica. Retrieved November 1, 2018, from https://www.psychometrica.de/correlation.html.

  • Long, M. P., & Schonfeld, R. C. (2013). Supporting the changing research practices of chemists. New York: Ithaka S + R. https://doi.org/10.18665/sr.22561.

  • Maron, N. L., & Smith, K. K. (2008). Current models of digital scholarly communication. New York: Ithaka S + R. https://doi.org/10.18665/sr.22348.

  • Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLOS Biology, 16(3), 2004089. https://doi.org/10.1371/journal.pbio.2004089.

  • Nicholas, D., Watkinson, A., Boukacem-Zeghmouri, C., Rodríguez-Bravo, B., Xu, J., Abrizah, A., et al. (2017). Early career researchers: Scholarly behaviour and the prospect of change. Learned Publishing, 30(2), 157–166.

  • Quinn, M., & Kim, J. (2007). Scholarly communications in the biosciences discipline. New York: Ithaka S + R. https://doi.org/10.18665/sr.22344.

  • Roldan-Valadez, E., Orbe-Arteaga, U., & Ríos, C. (2018). Eigenfactor score and alternative bibliometrics surpass the impact factor in a 2-years ahead annual-citation calculation: A linear mixed design model analysis of radiology, nuclear medicine and medical imaging journals. La Radiologia Medica, 123(7), 524–534.

  • Rousseau, R., Egghe, L., & Guns, R. (2018). Journal citation analysis. In Becoming metric-wise: A bibliometric guide for researchers (pp. 155–199). Cambridge, MA: Chandos Publishing.

  • Rowley, J., Johnson, F., Sbaffi, L., Frass, W., & Devine, E. (2017). Academics’ behaviors and attitudes towards open access publishing in scholarly journals. Journal of the Association for Information Science and Technology, 68(5), 1201–1211.

  • Saarela, M., Kärkkäinen, T., Lahtonen, T., & Rossi, T. (2016). Expert-based versus citation-based ranking of scholarly and scientific publication channels. Journal of Informetrics, 10(3), 693–718.

  • Schimanski, L. A., & Alperin, J. P. (2018). The evaluation of scholarship in academic promotion and tenure processes: Past, present, and future. F1000Research. https://doi.org/10.12688/f1000research.16493.1.

  • Serenko, A., & Bontis, N. (2011). What’s familiar is excellent: The impact of exposure effect on perceived journal quality. Journal of Informetrics, 5(1), 219–223.

  • Serenko, A., & Bontis, N. (2018). A critical evaluation of expert survey-based journal rankings: The role of personal research interests. Journal of the Association for Information Science and Technology, 69(5), 749–752.

  • Singleton, A. (2010). Why usage is useless. Learned Publishing, 23(3), 179–184.

  • Sugimoto, C. R., Work, S., Larivière, V., & Haustein, S. (2017). Scholarly use of social media and altmetrics: A review of the literature. Journal of the Association for Information Science and Technology, 68(9), 2037–2062.

  • Tahai, A., & Meyer, M. J. (1999). A revealed preference study of management journals’ direct influences. Strategic Management Journal, 20(3), 279–296.

  • Tu, C., & Worzala, E. (2010). The perceived quality of real estate journals: Does your affiliation matter? Property Management, 28(2), 104–121.

  • Walters, W. H. (2016a). Beyond use statistics: Recall, precision, and relevance in the assessment and management of academic libraries. Journal of Librarianship and Information Science, 48(4), 340–352.

  • Walters, W. H. (2016b). Evaluating online resources for college and university libraries: Assessing value and cost based on academic needs. Serials Review, 42(1), 10–17.

  • Walters, W. H. (2017a). Citation-based journal rankings: Key questions, metrics, and data sources. IEEE Access, 5, 22036–22053.

  • Walters, W. H. (2017b). Do subjective journal ratings represent whole journals or typical articles? Unweighted or weighted citation impact? Journal of Informetrics, 11(3), 730–744.

  • Walters, W. H. (2017c). Key questions in the development and use of survey-based journal rankings. Journal of Academic Librarianship, 43(4), 305–311.

  • Waltman, L. (2016). A review of the literature on citation impact indicators. Journal of Informetrics, 10(2), 365–391.

  • Wolff, C., Rod, A. B., & Schonfeld, R. C. (2016a). UK survey of academics 2015. New York: Ithaka S + R. https://doi.org/10.18665/sr.282736.

  • Wolff, C., Rod, A. B., & Schonfeld, R. C. (2016b). US faculty survey 2015. New York: Ithaka S + R. https://doi.org/10.18665/sr.277685.

Author information

Corresponding author

Correspondence to William H. Walters.

About this article

Cite this article

Walters, W.H., Markgren, S. Do faculty journal selections correspond to objective indicators of citation impact? Results for 20 academic departments at Manhattan College. Scientometrics 118, 321–337 (2019). https://doi.org/10.1007/s11192-018-2972-7
