Evaluating scientists by citation and disruption of their representative works

Abstract

A well-designed method for evaluating scientists is vital for the scientific community, as it can be used to rank scientists in practical tasks such as hiring, funding allocation, and promotion. However, many evaluation methods are designed around citation counts, which measure scientists' scientific impact but not their capacity for innovation, itself a crucial characteristic of a scientist. In addition, when evaluating scientists, it has become increasingly common to focus only on their representative works rather than on all of their papers. Accordingly, we propose a hybrid method that combines scientific impact with innovation under the representative-works framework. Our results are validated on the American Physical Society (APS) journals dataset and on datasets of prestigious prize laureates. The results suggest that the correlation between citation and disruption is weak, which makes it reasonable to combine the two. Moreover, the analysis shows that evaluating scientists by their representative works is advantageous: the hybrid method identifies Nobel Prize laureates, as well as the laureates of several other prestigious prizes, with higher precision and better mean ranking than mainstream methods. This study offers policy makers an effective way to evaluate scientists along more comprehensive dimensions.
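The quantities named in the abstract are standard in the science-of-science literature, even though the paper's exact formulation sits behind the paywall. Below is a minimal Python sketch: the disruption index follows the standard definition of Funk and Owen-Smith (2017), as popularized by Wu, Wang and Evans (2019), D = (n_i − n_j)/(n_i + n_j + n_k), while the representative-work selection (top-cited papers) and the weighted combination of impact and disruption are illustrative assumptions, not the authors' method; networkx and all function names are ours.

```python
import networkx as nx

def disruption_index(G: nx.DiGraph, paper) -> float:
    """Standard disruption index D = (n_i - n_j) / (n_i + n_j + n_k).

    In the citation graph G, an edge u -> v means "u cites v".
    n_i: papers citing the focal paper but none of its references,
    n_j: papers citing both the focal paper and its references,
    n_k: papers citing its references but not the focal paper.
    D ranges from -1 (purely consolidating) to +1 (purely disruptive).
    (A full implementation would also restrict the counts to papers
    published after the focal paper.)
    """
    refs = set(G.successors(paper))        # works the focal paper cites
    citers = set(G.predecessors(paper))    # works citing the focal paper
    ref_citers = set()                     # works citing its references
    for r in refs:
        ref_citers.update(G.predecessors(r))
    ref_citers.discard(paper)              # exclude the focal paper itself

    n_i = len(citers - ref_citers)
    n_j = len(citers & ref_citers)
    n_k = len(ref_citers - citers)
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

def hybrid_score(G: nx.DiGraph, papers, n_rep=5, alpha=0.5) -> float:
    """Illustrative hybrid score over a scientist's representative works.

    Assumptions (not taken from the paper): representative works are the
    scientist's n_rep most-cited papers, and impact and disruption are
    combined as a weighted sum after rescaling citations to [0, 1].
    """
    rep = sorted(papers, key=G.in_degree, reverse=True)[:n_rep]
    if not rep:
        return 0.0
    max_c = max(G.in_degree(p) for p in rep) or 1
    impact = sum(G.in_degree(p) / max_c for p in rep) / len(rep)
    disrupt = sum(disruption_index(G, p) for p in rep) / len(rep)
    return alpha * impact + (1 - alpha) * disrupt
```

Because citation and disruption are only weakly correlated (per the abstract), the two terms in hybrid_score carry largely complementary information, which is what a combination of this kind exploits.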


Data availability

The APS data can be downloaded at https://journals.aps.org/datasets. The data on Nobel Prize laureates in Physics are available at https://www.nobelprize.org/prizes/physics/.

Code availability

The code for data analysis in this paper is available from the corresponding author on reasonable request.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 72274020) and the Swiss National Science Foundation (Grant No. 182498). Ruijie Wang acknowledges the support from the China Scholarship Council (CSC).

Author information

Corresponding author

Correspondence to An Zeng.

Ethics declarations

Conflict of interest

The authors have no conflict of interest to declare.

Supplementary Information

Electronic supplementary material 1 (PDF 221 kb)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Wang, R., Zhou, Y. & Zeng, A. Evaluating scientists by citation and disruption of their representative works. Scientometrics 128, 1689–1710 (2023). https://doi.org/10.1007/s11192-023-04631-7

