
Relative Superiority Coefficient of papers: A new dimension for institutional research performance in different fields


Abstract

Cross-field comparison of citation-based measures of scientific achievement or research quality is severely hindered by differences in the stage of development and in citation habits across disciplines and fields. Building on the same principles as the Relative Citation Rate (RCR) and Relative Subfield Citedness (RW), a new dimension of research quality, the Relative Superiority Coefficient (SC_n), is introduced. It can clearly indicate the relative research level of research groups at multiple levels within their respective fields, using consistent criteria of research quality. As an application model, comparisons of SC_n within and across 22 broad fields among 5 countries are presented. Hierarchical cluster analysis and one-way ANOVA were applied and processed with the statistical program SPSS. All original data were taken from Essential Science Indicators (ESI), 1996–2006.
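The formula for SC_n is not reproduced on this page, but the analysis pipeline the abstract describes (field-normalized indicators in the spirit of RCR/RW, followed by hierarchical clustering and one-way ANOVA) can be illustrated with a short sketch. The sketch below is illustrative only: the country names, field names, and citation figures are invented placeholders, and the relative indicator shown is a generic RW-style ratio, not the author's SC_n.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import f_oneway

# Hypothetical citations-per-paper figures: rows = countries, columns = fields.
countries = ["Country A", "Country B", "Country C", "Country D", "Country E"]
fields = ["Chemistry", "Physics", "Clinical Medicine"]
cpp = np.array([
    [12.1,  9.8, 15.3],
    [10.4,  8.1, 13.0],
    [ 7.9,  6.5, 11.2],
    [ 5.3,  4.9,  8.8],
    [ 4.1,  3.7,  7.5],
])
world_cpp = np.array([8.0, 6.0, 11.0])  # world average per field (invented)

# RW-style field normalization: citations per paper divided by the world
# average for the same field. Values > 1 mean above-world-average impact,
# which makes the numbers comparable across fields with different
# citation habits.
relative = cpp / world_cpp

# Hierarchical clustering of countries by their normalized profiles
# (here: average linkage, cut into 2 clusters), analogous to the
# hierarchical cluster analysis mentioned in the abstract.
Z = linkage(relative, method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(countries, clusters)))

# One-way ANOVA: test whether the normalized indicators differ
# significantly across the five countries (each row is one group).
print(f_oneway(*relative))
```

The key design point is the normalization step: dividing each country's citations per paper by the world average of the same field removes most of the cross-field variation in citation density, so a single threshold (a ratio of 1) has the same meaning in every field.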



Author information

Correspondence to Xiaojun Hu.


About this article

Cite this article

Hu, X. Relative Superiority Coefficient of papers: A new dimension for institutional research performance in different fields. Scientometrics 72, 389–402 (2007). https://doi.org/10.1007/s11192-006-1733-1
