
The use of multiple indicators in the assessment of basic research


Abstract

This paper argues that evaluations of basic research are best carried out using a range of indicators. After setting out the reasons why assessments of government-funded basic research are increasingly needed, we examine the multi-dimensional nature of basic research. This is followed by a conceptual analysis of what the different indicators of basic research actually measure. Having discussed the limitations of various indicators, we describe the method of converging partial indicators used in several SPRU evaluations. Yet although most of those who now use science indicators would agree that a combination of indicators is desirable, analysis of a sample of Scientometrics articles suggests that in practice many continue to use just one or two indicators. The paper also reports the results of a survey of academic researchers. They, too, are strongly in favour of research evaluations being based on multiple indicators combined with peer review. The paper ends with a discussion as to why multiple indicators are not used more frequently.
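
The method of converging partial indicators lends itself to a simple computational illustration. The sketch below (in Python; the groups, indicator names and the all-indicators-must-agree rule are invented for illustration, and this is not the procedure used in the SPRU evaluations themselves) ranks a set of research groups on several partial indicators and draws a conclusion about a pair of groups only where every indicator points the same way:

# Illustrative sketch only: a toy version of combining "converging
# partial indicators". All data and the convergence rule are hypothetical.

groups = {
    "Group A": {"papers": 120, "citations": 940, "peer_rating": 4.2},
    "Group B": {"papers": 95, "citations": 610, "peer_rating": 3.6},
    "Group C": {"papers": 140, "citations": 700, "peer_rating": 3.9},
}

indicators = ["papers", "citations", "peer_rating"]

def rank_on(metric):
    # Rank the groups on a single partial indicator (1 = best).
    ordered = sorted(groups, key=lambda g: groups[g][metric], reverse=True)
    return {g: pos + 1 for pos, g in enumerate(ordered)}

rank_table = {m: rank_on(m) for m in indicators}

names = sorted(groups)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # One boolean per indicator: does a rank ahead of b on it?
        verdicts = {rank_table[m][a] < rank_table[m][b] for m in indicators}
        if len(verdicts) == 1:
            winner, loser = (a, b) if verdicts.pop() else (b, a)
            print(f"{winner} ranks ahead of {loser} on every indicator")
        else:
            print(f"{a} vs {b}: indicators diverge, so no conclusion is drawn")

On these toy data, two of the three pairwise comparisons converge and are reported; the third is left inconclusive, which is precisely the cautious behaviour the method is intended to enforce.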


Notes and references

  1. E.g. J. H. Westbrook, Identifying significant research, Science, 132 (1960) 1229–1234; N. Wade, Citation analysis: a new tool for science administrators, Science, 188 (1975) 429–432.

  2. J. Irvine, B. R. Martin, What direction for basic scientific research?, Chapter 5 in M. Gibbons, P. Gummett, B. M. Udgaonkar (eds.), Science and Technology Policy in the 1980s and Beyond, London, Longman, 1984, pp. 67–98.

  3. B. R. Martin, J. Irvine, Assessing basic research: some partial indicators of scientific progress in radio astronomy, Research Policy, 12 (1983) 61–90.

  4. E.g. Anon, Is your lab well cited?, Nature, 227 (1970) 219; Anon, More games with numbers, Nature, 228 (1970) 698–699.

  5. E.g. D. Lindsey, Production and citation measures in the sociology of science: the problem of multiple authorship, Social Studies of Science, 10 (1980) 145–162.

  6. The sixth, CERN, was left to a subsequent study two years later. The results were published in a series of three articles: B. R. Martin, J. Irvine, CERN: past performance and future prospects. I. CERN's position in world high-energy physics, Research Policy, 13 (1984) 183–210; J. Irvine, B. R. Martin, CERN: past performance and future prospects. II. The scientific performance of the CERN accelerators, Research Policy, 13 (1984) 247–284; and B. R. Martin, J. Irvine, CERN: past performance and future prospects. III. CERN and the future of world high-energy physics, Research Policy, 13 (1984) 311–342.

  7. J. Irvine, B. R. Martin, A methodology for assessing the scientific performance of research groups, Scientia Yugoslavica, 6 (1980) 83–95.

  8. Martin, Irvine, op. cit., note 3. This article appeared in April 1983 even though it had been accepted for publication in September 1980.

  9. J. Irvine, B. R. Martin, P. A. Isard, Investing in the Future: An International Comparison of Government Funding of Academic and Related Research, Aldershot and Brookfield, Vermont, Edward Elgar, 1990.

  10. Irvine, Martin, op. cit., note 2.

  11. H. F. Hansen, B. H. Jørgensen, Science Policy & Research Management: Can Research Indicators Be Used?, Institute of Political Science, University of Copenhagen, Copenhagen, 1995, p. 1.

  12. R. N. Kostoff, The Handbook of Research Impact Assessment (Fifth Edition), DTIC Report Number ADA296021, 1995.

  13. For example, the evaluation of government-funded applied research in Norway employed a combination of peer review and ‘customer review’; see J. Irvine, B. R. Martin, M. Schwarz, K. Pavitt, R. Rothwell, Government Support for Industrial Research in Norway: A SPRU Report, Oslo, Universitetsforlaget, Norwegian Official Publication NOU 30B, 1981.

  14. See Fig. 1 on p. 64 in Martin, Irvine, op. cit., note 3.

  15. A good example here would be popular books by scientists such as Stephen Hawking.

  16. Ibid., note 3, p. 64.

  17. Kostoff, op. cit., note 12, p. 8.

  18. Martin, Irvine, op. cit., note 3, p. 75.

  19. R. Miller, The influence of primary task on R&D laboratory evaluation: a comparative bibliometric analysis, R&D Management, 22 (1992) 3–20.

  20. For an example of how the educational and technological outputs from basic research may be assessed, see the references cited in note 21.

  21. J. Irvine, B. R. Martin, The Economic Effects of Big Science: The Case of Radio Astronomy, Proceedings of the International Colloquium on the Economic Effects of Space and Other Advanced Technologies, Strasbourg, 28–30 April 1980, Paris, European Space Agency, ESA SP-151, 1980; and B. R. Martin, J. Irvine, Spin-Off from Basic Science: The Case of Radio Astronomy, Physics in Technology, 12 (1981) 204–212.

  22. Over 150 scientists were interviewed in the ‘Big Science Project’.

  23. This section draws heavily on Martin, Irvine, op. cit., note 3, pp. 61–90.

  24. See, for example, the discussion in ibid., p. 67.

  25. T. Luukkonen, The cognitive and social foundation of citation studies: why we still lack a theory of citation, submitted to Science, Technology and Human Values (1995).

  26. W. R. Shadish, D. Tolliver, M. Gray, S. K. Sen Gupta, Author judgements about works they cite: three studies from psychology journals, Social Studies of Science, 25 (1995) 447–498 (quote on p. 481).

  27. See also the related distinction between ‘quality’ and ‘relevance’ in Hansen, Jørgensen, op. cit., note 11, p. 3.

  28. Martin, Irvine, op. cit., note 3, p. 70.

  29. Ibid.

  30. Ibid.

  31. Ibid.

  32. Examples of this in the field of experimental high-energy physics can be found in Martin, Irvine, op. cit., note 6.

  33. T. S. Kuhn, The Structure of Scientific Revolutions, Chicago, University of Chicago Press, 1970.

  34. Martin, Irvine, op. cit., note 3; idem, op. cit., note 6.

  35. Martin, Irvine, op. cit., note 3.

  36. J. Irvine, B. R. Martin, Assessing basic research: The case of the Isaac Newton Telescope, Social Studies of Science, 13 (1983) 49–86.

  37. B. R. Martin, J. Irvine, Internal criteria for scientific choice: an evaluation of the research performance of electron high-energy physics accelerators, Minerva, XIX (1981) 408–432.

  38. Martin, Irvine, op. cit., note 6.

  39. L. M. Baird, C. Oppenheim, Do citations matter?, Journal of Information Science, 20 (1994) 2–15 (quote on p. 13).

  40. Kostoff, op. cit., note 12, p. 37.

  41. Ibid., p. 118.

  42. A. H. Rubenstein, E. Geisler, Evaluating the outputs and impacts of R&D/innovation, International Journal of Technology Management, 6 (1991).

  43. Not all scientometric analysts are guilty of this. For example, the ISI analysts who periodically publish lists of leading research institutes in Science Watch normally use three indicators: papers, citations and citations per paper (a short worked example of these follows at the end of these notes).

  44. Full details of the study and the results can be found in B. R. Martin, J. E. F. Skea, Academic Research Performance Indicators: An Assessment of the Possibilities, Brighton, SPRU, 1992.

  45. See Table 12 in ibid.

  46. Kostoff, op. cit., note 12, p. 8.

  47. Hansen, Jørgensen, op. cit., note 11, p. 5.

  48. Martin, Skea, op. cit., note 44, p. 75.

  49. Ibid., p. 75.

  50. J. P. de Greve, A. Frijdal, Evaluation of scientific research: profile analysis, a mixed method, Higher Education Management, 1 (1989) 83–90.

  51. Kostoff, op. cit., note 12, p. 9.
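
As promised in note 43, here is a minimal worked example of the three Science Watch-style indicators, again in Python and with invented figures (not taken from ISI data):

# Toy data: citation counts, one entry per paper from a single institute.
citation_counts = [12, 0, 45, 3, 8]

papers = len(citation_counts)        # indicator 1: number of papers
citations = sum(citation_counts)     # indicator 2: total citations
per_paper = citations / papers       # indicator 3: citations per paper

print(papers, citations, per_paper)  # -> 5 68 13.6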


Cite this article

Martin, B.R. The use of multiple indicators in the assessment of basic research. Scientometrics 36, 343–362 (1996). https://doi.org/10.1007/BF02129599
