Productivity, performance, efficiency, impact—What do we measure anyway?: Some comments on the paper “A farewell to the MNCS and like size-independent indicators” by Abramo and D’Angelo

https://doi.org/10.1016/j.joi.2016.04.008

Introduction

Abramo and D’Angelo (2016) have published a critical study on size-independent citation indicators in which they address two important issues: first, the fallacy of interpreting (size-independent) citation indicators as measures of research performance, and second, the inappropriateness of such indices for ranking exercises. Taken strictly, with a focus on the indicators alone, both statements are correct, although Abramo and D’Angelo are not the first to articulate these concerns. The authors’ argumentation, which rests on a simple economic interpretation of research productivity and efficiency, leads them to suggest eliminating so-called size-independent citation measures and replacing them with indicators that also take input measures, in particular measures of research investment, into account. Finally, they advocate a paradigm shift towards the “true measurement of research efficiency” and formulate several recommendations, addressed to various actors and stakeholders, for the use of performance measures in this context. This endeavour is certainly guided by the best intentions; however, in the course of their argumentation and the development of their new strategy, the authors fall into the very trap they intended to dismantle. In what follows, we point to some problematic issues we have found in their study. First, we discuss the arguments put forward by the authors; next, we examine the examples they provide; and last, we comment on their recommendations.
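To make the distinction at the heart of this debate concrete, the two types of indicators can be sketched as follows. This is a generic illustration based on the commonly used definition of the MNCS and a schematic input-based productivity ratio; the symbols are ours and are not taken from either paper:

\[
\mathrm{MNCS} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i},
\qquad\qquad
\text{input-based efficiency} \;\propto\; \frac{1}{I}\sum_{i=1}^{n}\frac{c_i}{e_i},
\]

where \(c_i\) is the citation count of publication \(i\), \(e_i\) the expected citation rate for publications of the same field, year and document type, \(n\) the number of publications of the unit under assessment, and \(I\) a measure of research input (for instance, labour cost). Dividing by \(n\) makes the first indicator independent of output volume, whereas the second relates (normalised) output to the resources invested, which is the direction Abramo and D’Angelo argue for; their actual FSS indicator is defined in their own papers and is not reproduced here.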

Section snippets

Discussion

As already mentioned in the introduction, we basically agree that citation measures, in general, and normalised or size-independent citation indicators, in particular, cannot by themselves reflect the complexity of research performance. Nonetheless, publication and citation measures do – if properly applied – measure important aspects and dimensions of research productivity and impact in a statistically reliable manner, notably if these are based on appropriate mathematical models and …

Conclusions

To conclude, maybe the time is indeed ripe for a paradigm shift, and bibliometrics needs to be opened up to broader scopes and contexts; the application to the social sciences, arts and humanities and to the web, the search for new and alternative metrics, and the measurement of the societal impact of research are certainly signs of an upcoming new age in our field too. And this new age also requires new models and tools. But replacing one imperfect tool by another, even if guided by the best intentions, does not …


Cited by (25)

  • Comparison of research performance of Italian and Norwegian professors and universities

    2020, Journal of Informetrics
    Citation Excerpt:

    Waltman et al. (2016), while agreeing that “productivity is a key element of research performance”, warn that “scientific performance is a multidimensional concept and therefore different indicators are required for quantifying different dimensions.” Along the same line, Glänzel, Thijs, and Debackere (2016) recall the multidimensionality of ranking, Bornmann and Haunschild (2016) report that “the SCImago Institutions Ranking considers manifold sets of indicators”, and Sivertsen (2016), while disagreeing with the call for abandoning alternative indicators, welcomes the introduction of the FSS as an important step towards advancing the measurement of research productivity. For the most part, these are difficulties and apprehensions that Abramo and D’Angelo too can share.

  • Technological research in the EU is less efficient than the world average. EU research policy risks Europeans' future

    2018, Journal of Informetrics
    Citation Excerpt:

    However, this association implies neither a link with productivity—as in “Productivity is the quintessential indicator of efficiency in any production system” (Abramo & D’Angelo, 2014, p. 1129)—nor any input-output relationship. This relationship has been extensively discussed elsewhere (Abramo & D’Angelo, 2016; Glänzel, Thijs, & Debackere, 2016; Ruiz-Castillo, 2016; Waltman, van Eck, Visser, & Wouters, 2016), but here we use efficiency in the sense of intrinsic efficiency or breakthrough potential, i.e., independent of inputs. This definition could be seen as the capacity of a research system to produce revolutionary science with the minimum possible amount of normal science, using Kuhn’s terms (Kuhn, 1970).

  • Research assessment by percentile-based double rank analysis

    2018, Journal of Informetrics
    Citation Excerpt:

    In this case, the assessment can be analyzed in two contexts: the achievement of discoveries and scientific advancements, and the economic benefits as in applied research. However, the latter can neither be easily established nor is it the only target of basic research (Bornmann, 2012; Salter & Martin, 2001); even the method that should be applied to this economic analysis is under debate (Abramo & D’Angelo, 2014, 2016; Bornmann & Haunschild, 2016b; Glänzel, Thijs, & Debackere, 2016). Therefore, it seems that the best evaluation of basic research must be done by attending to its scientific achievements.

  • On evaluating the quality of a computer science/computer engineering conference

    2017, Journal of Informetrics
    Citation Excerpt:

    For example, Glänzel and Debackere (2009) have discussed the “multi-dimensionality” of ranking and the role of bibliometrics in university assessment and concluded that “the difficulty of quantification as well as the all too frequently experienced arbitrariness in defining composite indicators often results in inadequate representation and irreproducibility.” This implies that both essential information and the demand for an adequate depiction of complexity are lost whenever variables representing different dimensions are mathematically combined in a non-decomposable way (Glänzel, Thijs, & Debackere, 2016); essentially, a composite metric is a “black box”, much more so than each of the individual metrics, which at times makes the outcomes it produces harder to understand and interpret. On the other hand, many researchers are in favor of composite indicators.
