Mapping the evaluation results between quantitative metrics and meta-synthesis from experts’ judgements: evidence from the Supply Chain Management and Logistics journals ranking

Abstract

Meta-syntheses from experts’ judgements and quantitative metrics are the two main forms of evaluation, but both have limitations. This paper constructs a framework for mapping evaluation results between quantitative metrics and experts’ judgements so that these limitations can be mitigated. In this way, the weights of the metrics in the quantitative evaluation are obtained objectively, and the validity of the results can be verified. The weighted average percentile (WAP) method is employed to aggregate different experts’ judgements into standardized WAP scores. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is used to map quantitative results onto experts’ judgements, with the WAP scores equated to the final closeness coefficients generated by TOPSIS. Because these closeness coefficients depend on the weights of the quantitative metrics, the mapping procedure becomes an optimization problem, and a genetic algorithm is introduced to search for the best weights. An academic journal ranking in the field of Supply Chain Management and Logistics (SCML) is used to test the validity of the mapping results. Four prominent ranking lists, from the Association of Business Schools, the Australian Business Deans Council, the German Academic Association for Business Research, and the Comité National de la Recherche Scientifique, were selected to represent different experts’ judgements. Twelve indices, including the Impact Factor (IF), Eigenfactor Score (ES), H-index, Scimago Journal Ranking (SJR), and Source Normalized Impact per Paper (SNIP), were chosen for the quantitative evaluation. The results show that the mapping results possess high validity: the relative error between the experts’ judgements and the quantitative metrics is 43.4%, and the corresponding best weights are determined at the same time. Several interesting findings follow. First, the H-index, Impact Per Publication (IPP), and SNIP play dominant roles in SCML journal quality evaluation. Second, all the metrics are positively correlated, although the correlation varies among metrics. For example, ES and the Normalized Eigenfactor (NE) are perfectly positively correlated with each other, yet they have the lowest correlation with the other metrics. Metrics such as IF, the IF without journal self-cites (IFWJ), the 5-year IF, and IPP are highly correlated. Third, some highly correlated metrics may perform differently in quality evaluation, such as IPP and the 5-year IF. Therefore, when mapping between quantitative metrics and experts’ judgements, academic fields should be treated distinctly.
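For intuition, the following is a minimal sketch of the mapping procedure in Python, assuming a hypothetical journals-by-metrics matrix X and a vector wap of standardized WAP scores; the tiny genetic algorithm is illustrative only and is not the authors' exact configuration.

```python
# A minimal sketch of the mapping idea, assuming hypothetical inputs:
# X is a journals-by-metrics decision matrix (all benefit-type metrics)
# and wap holds the standardized WAP scores (assumed positive) aggregated
# from the expert ranking lists.
import numpy as np

def topsis_closeness(X, w):
    """TOPSIS closeness coefficients for a weight vector w (summing to 1)."""
    R = X / np.linalg.norm(X, axis=0)            # vector-normalize each metric
    V = R * w                                    # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal solutions
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

def fitness(w, X, wap):
    """Negative mean relative error between closeness coefficients and WAP."""
    c = topsis_closeness(X, w / w.sum())
    return -np.mean(np.abs(c - wap) / wap)

def ga_search(X, wap, pop=50, gens=200, seed=0):
    """Evolve metric weights so the TOPSIS output matches the WAP scores."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    P = rng.random((pop, m))                     # initial random weight vectors
    for _ in range(gens):
        f = np.array([fitness(w, X, wap) for w in P])
        elite = P[np.argsort(f)[-pop // 2:]]     # keep the fitter half
        pa, pb = (elite[rng.integers(len(elite), size=pop)] for _ in range(2))
        mask = rng.random((pop, m)) < 0.5        # uniform crossover ...
        P = np.where(mask, pa, pb) + rng.normal(0, 0.05, (pop, m))  # ... plus mutation
        P = np.clip(P, 1e-6, None)               # weights stay positive
    best = max(P, key=lambda w: fitness(w, X, wap))
    return best / best.sum()
```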


Notes

  1. https://georgewbush-whitehouse.archives.gov/omb/performance/.

  2. http://wokinfo.com/essays/journal-selection-process/.

  3. http://www.elsevier.com/online-tools/scopus/content-overview.

  4. https://www.scopus.com/sources.

  5. Scopus (2018). Journal metrics. SCImago. Retrieved Sep. 2018 from http://www.journalindicators.com/indicators.

References

  • Atta S, Mahapatra PRS, Mukhopadhyay A (2017) Solving maximal covering location problem using genetic algorithm with local refinement. Soft Comput 22(12):3891–3906

  • Bao C, Wu D, Li J (2019) A knowledge-based risk measure from the fuzzy multi-criteria decision-making perspective. IEEE Trans Fuzzy Syst. https://doi.org/10.1109/TFUZZ.2018.2838064

  • Behzadian M, Otaghsara SK, Yazdani M, Ignatius J (2012) A state-of the-art survey of TOPSIS applications. Expert Syst Appl 39(17):13051–13069

  • Bergstrom C (2007) Eigenfactor: measuring the value and prestige of scholarly journals. Coll Res Libr 68(5):314–316

  • Chen SJ, Hwang CL (1992) Fuzzy multiple attribute decision making: methods and applications. Springer, Berlin

  • Ennas G, Biggio B, Di Guardo M (2015) Data-driven journal meta-ranking in business and management. Scientometrics 105(3):1911–1929

  • Franceschet M (2010) A comparison of bibliometric indicators for computer science scholars and journals on web of science and google scholar. Scientometrics 83(1):243–258

  • Fahrenkrog G, Polt W, Rojo J, Tübke A, Zinöcker K (2002) RTD evaluation tool box: assessing the socio-economic impact of RTD policies. IPTS Technical Report Series, Seville

  • Falagas ME, Kouranos VD, Arencibia-Jorge R, Karageorgopoulos DE (2008) Comparison of scimago journal rank indicator with journal impact factor. FASEB J 22(8):2623–2628

  • Garfield E, Sher IH (1963) New factors in the evaluation of scientific literature through citation indexing. Am Documentation 14(3):195–201

  • Georgi C, Darkow IL, Kotzab H (2013) Foundations of logistics and supply chain research: a bibliometric analysis of four international journals. Int J Logist Res Appl 16(6):522–533

  • Goldberg DE (2006) Genetic algorithms. Pearson Education India, Delhi

  • González-Pereira B, Guerrero-Bote VP, Moya-Anegón F (2010) A new approach to the metric of journals’ scientific prestige: the SJR indicator. J Informet 4(3):379–391

  • Grimm C, Knemeyer M, Polyviou M, Ren X (2015) Supply chain management research in management journals. Int J Phys Distrib Logist Manag 45(5):404–458

  • Hanna JB, Whiteing AE, Menachof DA, Gibson BJ (2009) An analysis of the value of supply chain management periodicals. Int J Phys Distrib Logist Manag 39(2):145–165

  • Hirsch J (2005) An index to quantify an individual’s scientific research output. Proc Natl Acad Sci USA 102(46):16569–16572

  • Hodge DR, Lacasse JR (2011) Evaluating journal quality: is the h-index a better measure than impact factors? Res Soc Work Pract 21(2):222–230

  • Hwang C, Yoon K (1981) Multiple attribute decision making: methods and applications. Springer, New York

  • Kinnear KE (1994) Advances in genetic programming: 1. MIT Press, Cambridge

  • Li J, Wu D, Li J, Li M (2017) A comparison of 17 article-level bibliometric indicators of institutional research productivity: evidence from the information management literature of China. Inf Process Manag 53(5):1156–1170

  • Li J, Yao X, Sun X, Wu D (2018) Determining the fuzzy measures in multiple criteria decision aiding from the tolerance perspective. Eur J Oper Res 264(2):428–439

  • Mahmood K (2017) Correlation between perception-based journal rankings and the journal impact factor (JIF): a systematic review and meta-analysis. Ser Rev 43(2):120–129

  • Mckinnon AC (2017) Starry-eyed ii: the logistics journal ranking debate revisited. Int J Phys Distrib Logist Manag 47(6):431–446

  • Mingers J, Leydesdorff L (2015) A review of theory and practice in scientometrics. Eur J Oper Res 246(1):1–19

  • Mingers J, Yang LY (2017) Evaluating journal quality: a review of journal citation indicators, and ranking in business and management. Eur J Oper Res 257(1):323–337

  • Moed HF (2015) Comprehensive indicator comparisons intelligible to non-experts: the case of two SNIP versions. Scientometrics 106(1):1–15

  • Murthy YVS, Koolagudi SG (2018) Classification of vocal and non-vocal segments in audio clips using genetic algorithm based feature selection (GAFS). Expert Syst Appl 106:77–91

  • Peters K, Daniels K, Hodgkinson GP, Haslam SA (2014) Experts’ judgments of management journal quality: an identity concerns model. J Manag 40(7):1785–1812

  • Quarshie AM, Salmi A, Leuschner R (2016) Sustainability and corporate social responsibility in supply chains: the state of research in supply chain management and business ethics journals. J Purch Supply Manag 22(2):82–97

  • Rosenthal EC, Weiss HJ (2017) A data envelopment analysis approach for ranking journals. Omega-Int J Manag Sci 70:135–147

  • Seglen PO (1992) The skewness of science. J Am Soc Inf Sci 43(9):628–638

  • Straub D, Anderson C (2010) Journal quality and citations: common metrics and considerations about their use. MIS Q 34(1):iii–xii

  • Templeton GF, Lewis BR (2015) Fairness in the institutional valuation of business journals. MIS Q 39(3):523–539

  • Upeksha P, Manjula W (2018) Relationship between journal-ranking metrics for a multidisciplinary set of journals. Portal Libr Acad 18(1):35–58

  • Watson K, Montabon F (2014) A ranking of supply chain management journals based on departmental lists. Int J Prod Res 52(14):4364–4377

  • West J, Bergstrom T, Bergstrom CT (2010) Big macs and eigenfactor scores: don’t let correlation coefficients fool you. J Assoc Inf Sci Technol 61(9):1800–1807

  • Wu D, Li M, Zhu X, Song H, Li J (2015) Ranking the research productivity of business and management institutions in Asia-Pacific region: empirical research in leading ABS journals. Scientometrics 105(2):1253–1272

  • Wu D, Li J, Lu X, Li J (2018) Journal editorship index for assessing the scholarly impact of academic institutions: an empirical analysis in the field of economics. J Informetrics 12(2):448–460

  • Yoon KP, Hwang CL (1995) Multiple attribute decision making. Sage Publication, Thousand Oaks, CA

  • Yu DJ, Wang WR, Zhang S, Zhang WY, Liu RY (2017) A multiple-link, mutually reinforced journal-ranking model to measure the prestige of journals. Scientometrics 111(1):521–642

  • Zhu X, Li J, Wu D, Wang H, Liang C (2013) Balancing accuracy, complexity and interpretability in consumer credit decision making: a c-topsis classification approach. Knowl-Based Syst 52(6):258–267

Acknowledgements

This research has been supported by grants from the National Natural Science Foundation of China (71874180, 71425002, 71840024), the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (QYZDB-SSW-SYS036).

Author information

Corresponding author

Correspondence to Dengsheng Wu.

Ethics declarations

Conflicts of interest

Neither the entire manuscript nor any part of its content has been published or has been accepted elsewhere. It has not been submitted to any other journal. The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

Additional information

Communicated by X. Li.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Part 1 Introduction to 12 quantitative metrics

In this paper, 12 journal quality metrics are selected for the quantitative evaluation: the Impact Factor (IF), the Impact Factor without Journal Self Cites (IFWJ), the Eigenfactor Score (ES), the Article Influence Score (AIS), the Normalized Eigenfactor (NE), the Five-year Impact Factor (5-year IF), the Immediacy Index (Imm), the H-index, CiteScore, the Scimago Journal Ranking (SJR), the Source Normalized Impact per Paper (SNIP), and the Impact Per Publication (IPP). Detailed information on each metric is given below.

1.2 Impact Factor (IF)

IF, originally proposed by Garfield and Sher (1963), is published annually in the Journal Citation Reports (JCR) by Clarivate Analytics. It is used to assess the relative significance of a journal: the 2-year IF is the average number of citations received in a given year by the articles a journal published in the previous 2 years. It was the first, and remains the best-known, journal metric (Mingers and Yang 2017). However, it has well-documented limitations: a journal’s impact factor can be inflated through self-citation, its values depend on field-specific citation behaviour, and the results are biased because they are based on a small set of journals (Ennas et al. 2015; Upeksha and Manjula 2018). Despite these limitations, IF still plays a dominant role in journal evaluation, for two reasons. First, its calculation is intrinsically easy to understand. Second, there is path dependence: since its introduction, researchers and editors have learnt to use it, and strong alternatives are hard to find, which keeps it the standard metric (Ennas et al. 2015).
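As a concrete statement of the 2-year window (the standard JCR definition; the notation is ours):

\[
\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},
\]

where \(C_y(t)\) is the number of citations received in year \(y\) by items the journal published in year \(t\), and \(N_t\) is the number of citable items the journal published in year \(t\).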

1.3 Impact Factor without Journal Self Cites (IFWJ)

IFWJ is a modified IF that counts citations in the same way but excludes journal self-citations.

1.4 Five-year impact factor (5-year IF)

The 5-year IF of a journal in a given year is the average number of citations received that year by the papers it published in the previous 5 years. It is computed like IF, simply widening the citation window in the formula above from 2 to 5 years.

1.5 Immediacy Index (Imm)

Imm is defined as the average number of citations received by an article in a journal during its first year of publication, and it assesses the potential and immediate impact of a paper (Ennas et al. 2015). As a consequence, papers published early in the year are more likely to reach high values than papers published later in the year (Ennas et al. 2015). Frequently issued journals therefore enjoy a clear advantage over less frequently issued ones.
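In the notation introduced above, the Immediacy Index for year \(y\) is simply

\[
\mathrm{Imm}_y = \frac{C_y(y)}{N_y}.
\]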

1.6 Eigenfactor Score (ES)

ES is a prestige metric used to assess the overall importance of a journal (Bergstrom 2007). Its calculation is based on the PageRank algorithm used by the Google search engine to rank websites (Rosenthal and Weiss 2017), and it can be interpreted as the percentage of time that library users spend with the journal. The value of ES depends on the number of citations received by the journal in the given year for the articles published during the past 5 years (Upeksha and Manjula 2018). The metric counts citations while excluding self-citations (Mingers and Yang 2017). Five years of papers are considered in order to make full use of the structure of the entire citation network (West et al. 2010; Franceschet 2010; Hodge and Lacasse 2011; Mingers and Leydesdorff 2015). Moreover, the metric is adjusted for citation differences across research fields (Upeksha and Manjula 2018). However, ES is prone to be influenced by the total number of papers a journal publishes, because it is based on the total number of citations (Mingers and Yang 2017).
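The following is a deliberately simplified sketch of the PageRank-style idea behind ES, not the official Eigenfactor algorithm: it iterates citation "flow" over a hypothetical journal-level citation matrix until it stabilizes, omitting Eigenfactor's article-count teleport vector and other refinements.

```python
import numpy as np

def eigenfactor_like(C, damping=0.85, iters=100):
    """C[i, j] = citations from journal i to journal j over a 5-year window."""
    C = C.astype(float).copy()
    np.fill_diagonal(C, 0)                    # ES ignores journal self-citations
    row_sums = np.maximum(C.sum(axis=1, keepdims=True), 1)
    M = C / row_sums                          # share of each journal's outgoing citations
    n = len(C)
    pi = np.full(n, 1 / n)                    # start from uniform influence
    for _ in range(iters):                    # PageRank-style power iteration
        pi = damping * (pi @ M) + (1 - damping) / n
    flow = pi @ M                             # influence-weighted citation flow into each journal
    return 100 * flow / flow.sum()            # scores expressed as percentages
```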

1.7 Article Influence Score (AIS)

AIS is the ratio of ES to the proportion of papers published in a particular journal over 5 years (Mingers and Yang 2017). It is normalized so that a value of 1.00 indicates that the journal has the average level of influence in the entire JCR database (Mingers and Leydesdorff 2015); it is related to ES and comparable to a 5-year JIF.
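In JCR's formulation (notation ours), this amounts to

\[
\mathrm{AIS}_j = \frac{0.01 \times \mathrm{ES}_j}{A_j / A_{\mathrm{all}}},
\]

where \(A_j\) is the number of articles journal \(j\) published over the 5-year window and \(A_{\mathrm{all}}\) is the corresponding total across all JCR journals.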

1.8 Normalized Eigenfactor (NE)

NE is an indicator first published in the 2015 edition of the JCR. It is calculated by normalizing the ES metric. It aims to provide greater clarity among metrics, and its values indicate how influential a journal is relative to other journals in the same areas (Yu et al. 2017).
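The normalization rescales ES so that the average journal in the database scores 1:

\[
\mathrm{NE}_j = \frac{\mathrm{ES}_j}{\frac{1}{N}\sum_{k=1}^{N}\mathrm{ES}_k},
\]

so a journal with \(\mathrm{NE} = 2\) is twice as influential as the average journal.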

1.9 H-index

The H-index is a relatively new metric proposed by Hirsch (2005), but it has already enjoyed widespread attention and acceptance. It can be applied at different levels: individual researchers, journals, or institutions. It is defined as follows: “a scientist has index h if h of his or her Np papers have at least h citations each and the other (Np − h) papers have ≤ h citations each” (Hirsch 2005). In other words, h is the largest number such that the top h papers of a set each have at least h citations (Mingers and Yang 2017). The H-index thus captures several facets of the evaluated individual, journal, or institution: impact, citations, and productivity.
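A minimal sketch of the computation, given a hypothetical list of per-paper citation counts:

```python
def h_index(citations):
    """H-index of a set of papers given their citation counts."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```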

1.10 CiteScore

CiteScore is computed as the average number of citations per document: the ratio of the citations received in a year by the documents a journal published in the three previous years to the number of documents published in those same three years.
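Under the CiteScore definition in force at the time, and in the notation used above:

\[
\mathrm{CiteScore}_y = \frac{C_y(y-1) + C_y(y-2) + C_y(y-3)}{N_{y-1} + N_{y-2} + N_{y-3}}.
\]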

1.11 Scimago Journal Ranking (SJR)

SJR is developed from an idea similar to ES. It is defined as the average number of weighted citations received in the selected year by the publications of the selected journal during the previous 3 years (SJR 2007). It relies on an iterative PageRank-like algorithm in which all publications of the journals are taken into account. Starting by assigning an identical amount of prestige to each journal, the SJR algorithm then redistributes prestige in a process where journals transfer their accrued prestige to one another through citations (Ennas et al. 2015). Citations are therefore weighted by their source, and journals cited by the most prestigious journals are assigned higher scores.
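A deliberately simplified form of such a prestige recurrence (the actual SJR indicator adds size corrections and a 3-year citation window) is

\[
P_i^{(t+1)} = \frac{1-d}{N} + d \sum_{j} \frac{c_{ji}}{C_j}\, P_j^{(t)},
\]

where \(c_{ji}\) is the number of citations from journal \(j\) to journal \(i\), \(C_j\) is journal \(j\)'s total outgoing citations, \(N\) is the number of journals, and \(d\) is a damping constant.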

1.12 Impact Per Publication (IPP)

IPP is defined as the ratio of the citations received in a year by the publications of the three previous years to the number of publications in those same years. Because only peer-reviewed publications enter both the numerator and the denominator of the formula, the evaluation of journals is fair (Scopus 2015). It is normalized by the number of publications but not by subject field.
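Structurally, IPP matches CiteScore but is restricted to peer-reviewed items (marked here with a superscript):

\[
\mathrm{IPP}_y = \frac{C^{\mathrm{pr}}_y(y-1) + C^{\mathrm{pr}}_y(y-2) + C^{\mathrm{pr}}_y(y-3)}{N^{\mathrm{pr}}_{y-1} + N^{\mathrm{pr}}_{y-2} + N^{\mathrm{pr}}_{y-3}}.
\]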

1.13 Source Normalized Impact per Paper (SNIP)

SNIP is defined as the ratio of a journal's citations per paper to the citation potential in its subject field (Scopus 2015). It is obtained by normalizing IPP by the number of citations in the corresponding subject field, and it can be interpreted as measuring contextual citation impact: citations are weighted according to the total number of citations in the subject field (Ennas et al. 2015).
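In simplified form (after Moed 2015), writing the field's database citation potential as DCP:

\[
\mathrm{SNIP} = \frac{\mathrm{IPP}}{\mathrm{DCP}},
\]

so a journal in a field with short reference lists (low DCP) is not penalized relative to one in a heavily citing field.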

1.14 Part 2 List of the 64 SCML journals

The 64 SCML journals used in the empirical analysis are listed in Table 5.

Table 5 List of all 64 SCML journals


Cite this article

Yuan, L., Li, J., Li, R. et al. Mapping the evaluation results between quantitative metrics and meta-synthesis from experts’ judgements: evidence from the Supply Chain Management and Logistics journals ranking. Soft Comput 24, 6227–6243 (2020). https://doi.org/10.1007/s00500-019-03837-3
