Abstract
This paper presents a meta-analysis of the publications from all 18 previous editions of WSCAD, aiming to understand how performance results are validated and reported. The meta-analysis extracts from these papers terms (keywords) belonging to three categories: statistics, metrics, and tests. Of the 426 papers analyzed, 93% refer to at least one of the terms considered, indicating a concern with reporting results so that a paper is deemed relevant for this conference. Nevertheless, the analysis shows that only 3% of the papers apply reliable statistical tests to validate their results. This paper describes the meta-analysis carried out and proposes a direction for promoting the adoption of a guideline to improve the reporting of results at this conference and at others on related subjects.
This work was carried out with the support of the National Program of Academic Cooperation of CAPES/Brazil.
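The abstract describes the screening step only at a high level. The following minimal Python sketch illustrates one way such a keyword tally over a paper corpus could be implemented; the three category names come from the abstract, but the term lists, the `categorize` helper, and the toy corpus are illustrative assumptions, not the authors' actual keyword sets or tooling.

```python
import re
from collections import defaultdict

# Hypothetical term lists for the three categories named in the abstract;
# the actual keyword sets are defined in the paper and not reproduced here.
CATEGORIES = {
    "statistics": ["mean", "median", "standard deviation", "variance", "confidence interval"],
    "metrics": ["speedup", "throughput", "latency", "efficiency", "execution time"],
    "tests": ["t-test", "anova", "wilcoxon", "kruskal-wallis", "chi-square"],
}


def categorize(papers):
    """Count which papers mention at least one term from each category.

    `papers` maps a paper identifier to its full text.
    Returns the per-category sets of paper ids and the set of papers
    mentioning any term at all.
    """
    hits = defaultdict(set)
    for paper_id, text in papers.items():
        lowered = text.lower()
        for category, terms in CATEGORIES.items():
            if any(re.search(r"\b" + re.escape(term) + r"\b", lowered) for term in terms):
                hits[category].add(paper_id)
    any_term = set().union(*hits.values()) if hits else set()
    return hits, any_term


if __name__ == "__main__":
    # Toy corpus standing in for the 426 WSCAD papers.
    corpus = {"paper-01": "We report mean speedup and validate it with a t-test."}
    per_category, mentioning_any = categorize(corpus)
    total = len(corpus)
    print(f"{100 * len(mentioning_any) / total:.0f}% of papers mention at least one term")
    for category, ids in per_category.items():
        print(f"{category}: {100 * len(ids) / total:.0f}%")
```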
Notes
- 3. Preferred Reporting Items for Systematic Reviews and Meta-Analyses.
- 4. Strengthening the Reporting of Observational Studies in Epidemiology.
- 5. Consolidated Standards of Reporting Trials.
- 6. Transparency and Openness Promotion.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Osorio, A., Dias, M., Cavalheiro, G.G.H. (2020). Tangible Assets to Improve Research Quality: A Meta Analysis Case Study. In: Bianchini, C., Osthoff, C., Souza, P., Ferreira, R. (eds) High Performance Computing Systems. WSCAD 2018. Communications in Computer and Information Science, vol 1171. Springer, Cham. https://doi.org/10.1007/978-3-030-41050-6_8
DOI: https://doi.org/10.1007/978-3-030-41050-6_8
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-41049-0
Online ISBN: 978-3-030-41050-6