Abstract
Quality assessment is a critical component of crowdsourcing-based software engineering (CBSE), because software products are developed by a crowd with unknown or varied skills and motivations. In this paper, we propose a novel metric, the project score, to measure the performance of projects and the quality of products in competition-based software crowdsourcing development (CBSCD). To the best of our knowledge, this is the first work to address the quality of CBSE from the perspective of projects rather than individual contests. In particular, we develop a hierarchical quality evaluation framework for CBSCD projects and propose two metric aggregation models for project scores. The first is a modified Squale model that can locate software modules of poor quality; the second is a clustering-based aggregation model that takes the differing impacts of development phases into account. To test the effectiveness of the proposed metrics, we conduct an empirical study on TopCoder, a well-known CBSCD platform. The results show that the proposed project score is a strong indicator of the performance and product quality of CBSCD projects. We also find that the clustering-based aggregation model outperforms the Squale model, improving the performance evaluation criterion of aggregation models by an additional 29%. Our approach to quality assessment for CBSCD projects could help software managers assess the overall quality of a crowdsourced project consisting of programming contests.
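To make the aggregation idea concrete, the following minimal Python sketch illustrates a Squale-style aggregation of per-module quality marks, i.e., the general technique the first model builds on, not the authors' modified version described in the paper. The mark range [0, 3], the penalty factor LAMBDA = 9, and the helper names squale_aggregate and worst_modules are illustrative assumptions only.

import math

# A minimal sketch of Squale-style "hard" aggregation (not the paper's exact
# modified model): low marks are amplified before averaging, so a few poor
# modules pull the project-level mark down and are easy to locate.
# Marks are assumed to lie in [0, 3]; LAMBDA is a hypothetical penalty factor.
LAMBDA = 9.0

def squale_aggregate(marks):
    """Aggregate per-module quality marks, weighting bad marks heavily."""
    if not marks:
        raise ValueError("no marks to aggregate")
    # Transform each mark so that low values dominate the mean ...
    transformed = [LAMBDA ** -m for m in marks]
    # ... then map the mean back to the original mark scale.
    return -math.log(sum(transformed) / len(transformed), LAMBDA)

def worst_modules(marks_by_module, threshold=1.0):
    """Flag modules whose individual mark falls below a quality threshold."""
    return sorted((name, m) for name, m in marks_by_module.items() if m < threshold)

if __name__ == "__main__":
    modules = {"auth": 2.8, "billing": 2.6, "reporting": 0.4}
    print(f"project-level mark: {squale_aggregate(list(modules.values())):.2f}")
    print("modules needing attention:", worst_modules(modules))

Because low marks are amplified before averaging, a single weak module (here "reporting") drags the aggregated mark well below the arithmetic mean, which is what makes poor-quality modules easy to spot in this style of aggregation.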
Acknowledgements
This work was supported by grants from the State Key Laboratory of Software Development Environment at BUAA, China (SKLSDE-2018ZX-03) and the National Natural Science Foundation of China (NSFC, Grant No. 61532004).
Author information
Zhenghui Hu received a BE degree in computer science from Zhejiang University of Technology, China in 2011. She is currently a PhD candidate in the School of Computer Science and Engineering at Beihang University, China. Her research interests include software engineering and crowdsourcing.
Wenjun Wu is a professor in the School of Computer Science and Engineering at Beihang University, China. He was previously a research scientist at the Computation Institute (CI) of the University of Chicago and Argonne National Laboratory, USA from 2006 to 2010, and a member of technical staff and post-doctoral research associate at the Community Grids Lab at Indiana University, USA from 2002 to 2006. He received his BS, MS and PhD degrees in computer science from Beihang University, China in 1994, 1997 and 2001, respectively. He has published over fifty peer-reviewed papers in journals and conference proceedings. His research interests include eScience, cyberinfrastructure, and multimedia collaboration.
Jie Luo received his PhD degree from Beihang University, China. He is currently a lecturer in the School of Computer Science and Engineering at Beihang University. His research interests include mathematical logic, knowledge reasoning, algorithms, crowd intelligence, and formal methods.
Xin Wang received his BE degree in computer science and technology from Anhui University, China in 2013. From 2014 to 2017, he studied in the School of Computer Science and Engineering at Beihang University, China, where he received an MS degree in 2017. His research interests include computer software and theory, and crowdsourcing-based software engineering.
Boshu Li received his BE degree in computer science from Beihang University, China in 2015. From 2015 to 2018, he pursued an MS degree in software engineering in the School of Computer Science and Engineering at Beihang University. During his postgraduate studies, his research focused on software crowdsourcing and TopCoder.
Cite this article
Hu, Z., Wu, W., Luo, J. et al. Quality assessment in competition-based software crowdsourcing. Front. Comput. Sci. 14, 146207 (2020). https://doi.org/10.1007/s11704-019-8418-4