
An evaluation framework for software crowdsourcing

  • Research Article
  • Frontiers of Computer Science

Abstract

Software crowdsourcing has recently emerged as an area of software engineering, yet few papers have presented a systematic analysis of its practices. This paper first presents an evaluation framework for software crowdsourcing projects with respect to software quality, costs, diversity of solutions, and the competitive nature of crowdsourcing. Specifically, competitions are evaluated by the min-max relationship from game theory among participants, in which one party tries to minimize an objective function while the other party tries to maximize the same objective function. The paper then defines a game theory model to analyze the primary factors in these min-max competition rules that affect the nature of participation as well as software quality. Finally, using the proposed evaluation framework, this paper examines two crowdsourcing processes, Harvard-TopCoder and AppStori. The framework reveals sharp contrasts between the two processes, as participants exhibit markedly different behaviors when engaging with them.
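As a minimal illustrative sketch of that min-max relationship (our reading of the abstract, with f, X, and Y as assumed notation rather than the authors' own symbols), the competition can be written as

\min_{x \in X} \; \max_{y \in Y} \; f(x, y)

where f(x, y) is the shared objective function (for instance, the number of defects remaining in a submitted solution), X is the strategy set of the party trying to drive it down (e.g., the solution developers), and Y is the strategy set of the adversarial party trying to drive it up (e.g., challengers or reviewers).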



Author information

Corresponding author

Correspondence to Wenjun Wu.

Additional information

Wenjun Wu is a professor in the School of Computer Science and Engineering at Beihang University. He was previously a research scientist at the Computation Institute (CI) at the University of Chicago and Argonne National Laboratory from 2006 to 2010, and a technical staff member and post-doctoral research associate at the Community Grids Lab at Indiana University from 2002 to 2006. He received his BS, MS, and PhD degrees in computer science from Beihang University in 1994, 1997, and 2001, respectively. He has published over 50 peer-reviewed papers in journals and conferences. His research interests include crowdsourcing, green computing, cloud computing, eScience and cyberinfrastructure, and multimedia collaboration.

Wei-Tek Tsai is currently a professor in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, USA. He received his PhD and MS in computer science from the University of California at Berkeley, and his SB in computer science and engineering from MIT, Cambridge. He has published over 300 papers in various journals and conferences, received two Best Paper awards, and been awarded several guest professorships. His work has been supported by the US Department of Defense, the Department of Education, the National Science Foundation, the EU, and industrial companies such as Intel, Fujitsu, and Guidant. In the last ten years, he has focused on service-oriented computing and SaaS, working on various aspects of software engineering including requirements, architecture, testing, and maintenance.

Professor Wei Li is a member of the Chinese Academy of Sciences. He received his PhD in computer science from the University of Edinburgh and his BS in mathematics from Peking University. He is the director of the State Key Laboratory of Software Development Environment and vice-chair of the Chinese Institute of Electronics. He was president of Beihang University from 2002 to 2009. His research interests focus on theoretical computer science, including open logic for scientific discovery, formal semantics, revision calculus, and program debugging. He has published over 100 papers and one book.


About this article

Cite this article

Wu, W., Tsai, WT. & Li, W. An evaluation framework for software crowdsourcing. Front. Comput. Sci. 7, 694–709 (2013). https://doi.org/10.1007/s11704-013-2320-2

