DOI: 10.1145/3127005.3127012
research-article

Context-Centric Pricing: Early Pricing Models for Software Crowdsourcing Tasks

Published: 08 November 2017

ABSTRACT

In software crowdsourcing, task price is one of the most important incentives for attracting broad worker participation and contribution. Underestimating or overestimating a task's price may lead to task starvation or resource inefficiency. Nevertheless, few studies have addressed pricing support in software crowdsourcing. In this study, we propose a Context-Centric Pricing approach that supports software crowdsourcing pricing based on the limited information available at the early planning phase, i.e., textual task requirements. In the proposed approach, the global models draw on a list of 6 pricing factors and employ different natural language processing techniques for prediction; in addition, local models can be derived with respect to a more relevant context, i.e., a set of similar tasks identified through topic modeling, and 7 predictive models are evaluated. The proposed approach is evaluated on 450 software tasks extracted from TopCoder, the largest software crowdsourcing platform. The results show that: 1) the proposed models can be used at the early crowdsourcing planning phase, when information on traditional metrics is not available; 2) the best model achieves 65% accuracy; 3) the local model achieves a 27% improvement over the global model. The proposed work can stimulate future research into crowdsourcing pricing estimation and inform ideas for crowdsourcing decision-makers.
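
The core idea above, topic modeling to identify a "local" context of similar tasks and then per-context predictive models alongside a single global model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the task texts, price bands, topic count, and the choice of scikit-learn's LDA and naive Bayes are all assumptions made for demonstration.

```python
# Sketch: local vs. global price-band prediction via topic modeling.
# All data below is hypothetical; scikit-learn's LDA and naive Bayes
# stand in for the paper's unspecified model choices.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB

tasks = [
    "implement rest api for payment service",
    "build rest api endpoint for order history",
    "fix login crash bug in mobile app",
    "fix null pointer bug on android app",
    "design database schema for analytics",
    "optimize sql queries for analytics dashboard",
]
# Hypothetical discretized price bands for each task requirement text.
prices = ["high", "high", "low", "low", "medium", "medium"]

vec = CountVectorizer()
X = vec.fit_transform(tasks)

# Global model: a single classifier trained on every task.
global_model = MultinomialNB().fit(X, prices)

# Local context: assign each task to its dominant LDA topic, then train
# one classifier per topic on that topic's tasks only.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_of = lda.fit_transform(X).argmax(axis=1)

local_models = {}
for t in np.unique(topic_of):
    idx = np.flatnonzero(topic_of == t)
    local_models[t] = MultinomialNB().fit(X[idx], [prices[i] for i in idx])

# Price a new task: route it through its topic's local model, falling
# back to the global model if that topic had no training tasks.
new = vec.transform(["add rest api for user profiles"])
topic = int(lda.transform(new).argmax(axis=1)[0])
model = local_models.get(topic, global_model)
prediction = model.predict(new)[0]
print(prediction)
```

The fallback to the global model when a topic has no training tasks is a defensive design choice of this sketch, not something the abstract specifies.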


Published in

PROMISE: Proceedings of the 13th International Conference on Predictive Models and Data Analytics in Software Engineering
November 2017, 120 pages
ISBN: 9781450353052
DOI: 10.1145/3127005

          Copyright © 2017 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



          Qualifiers

          • research-article
          • Research
          • Refereed limited

          Acceptance Rates

PROMISE Paper Acceptance Rate: 12 of 25 submissions, 48%
Overall Acceptance Rate: 64 of 125 submissions, 51%
