Towards a classification model for tasks in crowdsourcing

Published: 22 March 2017

Abstract

Crowdsourcing is an increasingly popular approach for harnessing the power of the crowd to perform tasks that machines cannot solve sufficiently well. Text annotation and image labeling are two examples of crowdsourcing tasks that are difficult to automate and often require human knowledge. However, the quality of the outcomes obtained from crowdsourcing remains problematic. To obtain high-quality results, different quality control mechanisms should be applied to evaluate different types of tasks. In previous work, we presented a task ontology-based model that can be used to identify which quality mechanism is most appropriate for a given task type. In this paper, we complement that work by providing a categorization of crowdsourcing tasks; that is, we define the most common task types in the crowdsourcing context. We then show how machine learning algorithms can be used to automatically infer the type of a crowdsourced task.
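The abstract's closing claim, that machine learning can infer a crowdsourced task's type from its description, can be illustrated with a small text-classification sketch. The paper's actual features, training data, and algorithm are not reproduced on this page, so everything below (the example descriptions, the task-type labels, and the `classify` helper) is a hypothetical, minimal nearest-neighbor baseline over bag-of-words vectors, not the authors' method.

```python
import math
from collections import Counter


def vectorize(text: str) -> Counter:
    """Bag-of-words term frequencies for a short task description."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


# Hypothetical labeled examples -- NOT the paper's training data.
TRAIN = [
    ("draw a box around every car in the photo", "image labeling"),
    ("tag each image with the objects it contains", "image labeling"),
    ("mark the sentiment of this tweet as positive or negative", "text annotation"),
    ("label each sentence with its part of speech tags", "text annotation"),
    ("transcribe the audio recording into english text", "transcription"),
    ("type the text shown in the scanned receipt", "transcription"),
]


def classify(description: str) -> str:
    """Return the task type of the most similar labeled example."""
    vec = vectorize(description)
    _, label = max(TRAIN, key=lambda ex: cosine(vec, vectorize(ex[0])))
    return label


print(classify("tag the objects visible in each picture"))  # -> image labeling
```

A practical classifier would of course train on a real labeled corpus of task descriptions with stronger features (e.g. TF-IDF n-grams), but the pipeline shape is the same: vectorize the task text, compare it against labeled examples, and emit a task type.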


Cited By

View all
  • (2023) "Designing for Hybrid Intelligence: A Taxonomy and Survey of Crowd-Machine Interaction," Applied Sciences, 13(4):2198. DOI: 10.3390/app13042198. Online publication date: 8-Feb-2023.
  • (2023) "A landscape of participatory platform architectures: Ideas, decisions, and mapping," Information Polity, 28(3):341-358. DOI: 10.3233/IP-211520. Online publication date: 5-Sep-2023.
  • (2022) "Analysis on Potential Use of Crowdsourcing in Different Domain Using Metasynthesis," Emerging Technologies in Data Mining and Information Security, pp. 747-756. DOI: 10.1007/978-981-19-4193-1_73. Online publication date: 29-Sep-2022.

    Published In

    ICC '17: Proceedings of the Second International Conference on Internet of things, Data and Cloud Computing
    March 2017
    1349 pages
    ISBN:9781450347747
    DOI:10.1145/3018896

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Amazon MTurk
    2. classification
    3. crowdsourcing
    4. quality control
    5. task

    Qualifiers

    • Research-article

    Conference

    ICC '17

    Acceptance Rates

    ICC '17 Paper Acceptance Rate 213 of 590 submissions, 36%;
    Overall Acceptance Rate 213 of 590 submissions, 36%

