Synonyms
Human factors; Standardization in crowdsourcing
Definition
Human factors relate to the behavior and characteristics of human workers. In the context of crowdsourcing, human factors model the unpredictability and inconsistency in worker behavior: workers' volatility, their asynchronous arrival and departure, their expertise or skills, their incentives (monetary or otherwise) for participation, and even their collaborative synergy. For example, there is uncertainty regarding worker availability: workers can enter the crowdsourcing platform when they want, remain connected for as long as they like, and may or may not accept a task. There is also uncertainty regarding a worker's ability to complete a task, which depends on the worker's expertise and may or may not be known at the time a task becomes available. Similarly, there is uncertainty regarding the incentive (more precisely, the wage) that workers expect for completing a task: the wage may vary from worker to worker, even among workers with the...
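As a minimal illustration (not drawn from the entry itself; the attribute names availability, skill, and min_wage are assumptions made for this sketch), the following Python fragment models each worker as a small set of uncertain human-factor quantities and simulates a single task outcome under that uncertainty:

import random
from dataclasses import dataclass

@dataclass
class Worker:
    availability: float  # probability the worker is online and picks up a task
    skill: float         # probability the worker completes a task correctly
    min_wage: float      # smallest payment the worker will accept

def simulate_task(worker: Worker, offered_wage: float) -> str:
    # Availability: the worker may never arrive at the platform.
    if random.random() > worker.availability:
        return "unavailable"
    # Incentive: the worker declines if the offered wage is below expectations.
    if offered_wage < worker.min_wage:
        return "declined"
    # Expertise: even an available, adequately paid worker can err.
    return "correct" if random.random() < worker.skill else "incorrect"

# Two workers with different human-factor profiles receive the same task.
workers = [Worker(0.9, 0.95, 0.05), Worker(0.5, 0.70, 0.02)]
for w in workers:
    print(simulate_task(w, offered_wage=0.04))

Richer models would replace these fixed probabilities with distributions estimated from a worker's task history, which is the focus of several of the readings below.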
Recommended Reading
Davidson SB, Khanna S, Milo T, Roy S. Using the crowd for top-k and group-by queries. In: Proceedings of the 16th International Conference on Database Theory; 2013. p. 225–36.
Deutch D, Greenshpan O, Kostenko B, Milo T. Declarative platform for data sourcing games. In: Proceedings of the 21st International World Wide Web Conference; 2012. p. 779–88.
Fleishman EA. Toward a taxonomy of human performance. Am Psychol. 1975; 30(12):1127.
Hassan U, Curry E. A capability requirements approach for predicting worker performance in crowdsourcing. In: Proceedings of the 9th International Conference on Collaborative Computing: Networking, Applications and Worksharing; 2013. p. 429–37.
Ipeirotis PG, Gabrilovich E. Quizz: targeted crowdsourcing with a billion (potential) users. In: Proceedings of the 23rd International World Wide Web Conference; 2014.
Joglekar M, Garcia-Molina H, Parameswaran A. Evaluating the crowd with confidence. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2013. p. 686–94.
Karger DR, Oh S, Shah D. Budget-optimal crowdsourcing using low-rank matrix approximations. In: Proceedings of the 49th Annual Allerton Conference on Communication, Control, and Computing; 2011. p. 284–91.
Marcus A, Wu E, Karger D, Madden S, Miller R. Human-powered sorts and joins. Proc VLDB Endowment. 2011; 5(1):13–24.
Parameswaran AG, Garcia-Molina H, Park H, Polyzotis N, Ramesh A, Widom J. CrowdScreen: algorithms for filtering data with humans. In: Proceedings of the ACM SIGMOD International Conference on Management of Data; 2012. p. 361–72.
Ramesh A, Parameswaran A, Garcia-Molina H, Polyzotis N. Identifying reliable workers swiftly. 2012.
Raykar VC, Yu S. Ranking annotators for crowdsourced labeling tasks. In: Advances in Neural Information Processing Systems 24: Proceedings of the 25th Annual Conference on Neural Information Processing Systems; 2011. p. 1809–17.
Raykar VC, Yu S, Zhao LH, Jerebko A, Florin C, Valadez GH, Bogoni L, Moy L. Supervised learning from multiple experts: whom to trust when everyone lies a bit. In: Proceedings of the 26th Annual International Conference on Machine Learning; 2009. p. 889–96.
Roy SB, Lykourentzou I, Thirumuruganathan S, Amer-Yahia S, Das G. Crowds, not drones: Modeling human factors in interactive crowdsourcing. In: Proceedings of the 1st VLDB Workshop on Databases and Crowdsourcing; 2013. p. 39–42.
Roy SB, Lykourentzou I, Thirumuruganathan S, Amer-Yahia S, Das G. Optimization in knowledge-intensive crowdsourcing. CoRR, abs/1401.1302, 2014.
Slivkins A, Vaughan JW. Online decision making in crowdsourcing markets: Theoretical challenges (position paper). CoRR, abs/1308.1746, 2013.
Sorokin A, Forsyth D. Utility data annotation with Amazon Mechanical Turk. In: Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2008. p. 1–8.
Tan CH, Agichtein E, Ipeirotis P, Gabrilovich E. Trust, but verify: predicting contribution quality for knowledge base construction and curation. In: Proceedings of the 7th ACM International Conference on Web Search and Data Mining; 2014.
Welinder P, Branson S, Perona P, Belongie SJ. The multidimensional wisdom of crowds. In: Advances in Neural Information Processing Systems 23: Proceedings of the 24th Annual Conference on Neural Information Processing Systems; 2010. p. 2424–32.
Welinder P, Perona P. Online crowdsourcing: rating annotators and obtaining cost-effective labels. In: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops; 2010. p. 25–32.
Whitehill J, Wu T-F, Bergsma J, Movellan JR, Ruvolo PL. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In: Advances in Neural Information Processing Systems 22: Proceedings of the 23rd Annual Conference on Neural Information Processing Systems; 2009. p. 2035–43.
© 2018 Springer Science+Business Media, LLC, part of Springer Nature
Cite this entry
Amer-Yahia, S., Roy, S.B., Das, G., Lykourentzou, I., Rahman, H., Thirumuruganathan, S. (2018). Human Factors Modeling in Crowdsourcing. In: Liu, L., Özsu, M.T. (eds) Encyclopedia of Database Systems. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-8265-9_80659
Print ISBN: 978-1-4614-8266-6
Online ISBN: 978-1-4614-8265-9