
Passerby Crowdsourcing: Workers' Behavior and Data Quality Management

Published: 27 December 2018

Abstract

Worker recruitment is an important problem in crowdsourcing, and many proposals place equipment in physical spaces to recruit workers. An essential challenge of this approach is keeping people attracted: those who perform tasks at first gradually lose interest and stop accessing the equipment. This study takes a different approach to the worker recruitment problem. We dive into people's personal spaces by projecting task images on the floor, allowing passersby to effortlessly access tasks while walking. The problem then changes from how to keep people engaged to how to manage data quality, because many passersby unconsciously or intentionally walk through the task screen on the floor without doing the task, which produces unintended results. We explore a machine-learning approach that selects only the intended results and thereby manages the data quality. The system assesses workers' intention from their behavior. We identify features for classifiers based on our observations of passersby, and we then conduct extensive evaluations with real data. The results show that the features are effective in practice and that the classifiers improve the data quality.
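The data-quality step described in the abstract, classifying each floor interaction as an intended answer or an accidental walk-through from the worker's behavior, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual model: the two features (dwell time, walking speed) and the nearest-centroid classifier are assumptions chosen for brevity; the paper identifies its own feature set from observations of passersby.

```python
import math

# Hypothetical training data: one (feature_vector, label) pair per floor
# interaction. Features are (dwell_time_s, walk_speed_mps); both the feature
# choice and the values are illustrative assumptions, not the paper's data.
TRAIN = [
    ((3.2, 0.2), "intended"),    # paused over the projected task
    ((2.5, 0.4), "intended"),
    ((0.4, 1.5), "unintended"),  # walked straight through the task screen
    ((0.6, 1.3), "unintended"),
]

def train_centroids(samples):
    """Average the feature vectors of each class (nearest-centroid training)."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: tuple(v / counts[lbl] for v in acc) for lbl, acc in sums.items()}

def classify(feats, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lbl: math.dist(feats, centroids[lbl]))

centroids = train_centroids(TRAIN)
print(classify((2.8, 0.3), centroids))  # a pausing passerby -> "intended"
print(classify((0.5, 1.6), centroids))  # a walk-through     -> "unintended"
```

In practice one would use richer behavioral features and a stronger classifier trained on labeled interaction traces; only results classified as intended would be kept, which is how filtering improves the aggregate data quality.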

Supplementary Material

iwamoto (iwamoto.zip)
Supplemental movie, appendix, image, and software files for "Passerby Crowdsourcing: Workers' Behavior and Data Quality Management"




Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 2, Issue 4
December 2018
1169 pages
EISSN:2474-9567
DOI:10.1145/3301777
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 27 December 2018
Accepted: 01 October 2018
Revised: 01 August 2018
Received: 01 February 2018
Published in IMWUT Volume 2, Issue 4


Author Tags

  1. Crowdsourcing
  2. Long-term practical use
  3. Worker recruitment

Qualifiers

  • Research-article
  • Research
  • Refereed limited


Article Metrics

  • Downloads (last 12 months): 5
  • Downloads (last 6 weeks): 0
Reflects downloads up to 03 Mar 2025.


Cited By

  • COME: Learning to Coordinate Crowdsourcing and Regular Couriers for Offline Delivery During Online Mega Sale Days. 2023 IEEE 39th International Conference on Data Engineering (ICDE), 3126-3139. DOI: 10.1109/ICDE55515.2023.00240 (Apr 2023).
  • CrowdAct. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 1, 1-32. DOI: 10.1145/3432222 (30 Mar 2021).
  • Understanding User Behavior in Car Sharing Services Through The Lens of Mobility. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 4, 1-30. DOI: 10.1145/3432200 (18 Dec 2020).
  • Effects of Cognitive Consistency in Microtask Design with only Auditory Information. Universal Access in Human-Computer Interaction. Applications and Practice, 466-476. DOI: 10.1007/978-3-030-49108-6_33 (10 Jul 2020).
  • A Microtask Approach to Identifying Incomprehension for Facilitating Peer Learning. 2019 IEEE International Conference on Big Data (Big Data), 4624-4627. DOI: 10.1109/BigData47090.2019.9005446 (Dec 2019).
