
Crowdsourcing in HCI Research

Abstract

By recruiting large numbers of people online to perform small tasks, researchers can obtain assessments that would be difficult to gather otherwise, at reasonable cost and speed. These tasks include assessing quality, reading characters that OCR software cannot decipher, labeling photographs, and even answering survey questions. Care must be taken when using crowdsourcing, however, because some workers “game” the system or simply misunderstand the task. Fortunately, there are techniques to minimize or detect questionable data.


Notes

  1. http://www.mturk.com/.

  2. http://www.readwriteweb.com/archives/linkedin_updates_cardmunch_iphone_app.php.

  3. http://www.wikipedia.org/.

  4. http://beamartian.jpl.nasa.gov/.

  5. http://www.google.com/insights/consumersurveys/.

  6. http://www.odesk.com/.

  7. http://www.crowdflower.com/.

  8. http://turkopticon.differenceengines.com/.

  9. http://forum.mturk.com/.

  10. If for some reason the researcher wishes to expire a HIT early, this can be done from both the web interface and the API. Likewise, HITs can be extended using either method.
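The expire-early and extend operations described in footnote 10 can be sketched as follows. This is an illustrative sketch, not the chapter's own code: it assumes the modern boto3 MTurk client (which postdates the chapter), where both operations are performed with `update_expiration_for_hit`, using a past timestamp to expire a HIT immediately and a future one to extend it.

```python
# Sketch (assumption: boto3 MTurk client) of expiring or extending a HIT
# via the API, as described in footnote 10.
from datetime import datetime, timedelta, timezone

def expire_hit_early(client, hit_id):
    """Expire a HIT immediately by moving its expiration into the past."""
    client.update_expiration_for_hit(
        HITId=hit_id,
        ExpireAt=datetime(2015, 1, 1, tzinfo=timezone.utc),  # any past time expires it
    )

def extend_hit(client, hit_id, extra_hours=24):
    """Extend a HIT by pushing its expiration further into the future."""
    client.update_expiration_for_hit(
        HITId=hit_id,
        ExpireAt=datetime.now(timezone.utc) + timedelta(hours=extra_hours),
    )
```

Here `client` would be a `boto3.client('mturk')` instance; the same calls work against the requester sandbox endpoint for testing.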


Author information

Correspondence to Serge Egelman.


Copyright information

© 2014 Springer Science+Business Media New York

About this chapter


Egelman, S., Chi, E.H., Dow, S. (2014). Crowdsourcing in HCI Research. In: Olson, J., Kellogg, W. (eds) Ways of Knowing in HCI. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-0378-8_11

  • Print ISBN: 978-1-4939-0377-1

  • Online ISBN: 978-1-4939-0378-8
