DOI: 10.1145/3027063.3053335
research-article

Comparing the Reliability of Amazon Mechanical Turk and Survey Monkey to Traditional Market Research Surveys

Published: 06 May 2017

ABSTRACT

In the product design process, it is often desirable to quickly obtain information about current user behaviors on topics that cannot be answered through existing data or instrumentation. Perhaps we would like to understand the use of products we do not have access to, or perhaps the action we would like to know about (such as using a coupon) takes place outside of any system that can be instrumented. Traditionally, large market research surveys would be conducted to answer these questions, but designers often need answers much faster. We present a study investigating the reliability of fast survey platforms such as Amazon Mechanical Turk and Survey Monkey, as compared to larger market research studies, for technology behavior research and show that results can be obtained in hours, at much lower cost, with accuracy within 10% of traditional larger surveys. This demonstrates that we can rely more heavily on these platforms in the product design process, enabling much faster planning iterations that are informed by actual usage data.
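The headline comparison is expressed in percentage points, so a worked illustration may help. Below is a minimal Python sketch of how platform estimates could be checked against a traditional market-research baseline using a 10-point tolerance. All question labels and percentages are hypothetical placeholders for illustration only; they are not the study's data, and the 10-point check is simply one way to operationalize the comparison.

# Hypothetical illustration: comparing item-level estimates from fast survey
# platforms against a traditional market-research baseline. The questions and
# percentages below are made up for demonstration; they are not study data.

baseline = {  # traditional market research survey (% answering "yes")
    "used_coupon_last_month": 42.0,
    "owns_smart_speaker": 18.0,
}

platform_results = {
    "Mechanical Turk": {
        "used_coupon_last_month": 47.5,
        "owns_smart_speaker": 15.0,
    },
    "Survey Monkey": {
        "used_coupon_last_month": 39.0,
        "owns_smart_speaker": 24.5,
    },
}

THRESHOLD = 10.0  # percentage-point tolerance, chosen here for illustration

for platform, answers in platform_results.items():
    for question, estimate in answers.items():
        # absolute difference, in percentage points, from the baseline estimate
        diff = abs(estimate - baseline[question])
        status = "within" if diff <= THRESHOLD else "outside"
        print(f"{platform:15s} {question:25s} "
              f"diff = {diff:4.1f} pts ({status} {THRESHOLD:.0f}-pt tolerance)")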


Published in

CHI EA '17: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems
May 2017
3954 pages
ISBN: 9781450346566
DOI: 10.1145/3027063

      Copyright © 2017 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

CHI EA '17 Paper Acceptance Rate: 1,000 of 5,000 submissions, 20%
Overall Acceptance Rate: 6,164 of 23,696 submissions, 26%
