
Evaluation in the Crowd: An Introduction

  • Conference paper
  • In: Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10264)

Abstract

Human-centred empirical evaluations play important roles in the fields of human-computer interaction, visualisation, and graphics. The advent of crowdsourcing platforms, such as Amazon Mechanical Turk, has provided a revolutionary methodology for conducting human-centred experiments. Through such platforms, experiments can now collect data from hundreds, even thousands, of participants drawn from a diverse user community over a matter of weeks, greatly increasing the ease with which we can collect data as well as the power and generalisability of experimental results. However, such an experimental platform does not come without its problems: ensuring participant investment in the task, defining experimental controls, and understanding the ethics of deploying such experiments en masse. This book is intended as a primer for computer science researchers who intend to use crowdsourcing technology for human-centred experiments. It focuses on methodological considerations when using crowdsourcing platforms to run such experiments, particularly in the areas of visualisation and of quality of experience (QoE) for online video delivery.


Author information

Corresponding author

Correspondence to Daniel Archambault.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Archambault, D., Purchase, H.C., Hoßfeld, T. (2017). Evaluation in the Crowd: An Introduction. In: Archambault, D., Purchase, H., Hoßfeld, T. (eds) Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments. Lecture Notes in Computer Science, vol 10264. Springer, Cham. https://doi.org/10.1007/978-3-319-66435-4_1

  • DOI: https://doi.org/10.1007/978-3-319-66435-4_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-66434-7

  • Online ISBN: 978-3-319-66435-4

  • eBook Packages: Computer Science, Computer Science (R0)
