DOI: 10.1145/3230654.3230657

A Winners-Take-All Incentive Mechanism for Crowd-Powered Systems

Published: 18 June 2018

Abstract

This paper studies incentive mechanisms for crowd-powered systems, including applications such as the collection of personal data for big-data analytics and crowdsourcing. In big-data analytics using personal data, an individual may control the quality of the reported data via a privacy-preserving mechanism that randomizes the answer. In crowdsourcing, the quality of the reported answer depends on the amount of effort spent by a worker or a team. In these applications, incentive mechanisms are critical for eliciting data/answers with a target quality. This paper focuses on the following two fundamental questions: what is the minimum payment required to incentivize an individual to submit data/an answer with quality level ε, and what incentive mechanisms can achieve the minimum payment?
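To make the first setting concrete, the following is a minimal sketch of how an individual might control reported-data quality through a randomized, privacy-preserving report. It assumes a simple randomized-response style perturbation; the function name and the mapping from a quality level to a reporting probability are illustrative assumptions, not the paper's exact model.

```python
import random

def randomized_report(true_answer, answer_space, quality):
    """Report the true answer with probability `quality`; otherwise report
    a uniformly random answer from `answer_space`.
    Illustrative assumption: the paper's quality level epsilon need not map
    to a reporting probability in exactly this way.
    """
    if random.random() < quality:
        return true_answer
    return random.choice(answer_space)

# Example: a worker whose true answer is "A", over the answer space {"A", "B", "C"}.
print(randomized_report("A", ["A", "B", "C"], quality=0.8))
```

Lowering `quality` degrades the reported data (stronger privacy, less useful report), which is why a payment is needed to elicit a target quality level.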
Let εi denote the quality of the data/answer reported by individual i. In this paper, we first derive a lower bound on the minimum payment required to guarantee quality level εi. Inspired by the lower bound, we propose an incentive mechanism named Winners-Take-All (WINTALL). WINTALL first decides a winning answer based on the reported data, the individuals' cost functions, and a prior distribution; it then pays the individuals whose reported data match the winning answer. Under some assumptions, we show that the expected payment of WINTALL matches the lower bound. In the application of private discrete distribution estimation, we show that WINTALL simply rewards the individuals whose reported answers match the most popular reported answer (the prior distribution is not needed in this case).
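The private discrete distribution estimation special case described above admits a very short payment rule: pay everyone whose report matches the plurality report. The sketch below illustrates that rule only; the per-winner `reward` amount and the tie-breaking behavior are assumptions for illustration, not values specified in the abstract.

```python
from collections import Counter

def wintall_payments(reports, reward=1.0):
    """Pay `reward` to every individual whose report matches the most
    popular (plurality) answer among all reports; everyone else gets zero.
    Sketch of the discrete-distribution-estimation special case; `reward`
    is an assumed per-winner payment, and ties among equally popular
    answers are broken arbitrarily by Counter.most_common.
    """
    winning_answer, _ = Counter(reports).most_common(1)[0]
    return [reward if r == winning_answer else 0.0 for r in reports]

# Example: "A" is the plurality answer, so reporters 0 and 2 are paid.
print(wintall_payments(["A", "B", "A"]))  # [1.0, 0.0, 1.0]
```

In the general mechanism, the winning answer would instead be computed from the reported data together with the cost functions and a prior distribution, as stated in the abstract.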


Cited By

  • (2020) Crowd Teaching with Imperfect Labels. Proceedings of The Web Conference 2020, 110-121. DOI: 10.1145/3366423.3380099. Online publication date: 20 April 2020.
  • (2019) Multi-task Crowdsourcing via an Optimization Framework. ACM Transactions on Knowledge Discovery from Data 13(3), 1-26. DOI: 10.1145/3310227. Online publication date: 29 May 2019.
  • (2019) Optimizing the Wisdom of the Crowd. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3231-3232. DOI: 10.1145/3292500.3332277. Online publication date: 25 July 2019.

Published In

NetEcon '18: Proceedings of the 13th Workshop on Economics of Networks, Systems and Computation
June 2018
35 pages
ISBN:9781450359160
DOI:10.1145/3230654
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 June 2018


Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SIGMETRICS '18

Acceptance Rates

NetEcon '18 paper acceptance rate: 10 of 18 submissions, 56%
Overall acceptance rate: 10 of 18 submissions, 56%
