
What Prize Is Right? How to Learn the Optimal Structure for Crowdsourcing Contests

  • Conference paper
PRICAI 2019: Trends in Artificial Intelligence (PRICAI 2019)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11670)


Abstract

In crowdsourcing, one effective method for encouraging participants to perform tasks is to run contests where participants compete against each other for rewards. However, there are numerous ways to implement such contests in specific projects. They can vary in their structure (e.g., performance evaluation and the number of prizes) and parameters (e.g., the maximum number of participants and the amount of prize money). Additionally, with a given budget and a time limit, choosing incentives (i.e., contest structures with specific parameter values) that maximise the overall utility is not trivial, as their respective effectiveness in a specific project is usually unknown a priori. Thus, in this paper, we propose a novel algorithm, BOIS (Bayesian-optimisation-based incentive selection), to learn the optimal structure and tune its parameters effectively. In detail, the learning and tuning problems are solved simultaneously by combining online learning with Bayesian optimisation. The results of our extensive simulations show that the performance of our algorithm is up to 85% of the optimal and up to 63% better than state-of-the-art benchmarks.
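The abstract only outlines how BOIS works at a high level. As a rough, hypothetical illustration of the general idea (not the authors' implementation), the sketch below combines an online, budget-aware choice among a handful of contest structures with Bayesian optimisation of each structure's continuous parameter: every structure keeps a Gaussian-process model of observed utility, a GP-UCB rule proposes the next parameter value to try, and structures are selected by an optimistic utility-per-cost index until the budget runs out. The structure names, costs, reward model, and the selection index are all assumptions made for this example.

```python
# Hypothetical sketch of Bayesian-optimisation-based incentive selection.
# This is NOT the paper's BOIS algorithm: the incentive structures, costs,
# reward model and selection index below are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)


class IncentiveStructure:
    """A contest structure with one continuous parameter (e.g. prize size)."""

    def __init__(self, name, param_bounds):
        self.name = name
        self.lo, self.hi = param_bounds
        self.X, self.y = [], []  # observed (parameter, utility) pairs
        self.gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    def propose(self, kappa=2.0, n_cand=200):
        """Choose the next parameter value with a GP-UCB acquisition rule."""
        cand = rng.uniform(self.lo, self.hi, size=(n_cand, 1))
        if len(self.y) < 3:  # too few observations: explore randomly
            return float(cand[0, 0])
        self.gp.fit(np.array(self.X), np.array(self.y))
        mu, sigma = self.gp.predict(cand, return_std=True)
        best = int(np.argmax(mu + kappa * sigma))  # optimism under uncertainty
        return float(cand[best, 0])

    def update(self, param, utility):
        self.X.append([param])
        self.y.append(utility)

    def mean_utility(self):
        return float(np.mean(self.y)) if self.y else 0.0


def run_bois_like(structures, costs, observe_utility, budget=500.0):
    """Online loop: repeatedly pick the structure with the best optimistic
    utility-to-cost index, tune its parameter by Bayesian optimisation,
    and stop when the budget is exhausted."""
    spent = 0.0
    index = lambda s: (s.mean_utility() + 1.0 / (1 + len(s.y))) / costs[s.name]
    while True:
        affordable = [s for s in structures if spent + costs[s.name] <= budget]
        if not affordable:
            break
        s = max(affordable, key=index)
        param = s.propose()
        utility = observe_utility(s.name, param)  # run one contest, observe outcome
        s.update(param, utility)
        spent += costs[s.name]
    return max(structures, key=lambda s: s.mean_utility())


# Toy usage with a made-up ground-truth utility function.
structures = [IncentiveStructure("winner-takes-all", (1.0, 10.0)),
              IncentiveStructure("equal-prizes", (1.0, 10.0))]
costs = {"winner-takes-all": 5.0, "equal-prizes": 4.0}
truth = lambda name, p: np.log1p(p) * (1.2 if name == "winner-takes-all" else 1.0)
best = run_bois_like(structures, costs,
                     lambda n, p: truth(n, p) + rng.normal(0, 0.05))
print("Estimated best structure:", best.name)
```

The point of the sketch is simply that maintaining one surrogate model per structure lets the structure-selection and parameter-tuning problems be attacked in the same online loop, which is the combination the abstract describes; the actual BOIS algorithm and its evaluation are given in the paper itself.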


Notes

  1.

    We use the term “contest” in a broad sense to refer to any situation in which participants exert effort to submit tasks for prizes, which are awarded based on relative performance. The prizes can be tangible rewards, points, or positions on a leaderboard. Thus, all-pay auctions, lotteries, and leaderboards are considered contests for the purposes of this paper.

  2.

    Although the incentives considered in this paper relate to contests, the problem stated and the algorithms discussed can be used with any other type of incentive in the literature, such as pay-for-performance or bonuses. Thus, to keep the problem general, we use the term “incentives” instead of “contest structures”.

  3.

    The measurement of an incentive’s effectiveness will be discussed in Subsect. 3.1.

  4.

    This ratio is called “density” in Tran-Thanh et al. (2010); the standard definition is recalled after these notes.

  5.

    See Snoek et al. (2012) for more information about the method.
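As background for footnote 4 (this is the standard usage from the budget-limited bandit literature, e.g. Tran-Thanh et al. (2010), not notation taken from this paper), the density of an arm is its expected reward per unit of budget consumed:

$$ d_i = \frac{\mu_i}{c_i}, $$

where $\mu_i$ is the expected utility of deploying incentive $i$ once and $c_i$ is its cost. When the budget binds, arms with higher density are preferred.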

References

  • Araujo, R.M.: 99designs: an analysis of creative competition in crowdsourced design. In: HCOMP, pp. 17–24 (2013)

  • Badanidiyuru, A., Kleinberg, R., Slivkins, A.: Bandits with knapsacks. JACM 65(3), 1–55 (2018)

  • Bubeck, S., Stoltz, G., Szepesvári, C., Munos, R.: X-armed bandits. JMLR 12, 1655–1695 (2011)

  • Cavallo, R., Jain, S.: Efficient crowdsourcing contests. In: AAMAS, vol. 2, pp. 677–686 (2012)

  • Doan, A., Ramakrishnan, R., Halevy, A.Y.: Crowdsourcing systems on the world-wide web. CACM 54(4), 86–96 (2011)

  • Frey, B.S., Jegen, R.: Motivation crowding theory. J. Econ. Surv. 15(5), 589–611 (2001)

  • Ghezzi, A., Gabelloni, D., Martini, A., Natalicchio, A.: Crowdsourcing: a review and suggestions for future research. IJMR 20(2), 343–363 (2018)

  • Ho, C.J., Slivkins, A., Vaughan, J.W.: Adaptive contract design for crowdsourcing markets: bandit algorithms for repeated principal-agent problems. JAIR 55, 317–359 (2016)

  • Johnson, M., Moore, L., Ylvisaker, D.: Minimax and maximin distance designs. JSPI 26(2), 131–148 (1990)

  • Li, H., Xia, Y.: Infinitely many-armed bandits with budget constraints. In: AAAI, pp. 2182–2188 (2017)

  • Luo, T., Kanhere, S.S., Tan, H.P., Wu, F., Wu, H.: Crowdsourcing with Tullock contests: a new perspective. In: INFOCOM, pp. 2515–2523 (2015)

  • Mason, W., Watts, D.J.: Financial incentives and the “performance of crowds”. ACM SIGKDD Explor. Newsl. 11(2), 100–108 (2010)

  • Moldovanu, B., Sela, A.: The optimal allocation of prizes in contests. AER 91(3), 542–558 (2001)

  • Rogstadius, J., Kostakos, V., Kittur, A., Smus, B., Laredo, J., Vukovic, M.: An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. In: ICWSM, pp. 321–328 (2011)

  • Simula, H.: The rise and fall of crowdsourcing? In: HICSS, pp. 2783–2791 (2013)

  • Snoek, J., Larochelle, H., Adams, R.P.: Practical Bayesian optimization of machine learning algorithms. In: NIPS, p. 9 (2012)

  • Tran-Thanh, L., Chapman, A., De Cote, E.M., Rogers, A., Jennings, N.R.: Epsilon-first policies for budget-limited multi-armed bandits. In: AAAI, pp. 1211–1216 (2010)

  • Trovo, F., Paladino, S., Restelli, M., Gatti, N.: Budgeted multi-armed bandit in continuous action space. In: ECAI, pp. 560–568 (2016)

  • Truong, N.V.Q., Stein, S., Tran-Thanh, L., Jennings, N.R.: Adaptive incentive selection for crowdsourcing contests. In: AAMAS, pp. 2100–2102 (2018)

  • Yin, M., Chen, Y.: Bonus or not? Learn to reward in crowdsourcing. In: IJCAI, pp. 201–207 (2015)


Acknowledgments

This research was sponsored by the U.S. Army Research Laboratory and the U.K. Ministry of Defence under Agreement Number W911NF-16-3-0001. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army Research Laboratory, the U.S. Government, the U.K. Ministry of Defence or the U.K. Government. The U.S. and U.K. Governments are authorised to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.

Author information

Corresponding author

Correspondence to Nhat Van-Quoc Truong.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Truong, N.V.Q., Stein, S., Tran-Thanh, L., Jennings, N.R. (2019). What Prize Is Right? How to Learn the Optimal Structure for Crowdsourcing Contests. In: Nayak, A., Sharma, A. (eds) PRICAI 2019: Trends in Artificial Intelligence. PRICAI 2019. Lecture Notes in Computer Science, vol 11670. Springer, Cham. https://doi.org/10.1007/978-3-030-29908-8_7


  • DOI: https://doi.org/10.1007/978-3-030-29908-8_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29907-1

  • Online ISBN: 978-3-030-29908-8

  • eBook Packages: Computer Science, Computer Science (R0)
