Research article
DOI: 10.1145/3523227.3546766

Identifying New Podcasts with High General Appeal Using a Pure Exploration Infinitely-Armed Bandit Strategy

Published: 13 September 2022

Abstract

Podcasting is an increasingly popular medium for entertainment and discourse around the world, with tens of thousands of new podcasts released on a monthly basis. We consider the problem of identifying from these newly-released podcasts those with the largest potential audiences so they can be considered for personalized recommendation to users. We first study and then discard a supervised approach due to the inadequacy of either content or consumption features for this task, and instead propose a novel non-contextual bandit algorithm in the fixed-budget infinitely-armed pure-exploration setting. We demonstrate that our algorithm is well-suited to the best-arm identification task for a broad class of arm reservoir distributions, out-competing a large number of state-of-the-art algorithms. We then apply the algorithm to identifying podcasts with broad appeal in a simulated study, and show that it efficiently sorts podcasts into groups by increasing appeal while avoiding the popularity bias inherent in supervised approaches.
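
The abstract describes the setting (fixed-budget pure exploration over an infinite reservoir of arms) but not the algorithm itself. As a purely illustrative sketch of that setting, and not the authors' method, the code below subsamples a finite set of arms from the reservoir and spends the budget with successive halving (Karnin et al., 2013), a standard fixed-budget baseline. The function name identify_high_appeal_arm, the Beta(1, 3) reservoir standing in for podcast appeal rates, and the budget values are all assumptions chosen for the example.

```python
import numpy as np

def identify_high_appeal_arm(sample_arm, pull_arm, budget, n_initial_arms=64):
    """Illustrative fixed-budget pure exploration over an infinite arm reservoir.

    sample_arm() draws a fresh arm from the reservoir; pull_arm(arm) returns one
    stochastic Bernoulli reward (e.g., did a user stream the podcast?).
    NOTE: a generic successive-halving baseline, not the paper's algorithm.
    """
    # Draw a finite subsample of arms from the infinite reservoir.
    arms = [sample_arm() for _ in range(n_initial_arms)]
    counts = np.zeros(n_initial_arms)
    sums = np.zeros(n_initial_arms)
    active = list(range(n_initial_arms))

    # Split the budget evenly across elimination rounds; each round pulls all
    # surviving arms equally, then keeps the empirically better half.
    n_rounds = max(1, int(np.ceil(np.log2(n_initial_arms))))
    per_round = budget // n_rounds
    for _ in range(n_rounds):
        pulls_each = max(1, per_round // len(active))
        for i in active:
            for _ in range(pulls_each):
                sums[i] += pull_arm(arms[i])
                counts[i] += 1
        means = sums[active] / counts[active]
        ranked = np.argsort(means)[::-1]  # best empirical mean first
        active = [active[j] for j in ranked[: max(1, len(active) // 2)]]
    return arms[active[0]]

# Toy reservoir: each arm's latent appeal ~ Beta(1, 3); rewards are Bernoulli.
rng = np.random.default_rng(0)
best = identify_high_appeal_arm(
    sample_arm=lambda: rng.beta(1.0, 3.0),
    pull_arm=lambda mu: float(rng.random() < mu),
    budget=20_000,
)
print(f"identified arm with latent appeal ≈ {best:.3f}")
```

Under a reservoir like Beta(1, 3), most sampled arms have low appeal, so how the budget is split between sampling new arms and pulling already-sampled ones is the crux; the paper's algorithm addresses this trade-off for a broad class of reservoir distributions, whereas the sketch above fixes the subsample size and halving schedule up front.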

Supplementary Material

MP4 File (Identifying New Podcasts with High General Appeal Using a Pure Exploration Infinitely-Armed Bandit Strategy.mp4)
Presentation video




        Published In

        RecSys '22: Proceedings of the 16th ACM Conference on Recommender Systems
        September 2022
        743 pages

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Acceptance Rates

Overall acceptance rate: 254 of 1,295 submissions (20%).

        Article Metrics

• Downloads (last 12 months): 60
• Downloads (last 6 weeks): 7
        Reflects downloads up to 05 Mar 2025

        Cited By

• (2024) Investigating Characteristics of Media Recommendation Solicitation in r/ifyoulikeblank. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1–23. DOI: 10.1145/3687041. Online publication date: 8-Nov-2024.
• (2024) Fairness and Transparency in Music Recommender Systems: Improvements for Artists. Proceedings of the 18th ACM Conference on Recommender Systems, 1368–1375. DOI: 10.1145/3640457.3688024. Online publication date: 8-Oct-2024.
• (2024) Unbiased Identification of Broadly Appealing Content Using a Pure Exploration Infinitely Armed Bandit Strategy. ACM Transactions on Recommender Systems 3(1), 1–22. DOI: 10.1145/3626324. Online publication date: 2-Aug-2024.
• (2024) Structural Podcast Content Modeling with Generalizability. Companion Proceedings of the ACM Web Conference 2024, 710–713. DOI: 10.1145/3589335.3651563. Online publication date: 13-May-2024.
• (2023) Revisiting simple regret: fast rates for returning a good arm. Proceedings of the 40th International Conference on Machine Learning, 42110–42158. DOI: 10.5555/3618408.3620180. Online publication date: 23-Jul-2023.
• (2023) Accelerating Creator Audience Building through Centralized Exploration. Proceedings of the 17th ACM Conference on Recommender Systems, 70–73. DOI: 10.1145/3604915.3608880. Online publication date: 14-Sep-2023.
• (2023) Impatient Bandits: Optimizing Recommendations for the Long-Term Without Delay. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1687–1697. DOI: 10.1145/3580305.3599386. Online publication date: 6-Aug-2023.
