
Eliciting Confidence for Improving Crowdsourced Audio Annotations

Published: 07 April 2022

Abstract

In this work, we explore confidence elicitation methods for crowdsourcing "soft" labels (e.g., probability estimates) to reduce annotation costs in domains with ambiguous data. Machine learning research has shown that such "soft" labels are more informative and can reduce the amount of data required to train supervised machine learning models. By reducing the number of required labels, we can reduce the cost of slow annotation processes such as audio annotation. In our experiments, we evaluated three confidence elicitation methods: 1) "No Confidence" elicitation, 2) "Simple Confidence" elicitation, and 3) a "Betting" mechanism for confidence elicitation, at both the individual (i.e., per-participant) and aggregate (i.e., crowd) levels. In addition, we evaluated the interaction between confidence elicitation methods, annotation types (binary, probability, and z-score-derived probability), and "soft" versus "hard" (i.e., binarized) aggregate labels. Our results show that both confidence elicitation mechanisms yield higher annotation quality than the "No Confidence" mechanism for binary annotations, at both the participant and recording levels. In addition, when aggregating labels at the recording level, results indicate that we can match the quality of 10-participant aggregate annotations with fewer annotators if we aggregate "soft" labels instead of "hard" labels. These results suggest that, for binary audio annotation, using a confidence elicitation mechanism and aggregating continuous labels yields higher annotation quality and more informative labels, with the quality differences more pronounced when fewer participants are available. Finally, we propose a way of integrating these confidence elicitation methods into a two-stage, multi-label annotation pipeline.
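The "soft" versus "hard" aggregation contrast described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the 0.5 binarization threshold are assumptions introduced here for clarity.

```python
def aggregate_soft(labels):
    # Average the annotators' probability ("soft") labels directly,
    # preserving graded uncertainty in the aggregate label.
    return sum(labels) / len(labels)

def aggregate_hard(labels, threshold=0.5):
    # Binarize ("harden") each annotator's label first, then take
    # the vote fraction; per-annotator uncertainty is discarded.
    votes = [1 if p >= threshold else 0 for p in labels]
    return sum(votes) / len(votes)

# Three hypothetical annotators' probability estimates for one recording:
annotations = [0.9, 0.6, 0.4]
print(aggregate_soft(annotations))   # 0.6333...: graded uncertainty survives
print(aggregate_hard(annotations))   # 0.6666...: only binary votes survive
```

With few annotators, the hard aggregate can only take a handful of discrete values (here 0, 1/3, 2/3, 1), whereas the soft aggregate remains continuous, which is one intuition for why soft aggregation can match larger crowds with fewer annotators.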


Cited By

  • (2024) RCTD: Reputation-Constrained Truth Discovery in Sybil Attack Crowdsourcing Environment. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1313-1324. https://doi.org/10.1145/3637528.3671803. Online publication date: 25-Aug-2024.


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 6, Issue CSCW1
April 2022, 2511 pages
EISSN: 2573-0142
DOI: 10.1145/3530837
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. audio annotation
  2. crowdsourcing
  3. machine learning

Qualifiers

  • Research-article


Article Metrics

  • Downloads (Last 12 months)190
  • Downloads (Last 6 weeks)39
Reflects downloads up to 17 Jan 2025

