DOI: 10.1145/2187836.2187955

Mr. LDA: a flexible large scale topic modeling package using variational inference in MapReduce

Published: 16 April 2012

Abstract

Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring document collections. Because of the increasing prevalence of large datasets, there is a need to improve the scalability of inference for LDA. In this paper, we introduce a novel and flexible large scale topic modeling package in MapReduce (Mr. LDA). As opposed to other techniques which use Gibbs sampling, our proposed framework uses variational inference, which easily fits into a distributed environment. More importantly, this variational implementation, unlike highly tuned and specialized implementations based on Gibbs sampling, is easily extensible. We demonstrate two extensions made possible by this scalable framework: informed priors to guide topic discovery and extraction of topics from a multilingual corpus. We compare the scalability of Mr. LDA against Mahout, an existing large scale topic modeling package. Mr. LDA outperforms Mahout in both execution speed and held-out likelihood.
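The abstract's core observation, that variational inference "easily fits into a distributed environment," comes from the structure of variational EM for LDA: the E-step updates each document's variational parameters independently (a natural map step), and the M-step only aggregates per-document sufficient statistics (a natural reduce step). The sketch below illustrates this decomposition in plain Python; the function names and data layout are hypothetical, not Mr. LDA's actual Hadoop API, and the update equations follow standard variational inference for LDA (Blei et al., 2003).

```python
import math

def digamma(x):
    """Numerical digamma (psi) via recurrence plus an asymptotic expansion."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1/12.0 - f * (1/120.0 - f / 252.0))

def e_step_map(doc, beta, alpha, iters=20):
    """'Map' step: variational E-step for ONE document. It touches no other
    document, which is what makes the E-step embarrassingly parallel.
    doc: {word_id: count}; beta: K x V topic-word distributions.
    Returns expected-count sufficient statistics keyed by (topic, word)."""
    K = len(beta)
    total = sum(doc.values())
    gamma = [alpha + total / K] * K  # variational Dirichlet parameters
    for _ in range(iters):
        new_gamma = [alpha] * K
        for w, cnt in doc.items():
            # phi_{wk} proportional to beta_{kw} * exp(digamma(gamma_k))
            phi = [beta[k][w] * math.exp(digamma(gamma[k])) for k in range(K)]
            z = sum(phi)
            for k in range(K):
                new_gamma[k] += cnt * phi[k] / z
        gamma = new_gamma
    ss = {}
    for w, cnt in doc.items():
        phi = [beta[k][w] * math.exp(digamma(gamma[k])) for k in range(K)]
        z = sum(phi)
        for k in range(K):
            ss[(k, w)] = cnt * phi[k] / z
    return ss

def m_step_reduce(stats_list, K, V, smoothing=1e-3):
    """'Reduce' step: sum per-document statistics, renormalize each topic row."""
    beta = [[smoothing] * V for _ in range(K)]
    for ss in stats_list:
        for (k, w), v in ss.items():
            beta[k][w] += v
    return [[b / sum(row) for b in row] for row in beta]
```

A driver would alternate the map over all documents and the reduce until held-out likelihood converges; in Hadoop the sufficient statistics would be emitted as (key, value) pairs rather than returned as Python dicts.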




      Published In

      WWW '12: Proceedings of the 21st international conference on World Wide Web
      April 2012
      1078 pages
      ISBN:9781450312295
      DOI:10.1145/2187836
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Sponsors

      • Univ. de Lyon: Universite de Lyon

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. mapreduce
      2. scalability
      3. topic models

      Qualifiers

      • Research-article

      Conference

      WWW 2012: 21st World Wide Web Conference 2012
      April 16 - 20, 2012
      Lyon, France
      Sponsor: Univ. de Lyon

      Acceptance Rates

      Overall acceptance rate: 1,899 of 8,196 submissions (23%)


      Cited By

      • (2024) "Text mining of syntactic complexity in L2 writing: an LDA topic modeling approach," International Review of Applied Linguistics in Language Teaching. DOI: 10.1515/iral-2024-0132. Online publication date: 19-Nov-2024.
      • (2024) "TopicRefiner: Coherence-Guided Steerable LDA for Visual Topic Enhancement," IEEE Transactions on Visualization and Computer Graphics, 30(8):4542-4557. DOI: 10.1109/TVCG.2023.3266890. Online publication date: Aug-2024.
      • (2023) "Distributed Training," in Probabilistic Topic Models, pp. 95-101. DOI: 10.1007/978-981-99-2431-8_7. Online publication date: 9-Jun-2023.
      • (2022) "A Study of Hadoop and Mapping Approach Techniques on Big Data Strategies," International Journal of Scientific Research in Science and Technology, pp. 377-383. DOI: 10.32628/IJSRST229656. Online publication date: 20-Nov-2022.
      • (2022) "Revisiting the Recent History of Consumer Behavior in Marketing Journals: A Topic Modeling Perspective," Review of Marketing Science, 20(1):113-145. DOI: 10.1515/roms-2021-0086. Online publication date: 16-Mar-2022.
      • (2022) "COVID-19: Detecting depression signals during stay-at-home period," Health Informatics Journal, 28(2). DOI: 10.1177/14604582221094931. Online publication date: 21-Apr-2022.
      • (2022) "B-AIS: An Automated Process for Black-box Evaluation of Visual Perception in AI-enabled Software against Domain Semantics," Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering, pp. 1-13. DOI: 10.1145/3551349.3561162. Online publication date: 10-Oct-2022.
      • (2022) "CADE: The Missing Benchmark in Evaluating Dataset Requirements of AI-enabled Software," 2022 IEEE 30th International Requirements Engineering Conference (RE), pp. 64-76. DOI: 10.1109/RE54965.2022.00013. Online publication date: Aug-2022.
      • (2022) "Cluster analysis of urdu tweets," Journal of King Saud University - Computer and Information Sciences, 34(5):2170-2179. DOI: 10.1016/j.jksuci.2020.08.008. Online publication date: May-2022.
      • (2022) "Short text topic modelling approaches in the context of big data: taxonomy, survey, and analysis," Artificial Intelligence Review, 56(6):5133-5260. DOI: 10.1007/s10462-022-10254-w. Online publication date: 26-Oct-2022.
