DOI: 10.1145/3371158.3371403
demonstration

Memeify: A Large-Scale Meme Generation System

Published: 15 January 2020

ABSTRACT

Interest in research on meme propagation and generation has grown rapidly over the last few years. Meme datasets available online are either specific to a particular context or contain no class information. Here, we prepare a large-scale dataset of memes with captions and class labels, consisting of 1.1 million meme captions from 128 classes. We also provide a rationale for the existence of broad categories, called 'themes', across the meme dataset; each theme consists of multiple meme classes. Our generation system uses a trained state-of-the-art transformer-based model with an encoder-decoder architecture for caption generation. We develop a web interface, called Memeify, that lets users generate memes of their choice, and we explain in detail the working of each component of the system. We also perform a qualitative evaluation of the generated memes through a user study. A demonstration of the Memeify system is available at https://youtu.be/P_Tfs0X-czs.
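
As a rough illustration of the caption-generation step described above, the sketch below conditions an off-the-shelf encoder-decoder transformer on a meme class and a short topic prompt, then samples a few candidate captions. The specific model (T5), the prompt format, and the class name used here are illustrative assumptions; the abstract does not specify which transformer Memeify uses or how it is conditioned.

    # Illustrative sketch only: generating candidate meme captions with an
    # off-the-shelf encoder-decoder transformer (T5 as a stand-in model).
    # The actual Memeify model, training data, and conditioning scheme are
    # not specified in this abstract.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # Hypothetical conditioning string: a meme class (one of the 128 classes)
    # plus a short user-supplied topic.
    prompt = "generate meme caption: class=Success Kid | topic=monday mornings"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample several short candidate captions rather than decoding greedily,
    # since meme captions benefit from variety.
    outputs = model.generate(
        **inputs,
        max_length=32,
        do_sample=True,
        top_p=0.9,
        num_return_sequences=3,
    )

    for seq in outputs:
        print(tokenizer.decode(seq, skip_special_tokens=True))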


  • Published in

    CoDS COMAD 2020: Proceedings of the 7th ACM IKDD CoDS and 25th COMAD
    January 2020
    399 pages
    ISBN: 9781450377386
    DOI: 10.1145/3371158

    Copyright © 2020 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 15 January 2020


    Qualifiers

    • demonstration
    • Research
    • Refereed limited

    Acceptance Rates

    CoDS COMAD 2020 Paper Acceptance Rate: 78 of 275 submissions, 28%
    Overall Acceptance Rate: 197 of 680 submissions, 29%
