DOI: 10.1145/3557915.3561006
demonstration

Active learning for transformer models in direction query tagging

Published: 22 November 2022

ABSTRACT

Correctly understanding direction queries is essential in map search for providing accurate direction-related results such as routing, travel distance, and travel-time estimation. Slot tagging is the process of recognizing and annotating query terms as entities such as source, destination, travel mode, travel distance, or travel time, so that downstream map search components can surface the expected result. Transformer-based models have achieved state-of-the-art performance on various language understanding tasks, including slot tagging. However, such models require either good-quality labeled data for fine-tuning or a large amount of labeled data for full training. Active learning improves training efficiency by selecting only a small number of highly informative queries for labeling. It is not yet clear, though, how to properly apply active learning to transformer-based language models. In this paper, we propose a novel active learning method designed specifically for transformer models and demonstrate its effectiveness for slot tagging of direction queries.
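The abstract does not spell out which acquisition function the authors use, so the sketch below only illustrates the general setup: a transformer token-classification model as the slot tagger, with a common active-learning baseline (token-level least-confidence sampling) ranking an unlabeled query pool. The model name, the BIO label set, and the example queries are illustrative assumptions, not the authors' actual setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative BIO label set for direction-query slots
# (an assumption, not the authors' schema).
LABELS = ["O", "B-SOURCE", "I-SOURCE", "B-DEST", "I-DEST",
          "B-MODE", "I-MODE", "B-TIME", "I-TIME"]

# In practice the tagger would first be fine-tuned on a small seed set;
# here a base model is loaded only to demonstrate the selection step.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
model.eval()

def least_confidence(query: str) -> float:
    """1 minus the smallest per-token top-class probability.

    A high score means the tagger is unsure about at least one token,
    so the query is likely informative to label."""
    enc = tokenizer(query, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits           # (1, seq_len, num_labels)
    probs = logits.softmax(dim=-1)
    top = probs.max(dim=-1).values.squeeze(0)  # top prob per token
    return 1.0 - top.min().item()

# Rank the unlabeled pool and send the top-k queries to annotators.
pool = ["directions from seattle to portland by bike",
        "how long to walk to pike place market",
        "fastest route home avoiding tolls"]
budget = 2
to_label = sorted(pool, key=least_confidence, reverse=True)[:budget]
print(to_label)
```

In a full active-learning loop, the selected queries would be annotated, added to the training set, and the model re-fine-tuned, with the cycle repeating until the labeling budget is exhausted.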


Published in

SIGSPATIAL '22: Proceedings of the 30th International Conference on Advances in Geographic Information Systems
November 2022, 806 pages
ISBN: 9781450395298
DOI: 10.1145/3557915

Copyright © 2022 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 22 November 2022

Qualifiers

demonstration

Acceptance Rates

Overall acceptance rate: 220 of 1,116 submissions, 20%
