ABSTRACT
Correct understanding of direction queries is essential in map search for providing accurate direction-related results, including routing, travel distance, and travel-time estimation. Slot tagging is the process of recognizing and annotating query terms as entities such as source, destination, travel mode, travel distance, or travel time, so that downstream map-search components can surface the expected result. Transformer-based models have achieved state-of-the-art performance on various language understanding tasks, including slot tagging. However, such models require either good-quality labeled data for fine-tuning or a large amount of labeled data for full training. Active learning improves training efficiency by selecting only a small number of highly informative queries for labeling. It is not yet clear, though, how to properly apply active learning to transformer-based language models. In this paper, we propose a novel active learning method designed specifically for transformer models and demonstrate its effectiveness for slot tagging of direction queries.
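To make the active-learning setup concrete, here is a minimal sketch (not the method proposed in the paper) of least-confidence sampling, a common baseline: each unlabeled direction query is scored by the confidence of the tagger's most likely slot sequence, and the least confident queries are sent for labeling. The per-token probabilities and example queries below are hypothetical; in practice they would come from a transformer slot tagger's softmax output.

```python
import math

def sequence_confidence(token_probs):
    """Confidence of the most likely tag sequence, approximated as the
    product of per-token max-label probabilities."""
    return math.prod(token_probs)

def select_for_labeling(pool, k):
    """Least-confidence sampling: pick the k queries whose predicted
    slot sequence the model is least sure about."""
    scored = sorted(pool.items(), key=lambda kv: sequence_confidence(kv[1]))
    return [query for query, _ in scored[:k]]

# Hypothetical unlabeled pool: query -> per-token max probabilities,
# e.g. "drive from seattle to portland" might be tagged
# O / O / B-source / O / B-destination by the slot tagger.
pool = {
    "drive from seattle to portland": [0.99, 0.98, 0.97, 0.99, 0.99],
    "directions to the nearest gas station": [0.95, 0.90, 0.85, 0.80, 0.70, 0.75],
    "how far is bellevue by bike": [0.60, 0.55, 0.70, 0.65, 0.50, 0.45],
}

print(select_for_labeling(pool, 1))  # -> ['how far is bellevue by bike']
```

The product of per-token probabilities penalizes long queries, which is one reason purpose-built selection strategies for transformers (as studied in this paper) can outperform such simple baselines.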
Index Terms
- Active learning for transformer models in direction query tagging