ABSTRACT
Text classification is a fundamental task in natural language processing (NLP). Existing text classification models are powerful, but training them requires large labeled datasets, and in practice sufficient data is often unavailable. Data scarcity falls into two main categories: cold start and low-resource settings. Text augmentation methods are commonly used to address this problem. In this paper, we combine source-text augmentation with representation-level augmentation to improve the overall augmentation effect. We design five sets of experiments to verify that our method is effective across different datasets and different classifiers. The results show that accuracy improves and the generalization ability of the classifier is enhanced to some extent. We also find that the augmentation factor and the size of the training set are not positively correlated with the augmentation effect; the augmentation factor therefore needs to be chosen according to the characteristics of the data.
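The paper combines source-text augmentation with representation-level augmentation. As a minimal illustration of the source-text side only, the sketch below implements two common EDA-style operations (random swap and random deletion) with an explicit augmentation factor `n_aug`; the function names and parameters are illustrative assumptions, not the paper's actual implementation.

```python
import random

def random_swap(words, n):
    # Swap two randomly chosen word positions n times.
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p):
    # Drop each word with probability p; always keep at least one word.
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

def augment(text, n_aug=4, p_delete=0.1):
    """Generate n_aug augmented variants of one sentence.

    n_aug plays the role of the augmentation ("enhancement") factor:
    each original example is expanded into n_aug extra examples.
    """
    words = text.split()
    variants = []
    for _ in range(n_aug):
        if random.random() < 0.5:
            variants.append(" ".join(random_swap(words, 1)))
        else:
            variants.append(" ".join(random_deletion(words, p_delete)))
    return variants
```

Because each variant is a small perturbation of the original, too large an `n_aug` mostly adds near-duplicates, which is consistent with the paper's observation that a larger augmentation factor does not necessarily improve results.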