Bengali text document categorization based on very deep convolution neural network

https://doi.org/10.1016/j.eswa.2021.115394
Open access under a Creative Commons license

Highlights

  • Illustrated the development of a benchmark text corpus for low-resource languages.

  • Presented an algorithm for optimisation of the hyperparameters of embedding models.

  • Evaluated several embedding models using semantic and syntactic similarity measures.

  • Integrated embedding and very deep learning models to improve text classification.

  • Evaluated the proposed and existing models on the built corpus for text classification.

Abstract

In recent years, the amount of digital text content in the Bengali language has increased enormously on online platforms due to effortless Internet access via electronic gadgets. As a result, an enormous amount of unstructured data has been created that demands much time and effort to organize, search, or manipulate. To manage such a massive number of documents effectively, an intelligent text document classification system is proposed in this paper. Intelligent classification of text documents in a resource-constrained language (like Bengali) is challenging due to the unavailability of linguistic resources, intelligent NLP tools, and large text corpora. Moreover, Bengali texts are available in two morphological variants (i.e., Sadhu-bhasha and Cholito-bhasha), making the classification task more complicated. The proposed intelligent text classification model comprises a GloVe embedding and a Very Deep Convolution Neural Network (VDCNN) classifier. Due to the unavailability of a standard corpus, this work develops a large Embedding Corpus (EC) containing 969,000 unlabelled texts and a Bengali Text Classification Corpus (BDTC) containing 156,207 labelled documents arranged into 13 categories. Moreover, this work proposes the Embedding Parameters Identification (EPI) Algorithm, which selects the best embedding parameters for low-resource languages (including Bengali). Evaluation of 165 embedding models with intrinsic evaluators (semantic and syntactic similarity measures) shows that the GloVe model is more suitable (regarding Spearman and Pearson correlation) than the other embeddings (Word2Vec, FastText, m-BERT) for Bengali text. Experimental results on the test dataset confirm that the proposed GloVe + VDCNN model outperformed the other classification models and existing methods on the Bengali text classification task, achieving the highest accuracy of 96.96%.
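The intrinsic evaluation described above ranks embedding models by how well their predicted word-pair similarities correlate with human similarity judgments, using Spearman and Pearson correlation. The following is a minimal, self-contained sketch of those two measures; the word-pair scores are hypothetical toy values, not data from the paper:

```python
import math

def pearson(x, y):
    # Pearson correlation: covariance normalised by the two standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman correlation: Pearson correlation computed on the ranks
    # of the values (assumes no ties, which holds for the toy data below).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical human similarity judgments for five Bengali word pairs,
# and cosine similarities an embedding model might assign to the same pairs.
human = [9.0, 7.5, 6.0, 3.5, 1.0]
model = [0.82, 0.74, 0.51, 0.40, 0.05]

print(pearson(human, model))
print(spearman(human, model))
```

Because the toy model scores are perfectly monotonic in the human scores, the Spearman correlation is 1.0 while the Pearson correlation is slightly lower; in the paper's evaluation, higher values of both correlations indicate a better embedding model.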

Keywords

Intelligent systems
Natural language processing
Low resource language
Semantic feature extraction
Document categorization
Deep convolution network


ORCID iDs

1. 0000-0002-7941-9124
2. 0000-0001-8806-708X
3. 0000-0002-0642-2357
4. 0000-0003-1740-5517