Abstract
As a widely applied task in natural language processing, text classification has advanced considerably as deep learning technology has developed rapidly. In particular, since the arrival of pre-trained models, classification performance has improved tremendously. However, complicated financial text often carries multiple structured labels, and it is difficult to obtain the large quantities of labeled samples needed to ensure high-quality predictions. Existing competitive classification models can solve only one of these problems. To address these issues, we propose a hierarchical classification structure with two levels. In the first level, the basic classifier is enhanced by a label confusion algorithm to mine the dependency between labels and samples. In the second level, a few-shot classification model under a meta-learning framework completes the classification task based on the predictions from the previous level and a few labeled training samples. We evaluate our model on two large Chinese financial datasets and find that it outperforms existing competitive classification models, few-shot classification models and hierarchical classification models in both performance and computational cost.
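To make the first level concrete, below is a minimal, hypothetical sketch of a classifier head enhanced with label confusion, in the spirit described in the abstract: the instance representation is compared with learnable label representations, the resulting confusion distribution is mixed with the one-hot target to form a simulated label distribution, and the classifier is trained to match it via KL divergence. The class name, the `alpha` mixing weight, and the choice of encoder are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelConfusionClassifier(nn.Module):
    """Sketch of a first-level classifier enhanced by label confusion (assumed design)."""

    def __init__(self, hidden_dim: int, num_labels: int, alpha: float = 4.0):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_labels)    # basic classifier head
        self.label_emb = nn.Embedding(num_labels, hidden_dim)  # learnable label representations
        self.alpha = alpha                                      # weight of the one-hot target in the mix

    def forward(self, h: torch.Tensor, y: torch.Tensor):
        """h: (batch, hidden_dim) instance representation from any encoder (e.g. BERT);
        y: (batch,) gold label indices."""
        logits = self.classifier(h)
        pred = F.log_softmax(logits, dim=-1)                    # predicted label distribution

        # Label confusion: similarity between each instance and every label representation
        sim = h @ self.label_emb.weight.T                       # (batch, num_labels)
        confusion = F.softmax(sim, dim=-1)

        # Simulated label distribution: one-hot target softened by the confusion term
        one_hot = F.one_hot(y, num_classes=confusion.size(-1)).float()
        sld = F.softmax(confusion + self.alpha * one_hot, dim=-1)

        # Pull the predicted distribution toward the simulated one
        loss = F.kl_div(pred, sld, reduction="batchmean")
        return logits, loss
```

Under the setup described in the abstract, the second level would then take the first level's predictions together with a few labeled examples as input to a meta-learning few-shot classifier; that stage is not sketched here.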
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Wang, A., Chen, Q., Li, D. (2021). FHTC: Few-Shot Hierarchical Text Classification in Financial Domain. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol 13109. Springer, Cham. https://doi.org/10.1007/978-3-030-92270-2_56
DOI: https://doi.org/10.1007/978-3-030-92270-2_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-92269-6
Online ISBN: 978-3-030-92270-2