
Multi-input CNN for Text Classification in Commercial Scenarios

  • Conference paper in Advances in Computational Intelligence (IWANN 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11506)


Abstract

In this work we describe a multi-input Convolutional Neural Network for text classification that combines text preprocessed at the word, byte pair encoding (BPE) and character levels. We apply the developed model to two practical use cases: (1) classifying ingredients into their corresponding classes, on a corpus provided by Northfork; and (2) classifying texts according to the English level of their writers, on a corpus provided by ProvenWord. Additionally, we perform experiments on standard classification tasks using the Yahoo! Answers and GermEval 2017 task A datasets. The developed architecture obtains satisfactory results on these corpora, and a comparison with different state-of-the-art approaches on each dataset yields very promising results.
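The three-branch idea described in the abstract can be illustrated without any deep learning framework: each input view (word, BPE and character tokens) gets its own embedding lookup and one 1D convolution with global max pooling, and the pooled values are concatenated into a single feature vector for the classifier. Every table, token and weight below is a toy assumption for illustration only; the paper's actual vocabularies, layer sizes and filter counts are not reproduced here.

```python
# Toy sketch of a multi-input text CNN: three token views, one convolution
# branch each, pooled features concatenated. All values are illustrative.

DIM, WIDTH = 2, 2  # embedding size and convolution window (assumed)

def embed(tokens, table):
    """Look up each token's embedding vector; unknown tokens map to zeros."""
    return [table.get(t, [0.0] * DIM) for t in tokens]

def conv1d_maxpool(seq, kernel):
    """Slide one convolution filter over the sequence, then global-max-pool."""
    scores = []
    for i in range(len(seq) - WIDTH + 1):
        window = [x for vec in seq[i:i + WIDTH] for x in vec]  # flatten window
        scores.append(sum(w * x for w, x in zip(kernel, window)))
    return max(scores)

# Tiny made-up embedding tables, one per granularity.
word_table = {"hello": [1.0, 0.0], "world": [0.0, 1.0]}
bpe_table = {"hel@@": [0.5, 0.5], "lo": [1.0, 0.0], "world": [0.0, 1.0]}
char_table = {c: [ord(c) % 3 / 2.0, 0.1] for c in "helowrd"}

kernel = [0.5, -0.5, 1.0, 0.25]  # one filter over a flattened 2x2 window

def branch(tokens, table):
    return conv1d_maxpool(embed(tokens, table), kernel)

# One pooled feature per input view; a dense softmax layer would follow.
features = [
    branch(["hello", "world"], word_table),
    branch(["hel@@", "lo", "world"], bpe_table),
    branch(list("hello"), char_table),
]
```

In the real model each branch would use many trainable filters and embeddings; the sketch only shows the structure: separate branches per tokenisation level, concatenated before classification.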


Notes

  1. https://www.h5py.org/.

  2. https://bit.ly/2DwXyME L6 - Yahoo! Answers Comprehensive Questions and Answers version 1.0 (multi part).

  3. Note that the maximum lengths of the BPE and character sequences coincide. However, this does not mean that BPE splits sequences into characters: the BPE implementation used adds extra tokens that allow the original words to be recovered later.

  4. https://github.com/dmlc/xgboost.
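Note 3's point about recoverability can be made concrete. Assuming a subword-nmt-style BPE implementation (which appends a continuation marker such as "@@" to every non-final subword unit — an assumption about which implementation was used), segmentation never loses word boundaries. The segmentation below is hard-coded for illustration rather than produced by learned merge operations:

```python
# Why BPE segments stay reversible: non-final subwords carry a continuation
# marker ("@@"), so joining tokens and merging "@@ " boundaries rebuilds the
# original text exactly. Segmentations here are hard-coded examples.

def mark_subwords(pieces):
    """Append '@@' to every piece except the last, subword-nmt style."""
    return [p + "@@" for p in pieces[:-1]] + [pieces[-1]]

def restore(tokens):
    """Undo the marking: join on spaces, then merge '@@ ' boundaries."""
    return " ".join(tokens).replace("@@ ", "")

segmented = mark_subwords(["class", "ifica", "tion"]) + mark_subwords(["task"])
restored = restore(segmented)  # word boundaries survive segmentation
```

This also explains why a BPE sequence is never just the character sequence: even when every subword is a single character, the marker tokens keep the word structure intact.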


Acknowledgments

Work partially supported by MINECO under grant DI-15-08169 and by Sciling under its R+D programme. The authors would like to thank NVIDIA for the donation of a Titan Xp GPU that made this research possible.

Author information

Corresponding author

Correspondence to Zuzanna Parcheta.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Parcheta, Z., Sanchis-Trilles, G., Casacuberta, F., Redahl, R. (2019). Multi-input CNN for Text Classification in Commercial Scenarios. In: Rojas, I., Joya, G., Catala, A. (eds) Advances in Computational Intelligence. IWANN 2019. Lecture Notes in Computer Science, vol 11506. Springer, Cham. https://doi.org/10.1007/978-3-030-20521-8_49


  • DOI: https://doi.org/10.1007/978-3-030-20521-8_49

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20520-1

  • Online ISBN: 978-3-030-20521-8

  • eBook Packages: Computer Science, Computer Science (R0)
