
A Tag-Based Transformer Community Question Answering Learning-to-Rank Model in the Home Improvement Domain

  • Conference paper
Database and Expert Systems Applications (DEXA 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12924)


Abstract

Community Question Answering (CQA) is an Information Retrieval (IR) task that matches complex, subjective questions to candidate answers drawn from user posts in community web forums. User questions and comment-based answers raise many problems, such as redundancy and ambiguity of linguistic information. In this paper, we propose a pairwise learning-to-rank community QA model in the home improvement domain. Given a user question, the model must rank candidate answers in order of relevance. Our main contribution is a transformer-based language model that uses user tags to improve generalisation. To train the model, we also propose a purpose-built CQA dataset for the home improvement domain, consisting of information extracted from community forums. We evaluate our approach by comparing its performance with state-of-the-art methods for text and document similarity.
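The abstract describes a pairwise learning-to-rank setup: given a question, each candidate answer receives a relevance score, candidates are ordered by that score, and training pushes a relevant answer's score above an irrelevant one's by a margin. The sketch below illustrates only that objective with toy vectors; the dot-product scoring function, the margin value, and all names are illustrative assumptions, not the paper's actual transformer-and-tags model.

```python
def dot(u, v):
    """Toy scoring function standing in for the transformer relevance score."""
    return sum(a * b for a, b in zip(u, v))

def pairwise_hinge_loss(score_pos, score_neg, margin=1.0):
    """Pairwise objective: a relevant answer should outscore an
    irrelevant one by at least `margin`; zero loss once it does."""
    return max(0.0, margin - (score_pos - score_neg))

def rank_candidates(question_vec, candidate_vecs):
    """Order candidate answers by descending relevance score,
    returning (candidate_index, score) pairs."""
    scores = [(i, dot(question_vec, c)) for i, c in enumerate(candidate_vecs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Toy example: one question embedding, three candidate answer embeddings.
q = [1.0, 0.5]
candidates = [[0.1, 0.1], [0.9, 0.4], [0.2, 0.8]]
ranking = rank_candidates(q, candidates)
print([i for i, _ in ranking])  # candidate order by relevance
```

In the real model the embeddings would come from a transformer encoder over the question, answer, and user tags; only the pairwise comparison and ranking step are shown here.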


Notes

  1.

    https://diy.stackexchange.com/questions/7100/is-there-an-easy-way-to-measure-the-height-of-a-tree.

  2.

    https://dyi.stackexchange.com.

  3.

    http://nlp.stanford.edu/data/glove.6B.zip.


Author information


Corresponding author

Correspondence to Macedo Maia.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Maia, M., Handschuh, S., Endres, M. (2021). A Tag-Based Transformer Community Question Answering Learning-to-Rank Model in the Home Improvement Domain. In: Strauss, C., Kotsis, G., Tjoa, A.M., Khalil, I. (eds.) Database and Expert Systems Applications. DEXA 2021. Lecture Notes in Computer Science, vol. 12924. Springer, Cham. https://doi.org/10.1007/978-3-030-86475-0_13


  • DOI: https://doi.org/10.1007/978-3-030-86475-0_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-86474-3

  • Online ISBN: 978-3-030-86475-0

  • eBook Packages: Computer Science (R0)
