Research article · DOI: 10.1145/3582768.3582780 · NLPIR Conference Proceedings

Responding to customer queries automatically by customer reviews’ based Question Answering

Published: 27 June 2023

Abstract

Over the past few decades, the world has undergone a digital transformation as technology has advanced in leaps and bounds. As more people buy products online, the number of questions posted about products on e-commerce platforms such as Amazon grows daily. Although posting these questions is fully digital, answering them is still manual, and the forums are rarely active. By the time a user receives an answer, they may already have bought the product offline or lost interest because of the delay. Moreover, the questions asked are mostly repetitive: the answer often already exists because another user asked the same question, and many answers are embedded in the user reviews. The answers can therefore be extracted from existing product reviews, which can increase sales and customer satisfaction by resolving queries with a much shorter response time. Review-based question answering systems aim to answer such questions from the reviews other customers have left on a product. However, existing systems have certain drawbacks stemming from their use of RNNs, such as the missing attention mechanism. In this work, we enhance the performance of existing review-based QA systems by first carrying out prototypical experiments with basic NLP models and then moving to more advanced language models, identifying and rectifying the shortcomings of the existing model along the way. We also present a thorough comparative analysis of the models and approaches studied. We enhance the current state-of-the-art review QA systems using BERT and BART, and apply various heuristics for comparison.
Our best configuration, based on BERT, achieves a BLEU score of 0.58, an improvement of 0.19 over the current existing system.
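The BLEU metric used for evaluation above can be computed with any standard implementation. As a minimal sketch (assuming whitespace tokenization and uniform n-gram weights, per the original BLEU definition; the paper's exact evaluation setup may differ), sentence-level BLEU with a brevity penalty looks like:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    cand = candidate.split()
    ref = reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:
            return 0.0  # no overlap at this order -> BLEU is 0
        log_precisions.append(math.log(clipped / total))
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

Production evaluations typically add smoothing for short answers and support multiple references, but the geometric mean of clipped precisions plus the brevity penalty shown here is the core of the score.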


Cited By

  • (2024) Natural language processing for analyzing online customer reviews: a survey, taxonomy, and open research challenges. PeerJ Computer Science 10, e2203. https://doi.org/10.7717/peerj-cs.2203. Online publication date: 19 Jul 2024.


Published In

NLPIR '22: Proceedings of the 2022 6th International Conference on Natural Language Processing and Information Retrieval
December 2022, 241 pages
ISBN: 9781450397629
DOI: 10.1145/3582768
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

NLPIR 2022


