Automatically Classifying Kano Model Factors in App Reviews

Conference paper in: Requirements Engineering: Foundation for Software Quality (REFSQ 2023)

Abstract

[Context and motivation] Requirements assessment by means of the Kano model is common practice. As suggested by the original authors, these assessments are done by interviewing stakeholders and asking them how satisfied they would be if a certain feature were well implemented and how dissatisfied they would be if the feature were missing or poorly implemented. [Question/problem] Assessments via interviews are time-consuming, expensive, and can only capture the opinions of a limited set of stakeholders. [Principal ideas/results] We investigate the possibility of extracting Kano model factors (basic needs, performance factors, delighters, irrelevant) from a large set of user feedback (i.e., app reviews). We implemented, trained, and tested several classifiers on a set of 2,592 reviews. In a 10-fold cross-validation, a BERT-based classifier performed best with an accuracy of 92.8%. To assess the classifiers' generalization, we additionally tested them on an independent set of 1,622 app reviews. There, the accuracy of the best classifier dropped to 72.5%. We also show that misclassifications correlate with human disagreement on the labels. [Contribution] Our approach is a lightweight, automated alternative for identifying Kano model factors from a large set of user feedback. Its limited accuracy is an inherent consequence of app reviews lacking the contextual information that comprehensive interviews provide, which also makes it hard for humans to extract the factors correctly.
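To make the classification pipeline concrete: note 2 below states that the classifiers were built with the Simple Transformers library. The following is a minimal sketch, not the authors' actual setup, of how a four-class Kano classifier could be fine-tuned with that library; the label encoding, example reviews, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch, not the authors' actual setup: fine-tuning a BERT
# classifier for the four Kano factors with the Simple Transformers
# library (see note 2). Label encoding, example reviews, and
# hyperparameters below are illustrative assumptions.
import pandas as pd
from simpletransformers.classification import ClassificationModel, ClassificationArgs

# Assumed integer encoding of the four Kano model factors.
LABELS = {"basic need": 0, "performance factor": 1, "delighter": 2, "irrelevant": 3}

# Simple Transformers expects a DataFrame with "text" and "labels" columns.
train_df = pd.DataFrame({
    "text": [
        "The app crashes every time I try to log in.",    # hypothetical review
        "Love the new dark mode, such a nice surprise!",  # hypothetical review
    ],
    "labels": [LABELS["basic need"], LABELS["delighter"]],
})

model_args = ClassificationArgs(
    num_train_epochs=3,        # assumed value; not stated in this excerpt
    overwrite_output_dir=True,
)
model = ClassificationModel(
    "bert", "bert-base-uncased",
    num_labels=len(LABELS), args=model_args, use_cuda=False,
)
model.train_model(train_df)

# The paper reports 10-fold cross-validation; the splits could be produced
# with sklearn.model_selection.StratifiedKFold (not shown here).
predictions, _ = model.predict(["Offline support would be a nice addition."])
```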

Notes

  1. https://doi.org/10.6084/m9.figshare.21618858.

  2. We used the Simple Transformers library: https://github.com/ThilinaRajapakse/simpletransformers.

  3. Random undersampling deletes examples from the majority classes at random until all classes contain equally many samples (see the sketch after these notes).
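
Note 3 describes random undersampling as the class-balancing step. A minimal sketch in plain pandas, assuming the same DataFrame layout (a "labels" column) as in the sketch above:

```python
# A minimal sketch of the random undersampling described in note 3,
# using plain pandas; the DataFrame layout ("labels" column) is an
# assumption carried over from the earlier sketch.
import pandas as pd

def random_undersample(df: pd.DataFrame, label_col: str = "labels",
                       seed: int = 42) -> pd.DataFrame:
    """Drop random rows from larger classes until all classes match the smallest."""
    # The smallest class size is the target count for every class.
    min_count = df[label_col].value_counts().min()
    # Sample min_count rows per class without replacement.
    return (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_count, random_state=seed))
          .reset_index(drop=True)
    )

# Usage: balanced = random_undersample(train_df)
```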

Acknowledgements

We thank the authors of the two datasets for permission to use parts of their data and to publish our labeled dataset. We also thank Murat Sancak for his initial work on the topic in his Bachelor's thesis.

Author information

Corresponding author

Correspondence to Andreas Vogelsang.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Binder, M., Vogt, A., Bajraktari, A., Vogelsang, A. (2023). Automatically Classifying Kano Model Factors in App Reviews. In: Ferrari, A., Penzenstadler, B. (eds) Requirements Engineering: Foundation for Software Quality. REFSQ 2023. Lecture Notes in Computer Science, vol 13975. Springer, Cham. https://doi.org/10.1007/978-3-031-29786-1_17

  • DOI: https://doi.org/10.1007/978-3-031-29786-1_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-29785-4

  • Online ISBN: 978-3-031-29786-1

  • eBook Packages: Computer Science, Computer Science (R0)
