
Near Failure Analysis Using Dynamic Behavioural Data

  • Conference paper
Product-Focused Software Process Improvement (PROFES 2022)

Abstract

Automated testing is a safeguard against software regression and provides substantial benefits. However, it remains a challenging subject. Among other risks, test cases may be too specific, which makes them inefficient. Moreover, many forms of undesirable behaviour are compatible with a typical program’s specification yet still harm users. An efficient test should provide as much information as possible relative to the resources spent. This paper introduces near failure analysis, which complements testing activities by analysing dynamic behavioural metrics (e.g., execution time) in addition to explicit output values. The approach employs machine learning (ML) to classify the behaviour of a program as faulty or healthy based on dynamic data gathered throughout its executions over time. An ML-based model is designed and trained to predict whether or not an arbitrary version of a program is at risk of failure. A preliminary evaluation demonstrates promising results for the feasibility and effectiveness of near failure analysis.
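As a rough illustration of the classification step described in the abstract, the following sketch trains a classifier on per-execution behavioural metrics and predicts whether executions of a new program version look faulty. The feature names (execution time, memory usage), the synthetic data, and the choice of a random-forest classifier are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: classify program executions as healthy vs. faulty from
# dynamic behavioural metrics. Features, data, and classifier choice are
# assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical per-execution metrics: [execution time (ms), peak memory (MB)].
healthy = rng.normal(loc=[120.0, 50.0], scale=[10.0, 5.0], size=(500, 2))
faulty = rng.normal(loc=[180.0, 65.0], scale=[25.0, 10.0], size=(500, 2))

X = np.vstack([healthy, faulty])
y = np.array([0] * 500 + [1] * 500)  # 0 = healthy, 1 = faulty

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Train on behavioural data gathered from past, labelled executions.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Predict whether unseen executions are at risk of failure.
print(classification_report(y_test, clf.predict(X_test)))
```

In a real setting, the labels would come from known-faulty and known-healthy versions of the program under test, and the feature vector would aggregate whatever dynamic metrics are collected during test runs.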



Acknowledgements

The work is funded by the ELLIIT strategic research area (https://elliit.se), project A19 "Software Regression Testing with Near Failure Assertions".

Author information

Corresponding author

Correspondence to Masoumeh Taromirad.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Taromirad, M., Runeson, P. (2022). Near Failure Analysis Using Dynamic Behavioural Data. In: Taibi, D., Kuhrmann, M., Mikkonen, T., Klünder, J., Abrahamsson, P. (eds) Product-Focused Software Process Improvement. PROFES 2022. Lecture Notes in Computer Science, vol 13709. Springer, Cham. https://doi.org/10.1007/978-3-031-21388-5_12


  • DOI: https://doi.org/10.1007/978-3-031-21388-5_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21387-8

  • Online ISBN: 978-3-031-21388-5

  • eBook Packages: Computer Science, Computer Science (R0)
