DOI: 10.1145/3338906.3340442
Research article

Bridging the gap between ML solutions and their business requirements using feature interactions

Published: 12 August 2019

ABSTRACT

Machine Learning (ML) based solutions are becoming increasingly popular and pervasive. When testing such solutions, there is a tendency to focus on improving ML metrics such as F1-score and accuracy at the expense of ensuring business value and correctness through coverage of the business requirements. In this work, we adapt test planning methods from classical software engineering to ML solutions. We use a combinatorial modeling methodology to define the space of business requirements and map it to the ML solution data, and we use the notion of data slices to identify the weaker areas of the ML solution and strengthen them. We apply our approach to three real-world case studies and demonstrate its value.
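To make the data-slice idea concrete, the sketch below (not the authors' tool) scores slices defined by pairwise interactions of business-requirement attributes and flags those whose accuracy falls below a threshold. It assumes a pandas DataFrame of labeled evaluation records with model predictions attached; the column names, attribute list, and thresholds are hypothetical placeholders.

# Minimal sketch of per-slice evaluation over pairwise feature interactions.
# Not the authors' implementation; column names and thresholds are hypothetical.
from itertools import combinations

import pandas as pd

def weak_slices(df, attrs, label_col="label", pred_col="prediction",
                min_size=30, min_accuracy=0.8):
    """Return slices (pairs of attribute values) whose accuracy is below min_accuracy."""
    rows = []
    for a, b in combinations(attrs, 2):                     # every pairwise interaction
        for (va, vb), grp in df.groupby([a, b]):            # one data slice per value pair
            if len(grp) < min_size:                         # skip slices too small to judge
                continue
            acc = (grp[label_col] == grp[pred_col]).mean()  # per-slice accuracy
            if acc < min_accuracy:
                rows.append({"slice": f"{a}={va} & {b}={vb}",
                             "size": len(grp), "accuracy": acc})
    return pd.DataFrame(rows).sort_values("accuracy") if rows else pd.DataFrame(rows)

# Hypothetical usage on an evaluation set with predictions already attached:
# report = weak_slices(eval_df, attrs=["loan_purpose", "home_ownership", "term"])
# print(report)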

Published in

      ESEC/FSE 2019: Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
      August 2019
      1264 pages
ISBN: 9781450355728
DOI: 10.1145/3338906

      Copyright © 2019 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Acceptance Rates

Overall Acceptance Rate: 112 of 543 submissions, 21%
