Abstract
As software becomes more important and ubiquitous, high-quality software becomes crucial as well. We depend on the developers who write software to also maintain and improve its quality. When developers change software, they rely on continuous integration [6] and regression testing [15] to check that their changes do not break existing functionality. Continuous integration (CI) automates building and testing the software after every change; running the tests against the changed code is known as regression testing. The goal of regression testing is to let developers detect and fix faults early, ideally the moment the faults are introduced. Although regression testing is widely used in both industry and open source, it suffers from two main challenges: (1) it is costly, and (2) regression test suites often contain flaky tests, i.e., tests that can nondeterministically pass or fail on the same version of the code.
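One common way to detect flaky tests, used by tools such as iDFlakies [7], is simply to rerun a test several times on the same, unchanged code and flag it as flaky if its outcome varies. The sketch below is a minimal illustration of that rerun-based idea, not any tool's actual implementation; `stable_test` and `unstable_test` are hypothetical stand-ins for real tests.

```python
import random

def detect_flaky(test, reruns=10):
    """Flag a test as flaky if its outcome varies across
    reruns on the same, unchanged code (rerun-based detection)."""
    outcomes = {test() for _ in range(reruns)}  # True = pass, False = fail
    return len(outcomes) > 1

# Hypothetical deterministic test: always passes.
def stable_test():
    return True

# Hypothetical nondeterministic test: outcome depends on randomness.
random.seed(0)  # seeded only so this example is reproducible
def unstable_test():
    return random.random() < 0.5

print(detect_flaky(stable_test))    # -> False
print(detect_flaky(unstable_test))  # -> True
```

Note that rerunning bounds only false positives, not false negatives: a flaky test may pass on all of a finite number of reruns, which is one reason detecting flaky tests reliably remains an open challenge.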
References
[1] J. Bell, O. Legunsen, M. Hilton, L. Eloussi, T. Yung, and D. Marinov. "DeFlaker: Automatically detecting flaky tests." In ICSE, pp. 433--444, 2018.
[2] H. Esfahani, J. Fietz, Q. Ke, A. Kolomiets, E. Lan, E. Mavrinac, W. Schulte, N. Sanches, and S. Kandula. "CloudBuild: Microsoft's distributed and caching build service." In ICSE, pp. 11--20, 2016.
[3] M. Gligoric, L. Eloussi, and D. Marinov. "Practical regression test selection with dynamic file dependencies." In ISSTA, pp. 211--222, 2015.
[4] M. Harman and P. O'Hearn. "From start-ups to scale-ups: Opportunities and open problems for static and dynamic program analysis." In SCAM, pp. 1--23, 2018.
[5] K. Herzig, M. Greiler, J. Czerwonka, and B. Murphy. "The art of testing less without sacrificing quality." In ICSE, pp. 483--493, 2015.
[6] M. Hilton, N. Nelson, T. Tunnell, D. Marinov, and D. Dig. "Trade-offs in continuous integration: Assurance, security, and flexibility." In ESEC/FSE, pp. 197--207, 2017.
[7] W. Lam, R. Oei, A. Shi, D. Marinov, and T. Xie. "iDFlakies: A framework for detecting and partially classifying flaky tests." In ICST, pp. 312--322, 2019.
[8] Q. Luo, F. Hariri, L. Eloussi, and D. Marinov. "An empirical analysis of flaky tests." In FSE, pp. 643--653, 2014.
[9] A. Memon, Z. Gao, B. Nguyen, S. Dhanda, E. Nickell, R. Siemborski, and J. Micco. "Taming Google-scale continuous testing." In ICSE-SEIP, pp. 233--242, 2017.
[10] G. Rothermel and M. J. Harrold. "A safe, efficient regression test selection technique." TOSEM, vol. 6, no. 2, pp. 173--210, 1997.
[11] A. Shi, A. Gyori, M. Gligoric, A. Zaytsev, and D. Marinov. "Balancing trade-offs in test-suite reduction." In FSE, pp. 246--256, 2014.
[12] A. Shi, A. Gyori, S. Mahmood, P. Zhao, and D. Marinov. "Evaluating test-suite reduction in real software evolution." In ISSTA, pp. 84--94, 2018.
[13] A. Shi, W. Lam, R. Oei, T. Xie, and D. Marinov. "iFixFlakies: A framework for automatically fixing order-dependent flaky tests." In ESEC/FSE, pp. 545--555, 2019.
[14] A. Shi, S. Thummalapenta, S. K. Lahiri, N. Bjørner, and J. Czerwonka. "Optimizing test placement for module-level regression testing." In ICSE, pp. 689--699, 2017.
[15] S. Yoo and M. Harman. "Regression testing minimization, selection and prioritization: A survey." STVR, vol. 22, no. 2, pp. 67--120, 2012.
[16] A. Zeller and R. Hildebrandt. "Simplifying and isolating failure-inducing input." TSE, vol. 28, no. 2, pp. 183--200, 2002.
[17] S. Zhang, D. Jalali, J. Wuttke, K. Mulu, W. Lam, M. D. Ernst, and D. Notkin. "Empirically revisiting the test independence assumption." In ISSTA, pp. 385--396, 2014.
Index Terms
- SIGSOFT Outstanding Doctoral Dissertation Award: Improving Regression Testing Efficiency and Reliability via Test-Suite Transformations