Abstract
Running unit test suites with contemporary tools such as JUnit can show the presence of bugs, but not their locations. This differs from checking a program with a compiler, which always points the programmer to the most likely causes of the errors it detects. We argue that test suites and the programs under test contain enough information to exclude many locations in the source as causes of a test case's failure, and to rank the remaining locations according to derived evidence of their faultiness. We present a framework for the management of fault locators whose error diagnoses are based on data about a program and its test cases, especially data collected during test runs, and demonstrate that, with a couple of simple fault locators, it performs reasonably well in different evaluation scenarios.
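To make the idea concrete, the Java sketch below ranks source locations by how strongly failing (versus passing) JUnit-style test runs implicate them: locations never executed by a failing test are excluded outright, and the rest are ordered by a simple coverage-based score. This is an illustrative heuristic only, not one of the fault locators evaluated in the paper, and all class, method, and test names are hypothetical.

import java.util.*;

// Minimal sketch of the idea from the abstract: locations never executed by a
// failing test are excluded, and the remaining locations are ranked by how
// strongly failing (vs. passing) tests implicate them. The scoring heuristic
// is illustrative only and is not one of the paper's fault locators.
public class SimpleFaultLocator {

    // Coverage record of one test run: the source locations (e.g. methods) it executed.
    record TestRun(String name, boolean passed, Set<String> covered) {}

    // Returns candidate locations ordered from most to least suspicious.
    static List<Map.Entry<String, Double>> rank(List<TestRun> runs) {
        long failedTotal = runs.stream().filter(r -> !r.passed()).count();
        long passedTotal = runs.size() - failedTotal;

        // Only locations executed by at least one failing test can be to blame.
        Set<String> candidates = new HashSet<>();
        runs.stream().filter(r -> !r.passed()).forEach(r -> candidates.addAll(r.covered()));

        Map<String, Double> score = new HashMap<>();
        for (String loc : candidates) {
            double failRatio = (double) runs.stream()
                .filter(r -> !r.passed() && r.covered().contains(loc)).count() / failedTotal;
            double passRatio = passedTotal == 0 ? 0.0 : (double) runs.stream()
                .filter(r -> r.passed() && r.covered().contains(loc)).count() / passedTotal;
            // High when many failing but few passing tests execute the location.
            score.put(loc, failRatio / (failRatio + passRatio));
        }
        return score.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .toList();
    }

    public static void main(String[] args) {
        List<TestRun> runs = List.of(
            new TestRun("testEncode",    false, Set.of("Codec.encode", "Codec.pad")),
            new TestRun("testDecode",    true,  Set.of("Codec.decode", "Codec.pad")),
            new TestRun("testRoundTrip", false, Set.of("Codec.encode", "Codec.decode", "Codec.pad")));
        rank(runs).forEach(e -> System.out.printf("%-14s %.2f%n", e.getKey(), e.getValue()));
    }
}

On the toy coverage data above, Codec.encode ranks first because both failing tests and no passing test execute it. The framework described in the paper manages multiple such fault locators rather than relying on a single fixed heuristic.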
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Steimann, F., Eichstädt-Engelen, T., Schaaf, M. (2008). Towards Raising the Failure of Unit Tests to the Level of Compiler-Reported Errors. In: Paige, R.F., Meyer, B. (eds) Objects, Components, Models and Patterns. TOOLS EUROPE 2008. Lecture Notes in Business Information Processing, vol 11. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-69824-1_5
DOI: https://doi.org/10.1007/978-3-540-69824-1_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-69823-4
Online ISBN: 978-3-540-69824-1