
Maximising the information gained from an experimental analysis of code inspection and static analysis for concurrent Java components

Published: 21 September 2006

Abstract

The results of empirical studies are limited to particular contexts, difficult to generalise, and the studies themselves are expensive to perform. Despite these problems, empirical studies in software engineering can be made effective, and they are important to both researchers and practitioners. The key to their effectiveness lies in maximising the information that can be gained: examining existing studies, conducting a power analysis to determine an accurate minimum sample size, and benefiting from previous studies through replication. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of verification and validation (V&V) of concurrent Java components. The combination of these V&V technologies was shown to be cost-effective despite the size of the study, thereby contributing to research in V&V technology evaluation.
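The a-priori power analysis the abstract refers to can be sketched numerically. The snippet below is an illustrative calculation only, not the authors' actual procedure: it uses the standard normal-approximation formula for a two-sided, two-sample comparison with Cohen's conventional values (a "large" effect size d = 0.8, α = 0.05, power = 0.8); an exact t-based calculation would give a value one or two participants higher.

```python
from math import ceil
from statistics import NormalDist


def min_sample_size(effect_size: float, alpha: float = 0.05,
                    power: float = 0.8) -> int:
    """Minimum participants per group for a two-sided, two-sample
    comparison, via the normal approximation n = 2 * ((z_{1-a/2} + z_power) / d)^2.
    The exact t-based answer is slightly larger."""
    z = NormalDist().inv_cdf  # standard-normal quantile function (Python 3.8+)
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n)


# Cohen's "large" (0.8) and "medium" (0.5) effect sizes with
# conventional alpha = 0.05 and power = 0.8.
print(min_sample_size(0.8))  # → 25 per group
print(min_sample_size(0.5))  # → 63 per group
```

The point the abstract makes is visible in the numbers: detecting a medium rather than a large effect more than doubles the required sample, which is why small studies must justify their size with an explicit power analysis.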


Cited By

  • (2014) An Overview of Experimental Studies on Software Inspection Process. Enterprise Information Systems, 10.1007/978-3-319-09492-2_8, 118-134. Online publication date: 25-Jul-2014.
  • (2007) An Iterative Empirical Strategy for the Systematic Selection of a Combination of Verification and Validation Technologies. Proceedings of the 5th International Workshop on Software Quality, 10.1109/WOSQ.2007.4. Online publication date: 20-May-2007.
  • (2007) Selecting V&V Technology Combinations. Proceedings of the 12th IEEE International Conference on Engineering Complex Computer Systems, 10.1109/ICECCS.2007.40, 87-96. Online publication date: 11-Jul-2007.
  • (2007) Comparing the Cost-Effectiveness of Statically Analysing and Model Checking Concurrent Java Components for Deadlocks. Proceedings of the 2007 Australian Software Engineering Conference, 10.1109/ASWEC.2007.16, 223-232. Online publication date: 10-Apr-2007.

        Published In

        ISESE '06: Proceedings of the 2006 ACM/IEEE international symposium on Empirical software engineering
        September 2006
        388 pages
        ISBN:1595932186
        DOI:10.1145/1159733

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. concurrent Java components
        2. controlled experiment
        3. verification and validation


