As emphasized in other chapters of this book, useful results in empirical software engineering require a variety of data to be collected through different studies; focusing on a single context or a single metric rarely tells a useful story. In each study, however, the local context is liable to impose its own constraints on study design, the metrics to be collected, and other factors. Thus, even when all the studies focus on the same phenomenon (say, software quality), they can validly collect a number of different measures that are not compatible with one another (say, the number of defects fixed during development, the number of problem reports received from customers, or the total effort spent on rework). Can anything be done to build a useful body of knowledge from these disparate pieces? This chapter addresses the strategies that have been applied to date to draw conclusions across such varied but valid data sets. Key approaches are compared and the kinds of data to which each is best suited are identified. Our analysis, together with the associated lessons learned, provides decision support for readers interested in choosing and using such approaches to build useful theories.
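To make the synthesis problem concrete, the sketch below illustrates one of the simplest strategies for combining studies whose measures are incompatible: vote-counting, in which each study is reduced to the direction of its observed effect and the directions are tallied. The study names, metric values, and data structure are hypothetical illustrations, not results or methods taken from this chapter.

```python
# Minimal vote-counting sketch: combine studies that measured "software quality"
# with incompatible metrics by reducing each study to a direction of effect.
# All data below are hypothetical and purely illustrative.

from dataclasses import dataclass

@dataclass
class Study:
    name: str
    metric: str          # what the study actually measured
    treatment: float     # value observed with the practice under evaluation
    control: float       # value observed without it
    lower_is_better: bool

STUDIES = [
    Study("A", "defects fixed during development", 31, 48, True),
    Study("B", "customer problem reports", 12, 11, True),
    Study("C", "rework effort (person-hours)", 160, 210, True),
]

def direction(s: Study) -> int:
    """+1 if the study favours the treatment, -1 if it favours the control, 0 if tied."""
    if s.treatment == s.control:
        return 0
    better = s.treatment < s.control if s.lower_is_better else s.treatment > s.control
    return 1 if better else -1

votes = [direction(s) for s in STUDIES]
print(f"{votes.count(1)} of {len(STUDIES)} studies favour the treatment, "
      f"{votes.count(-1)} favour the control")
```

Note that vote-counting discards the magnitude of each effect and the size of each study; when the underlying data permit it, synthesizing standardized effect sizes (as in formal meta-analysis) retains more information.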
Copyright information
© 2008 Springer-Verlag London Limited
Cite this chapter
Shull, F., Feldmann, R.L. (2008). Building Theories from Multiple Evidence Sources. In: Shull, F., Singer, J., Sjøberg, D.I.K. (eds) Guide to Advanced Empirical Software Engineering. Springer, London. https://doi.org/10.1007/978-1-84800-044-5_13
Print ISBN: 978-1-84800-043-8
Online ISBN: 978-1-84800-044-5