Article · DOI: 10.1145/1254882.1254904

Synthetic designs: a new form of true experimental design for use in information systems development

Published: 12 June 2007

ABSTRACT

Computer scientists and software engineers seldom use experimental methods despite frequent calls to do so. The problem may lie with the shortcomings of traditional experimental methods. We introduce a new form of experimental design, the synthetic design, which addresses these shortcomings. Compared with classical experimental designs (between-subjects, within-subjects, and matched-subjects), synthetic designs can offer substantial reductions in sample size, cost, time, and effort; increased statistical power; and fewer threats to validity (internal, external, and statistical conclusion). The new design is a variation of the within-subjects design in which each system user serves in only a single treatment condition; system performance scores for all other treatment conditions are derived synthetically, without repeated testing of each subject. Though not applicable in all situations, the design can be used in the development and testing of some computer systems, provided that user behavior is unaffected by the version of the system being used. We justify synthetic designs on three grounds: the design has been used successfully in the development of computerized mug shot systems, showing marked advantages over traditional designs; a detailed comparison favors it over traditional designs on 17 of the 18 criteria considered; and an assessment shows that it satisfies all the requirements of true experiments (albeit in a novel way).
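The abstract's core mechanism — each subject is tested once, and scores for every other system version are computed offline from that subject's recorded behavior — can be sketched as follows. This is a minimal illustration under assumed details, not the authors' method: the mug-shot database, the feature vectors, the two ranking variants, and all numbers are hypothetical stand-ins for whatever the system under test actually records.

```python
import random
import statistics

random.seed(0)

# Hypothetical database of 100 faces, each a 5-dimensional feature vector.
DATABASE = [[random.random() for _ in range(5)] for _ in range(100)]

def version_a(desc, face):
    # System version A: unweighted city-block distance (lower = better match).
    return sum(abs(d - f) for d, f in zip(desc, face))

def version_b(desc, face):
    # System version B: a design variant weighting the first two features more.
    weights = [2.0, 2.0, 1.0, 1.0, 1.0]
    return sum(w * abs(d - f) for w, d, f in zip(weights, desc, face))

def rank_of_target(score_fn, desc, target_idx):
    # Rank (1 = best) of the true target when the database is sorted by
    # this version's score for the subject's stored description.
    order = sorted(range(len(DATABASE)), key=lambda i: score_fn(desc, DATABASE[i]))
    return order.index(target_idx) + 1

# Each subject views one target face and produces a noisy description ONCE.
subjects = []
for _ in range(30):
    target = random.randrange(len(DATABASE))
    desc = [f + random.gauss(0, 0.05) for f in DATABASE[target]]
    subjects.append((desc, target))

# Synthetic step: every system version is scored against the same stored
# descriptions, yielding within-subjects data without retesting anyone.
ranks_a = [rank_of_target(version_a, d, t) for d, t in subjects]
ranks_b = [rank_of_target(version_b, d, t) for d, t in subjects]

print("mean target rank, version A:", statistics.mean(ranks_a))
print("mean target rank, version B:", statistics.mean(ranks_b))
```

Because both rank lists come from the same 30 subjects, the versions can be compared with a paired analysis, while each subject only ever interacted with one retrieval session — the condition the abstract requires (user behavior unaffected by system version) is what licenses reusing the descriptions.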


Published in

SIGMETRICS '07: Proceedings of the 2007 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, June 2007, 398 pages. ISBN: 9781595936394. DOI: 10.1145/1254882.

Also in: ACM SIGMETRICS Performance Evaluation Review, Volume 35, Issue 1 (SIGMETRICS '07 Conference Proceedings), June 2007, 382 pages. ISSN: 0163-5999. DOI: 10.1145/1269899.

      Copyright © 2007 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


Acceptance Rates

Overall acceptance rate: 459 of 2,691 submissions (17%)
