An exploration of bias in meta-analysis: the case of technology integration research in higher education

Abstract

This article reports a second-order meta-analysis and an exploration of bias in the technology integration literature in higher education. Thirteen meta-analyses, dated from 2000 to 2014, were selected for inclusion based on the questions they asked and the presence of adequate statistical information to conduct a quantitative synthesis. The weighted random-effects average was g++ = 0.393, p < .001. The article goes on to report an assessment of the methodological quality of the thirteen studies based on Cooper’s (2010) seven stages in the development of a meta-analysis. Two meta-analyses were found to have five out of seven stages in which methodological flaws could potentially create biased results, five contained two flawed stages, and one contained one flawed stage. Four of the stages in which methodological flaws can create bias are described in detail. The final section attempts to determine how much influence the methodological flaws exerted on the results of the second-order meta-analysis.
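
The pooled estimate above is a weighted random-effects average of effect sizes. For readers unfamiliar with that calculation, the following is a minimal sketch of one common way to compute such an average, the DerSimonian–Laird procedure, using hypothetical effect sizes and variances; it is illustrative only and is not the data or software used in this article.

```python
# A minimal, illustrative sketch (not the authors' analysis) of a weighted
# random-effects average effect size, using the DerSimonian-Laird estimator
# of between-study variance. The g values and variances are hypothetical.
import numpy as np

g = np.array([0.25, 0.41, 0.58, 0.33, 0.47])       # hypothetical Hedges' g per meta-analysis
v = np.array([0.010, 0.022, 0.015, 0.030, 0.012])  # hypothetical sampling variances

# Fixed-effect weights, weighted mean, and homogeneity statistic Q
w = 1.0 / v
fe_mean = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - fe_mean) ** 2)

# DerSimonian-Laird estimate of between-study variance (tau^2), truncated at zero
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)

# Random-effects weights, pooled average (g++), standard error, and 95% CI
w_re = 1.0 / (v + tau2)
g_pp = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"g++ = {g_pp:.3f}, SE = {se:.3f}, "
      f"95% CI = [{g_pp - 1.96 * se:.3f}, {g_pp + 1.96 * se:.3f}]")
```

In a second-order meta-analysis, each g in such a calculation would itself be the summary effect of one first-order meta-analysis, weighted by the inverse of its variance plus the between-meta-analysis variance.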

References

Asterisks (*) mark meta-analyses included in the second-order meta-analysis. Double asterisks (**) mark meta-analyses that were considered but excluded.

  • Abrami, P. C., Borokhovski, E., Bernard, R. M., Wade, C. A., Tamim, R., Persson, T., et al. (2010). Issues in conducting and disseminating brief reviews. Evidence and Policy: A Journal of Research, Debate and Practice, 6(3), 371–389. doi:10.1332/174426410X524866.

  • *Bayraktar, S. (2000). A meta-analysis study of the effectiveness of computer assisted instruction in science education (Unpublished doctoral dissertation). Ohio State University, Columbus, OH (UMI Number: 9980398).

  • Bernard, R. M. (2014). Things I have learned about meta-analysis since 1990: Reducing bias in search of “The Big Picture.” Canadian Journal of Learning and Technology, 40(3). http://www.cjlt.ca/index.php/cjlt/article/view/870

  • Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended learning and technology use in higher education: From the general to the applied. Journal of Computing in Higher Education, 26(1), 87–122. doi:10.1007/s12528-013-9077-3.

  • Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, A., Tamim, R., Surkes, M., et al. (2009). A meta-analysis of three interaction treatments in distance education. Review of Educational Research, 79(3), 1243–1289. doi:10.3102/0034654309333844.

  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. (2005). Comprehensive meta-analysis version 2. Englewood, NJ: Biostat.

  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. Chichester: Wiley.

  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1, 97–111. doi:10.1002/jrsm.12.

  • *Christmann, E. P., & Badgett, J. L. (2000). The comparative effectiveness of CAI on collegiate performance. Journal of Computing in Higher Education, 11(2), 91–103.

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

  • Cook, D. A. (2009). The failure of e-learning research to inform educational practice, and what we can do about it. Medical Teacher, 31(2), 158–162. doi:10.1080/01421590802691393.

  • Cooper, H. M. (2010). Research synthesis and meta-analysis: A step-by-step approach (4th ed.). Thousand Oaks, CA: Sage.

  • Cooper, H. M., & Koenka, A. C. (2012). The overview of review: Unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. American Psychologist, 67(6), 446–462. doi:10.1037/a0027119.

  • Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. doi:10.1111/j.0006-341X.2000.00455.x.

  • Glass, G. V. (1976). Primary, secondary and meta-analysis of research. Educational Researcher, 5(10), 3–8. doi:10.3102/0013189X005010003.

  • Hammerstrøm, K., Wade, A., & Jørgensen, A. M. K. (2010). Searching for studies: A guide to information retrieval for Campbell Systematic Reviews, Supplement 1. Oslo, Norway: The Campbell Collaboration. doi:10.4073/csrs.2010.1. http://www.campbellcollaboration.org/resources/research/new_information_retrieval_guide.php

  • Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

  • Hedges, L. V., & Olkin, I. (1985). Statistical aspects of meta-analysis. New York, NY: Academic Press.

  • Higgins, J. P. T., Lane, P. W., Anagnostelis, B., Anzures-Cabrera, J., Baker, N. F., Cappelleri, J. C., et al. (2012). A tool to assess the quality of a meta-analysis. Research Synthesis Methods, 4, 351–366. doi:10.1002/jrsm.1092.

  • Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). Thousand Oaks, CA: Sage.

  • *Hsu, Y.-C. (2003). The effectiveness of computer-assisted instruction in statistics education: A meta-analysis (Unpublished doctoral dissertation). The University of Arizona, Tucson, AZ (UMI Number: 3089963).

  • Jackson, G. B. (1980). Methods for integrative reviews. Review of Educational Research, 50, 438–460. doi:10.3102/00346543050003438.

  • Karich, A. C., Burns, M. K., & Maki, K. E. (2014). Updated meta-analysis of learner control within educational technology. Review of Educational Research. OnlineFirst, March 10, 2014. doi:10.3102/0034654314526064

  • *Koufogiannakis, D., & Wiebe, N. (2006). Effective methods for teaching information literacy skills to undergraduate students: A systematic review and meta-analysis. Evidence-Based Library and Information Practice, 1(3), 3–43.

  • *Larwin, K., & Larwin, D. (2011). A meta-analysis examining the impact of computer-assisted instruction on postsecondary statistics education: 40 years of research. Journal of Research on Technology in Education, 43(3), 253–278. http://www.editlib.org/p/54098/

  • Lefebvre, C., Manheimer, E., & Glanville, J. (2011). Chapter 6: Searching for studies. In J. P. T. Higgins & S. Green (Eds.), Cochrane handbook for systematic reviews of interventions version 5.1.0 (updated March 2011). http://www.cochrane-handbook.org

  • Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

  • **Merchant, Z., Goetz, E. T., Cifuentes, L., Keeney-Kennicutt, W., & Davis, T. R. (2014). Effectiveness of virtual reality-based instruction on students’ learning outcomes in K-12 and higher education: A meta-analysis. Computers & Education, 70, 29–40.

  • *Michko, G. M. (2007). A meta-analysis of the effects of teaching and learning with technology in undergraduate engineering education (Unpublished doctoral dissertation). University of Houston, Houston, TX (UMI Number: 3289807).

  • **Rolfe, V., & Gray, D. (2011). Are multimedia resources effective in life science education? A meta-analysis. Bioscience Education, 18(December). www.bioscience.heacademy.ac.uk/journal/vol18/beej-18-3.pdf

  • Rothstein, H. R., Sutton, A. J., & Borenstein, M. (Eds.). (2005). Publication bias in meta-analysis—Prevention, assessment and adjustments. Chichester: Wiley.

  • Scammacca, N., Roberts, G., & Stuebing, K. K. (2013). Meta-analysis with complex research designs: Dealing with dependence from multiple measures and multiple group comparisons. Review of Educational Research, 84(3), 328–364. doi:10.3102/0034654313500826.

  • *Schenker, J. D. (2007). The effectiveness of technology use in statistics instruction in higher education: A meta-analysis using hierarchical linear modeling (Unpublished doctoral dissertation). Kent State University, Kent, OH.

  • Schlosser, R. W., Wendt, O., Angermeier, K., & Shetty, M. (2005). Searching for and finding evidence in augmentative and alternative communication: Navigating a scattered literature. Augmentative and Alternative Communication, 21(4), 233–255. doi:10.1080/07434610500194813.

  • Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R. M., Abrami, P. C., Surkes, M. A., et al. (2014). The effects of technology use in postsecondary education: A meta-analysis of classroom applications. Computers & Education, 72, 271–291. doi:10.1016/j.compedu.2013.11.002.

  • Shea, B. J., Grimshaw, J. M., Wells, G. A., Boers, M., Andersson, N., Hamel, C., et al. (2007). Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology, 7(10). doi:10.1186/1471-2288-7-10.

  • *Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64, 489–528. doi:10.1111/j.1744-6570.2011.01190.x.

  • *Sosa, G. W., Berger, D. E., Saw, A. T., & Mary, J. C. (2011). Effectiveness of computer-assisted instruction in statistics: A meta-analysis. Review of Educational Research, 81(1), 97–128. doi:10.3102/0034654310378174.

  • Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research, 81(3), 4–28. doi:10.3102/0034654310393361.

  • *Tekbiyik, A., & Akdeniz, A. R. (2010). A meta-analytical investigation of the influence of computer assisted instruction on achievement in science. Asia-Pacific Forum on Science Learning and Teaching, 11(2). http://www.ied.edu.hk/apfslt/v11_issue2/tekbiyik/index.htm

  • *Timmerman, C. E., & Kruepke, A. (2006). Computer-assisted instruction, media richness and college student performance. Communication Education, 55(1), 73–104. doi:10.1080/03634520500489666.

  • Valentine, J. C., & Cooper, H. M. (2008). A systematic and transparent approach for assessing the methodological quality of intervention effectiveness research: The Study Design and Implementation Assessment Device (Study DIAD). Psychological Methods, 13(2), 130–149. doi:10.1037/1082-989X.13.2.130.

  • **Vogel, J. J., Vogel, D. S., Cannon-Bowers, J., Bowers, C. A., Muse, K., & Wright, M. (2006). Computer gaming and interactive simulations for learning: A meta-analysis. Journal of Educational Computing Research, 34(3), 229–243.

  • Viechtbauer, W., & Cheung, M. W.-L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1, 112–125. doi:10.1002/jrsm.11.

  • *Zhao, Y. (2003). Recent developments in technology and language learning: A literature review and meta-analysis. CALICO Journal, 21(1), 7–27.

Acknowledgments

The development of this article was supported in part by a grant to Bernard and Schmid from the Social Sciences and Humanities Research Council of Canada.

Author information


Correspondence to Robert M. Bernard.

Appendix: Evaluation criteria within Cooper’s (2010) categories

1. Formulating the Problem

1.1 Research Question: Are the research objectives and/or questions clearly stated?

1.2 Context of the M-A: Are the purposes of the M-A described within the context of prior work and current practice?

1.3 Time Frame: Is the time frame defined and adequately justified in the context of the research question and prior M-As?

1.4 Contextual Positioning of the Research Problem: Is the rationale for the M-A adequate, conceptually relevant and supported by empirical evidence?

1.5 Experimental and Control Groups: Are the experimental and control groups clearly defined and described in detail?

1.6 Outcome Measures: Are outcome measures relevant to the research question and representative of the outcomes found in real classrooms?

2. Searching the Literature

2.1 Inclusion Criteria: Are the inclusion criteria clearly and operationally stated and described in detail?

2.2 Resources Used: Are the resources used to identify relevant literature representative of the field and exhaustive?

2.3 Literature Included: Is the included literature exhaustive, encompassing all types of published and unpublished literature?

2.4 Search Strategy: Is the list of search terms provided and appropriate for each individual source (e.g., modifying key words for specific databases)?

3. Extracting Effect Sizes and Coding Study Features

3.1 Effect Size Extraction: Is effect size extraction implemented by at least two raters with a reasonable level of inter-rater reliability?

3.2 Study Feature Coding: Is study feature coding implemented by at least two raters with reasonable inter-rater reliability?

4. Methodological Quality of the Data

4.1 Validity of Included Studies: Are all aspects of validity explicitly and operationally defined and consistently applied across studies?

4.2 Publication Bias: Are procedures for addressing publication bias adequately substantiated and reported in detail?

4.3 Independence of Data: Is the issue of dependency addressed in detail, with methods for assuring data independence being appropriate and adequately described?

4.4 Effect Size Metrics and Extraction Procedures: Are the effect size metrics and extraction procedures used appropriate and fully described, including necessary transformations?

4.5 Treatment of Outliers: Are criteria and procedures for identifying and treating outliers adequately substantiated and reported in detail?

5. Synthesizing Effect Sizes

5.1 Overall Analyses: Is the overall analysis performed according to standard procedures (e.g., correct model use, homogeneity assessed, standard errors reported, confidence intervals reported)?

5.2 Moderator Variable Analyses: Are moderator variable analyses performed according to the proper analytical model, and is appropriate information reported (e.g., Q-between and test statistics provided)? (A computational sketch of this check, together with the inter-rater agreement check of stage 3, follows the appendix.)

5.3 Post hoc Analysis: If appropriate to the analysis, are post hoc tests conducted using appropriate measures for controlling Type I error?

6. Interpreting Evidence

6.1 Reporting Statistical Results: Are the appropriate statistics supplied for all analyses and explained in enough detail that the reader will understand the findings?

6.2 Appropriate Interpretation: Are the results interpreted appropriately and correctly?

7. Presenting the Results

7.1 Discussing Results: Does the discussion relate the results to previous research?

7.2 Emphasis: Does the interpretation place emphasis on the main findings?

7.3 Limitations to the Results: Does the discussion expose and explain limitations to the M-A?

7.4 Application to Practice: Does the discussion provide advice to other researchers, practitioners, policy makers, etc.?
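
To make the statistical criteria in stages 3 and 5 more concrete, the sketch below illustrates, on hypothetical data, two of the checks a reviewer would look for: inter-rater agreement on coding decisions (stages 3.1–3.2), estimated here with Cohen's kappa, and a simple fixed-effect moderator test (stages 5.1–5.2) that partitions total heterogeneity (Q) into between- and within-group components. It is an illustration of the criteria, not the procedure used in any of the reviewed meta-analyses.

```python
# A minimal sketch, with hypothetical data, of two appendix checks:
# (a) inter-rater agreement on coding decisions (Cohen's kappa, stages 3.1-3.2), and
# (b) a fixed-effect moderator test (stage 5.2) partitioning heterogeneity into
#     Q-between and Q-within across subgroups of effect sizes.
import numpy as np

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical decisions."""
    a, b = np.asarray(coder_a), np.asarray(coder_b)
    cats = np.unique(np.concatenate([a, b]))
    p_obs = np.mean(a == b)                                        # observed agreement
    p_exp = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

def q_statistic(g, v):
    """Fixed-effect homogeneity statistic Q for one group of effect sizes."""
    g, w = np.asarray(g), 1.0 / np.asarray(v)
    mean = np.sum(w * g) / np.sum(w)
    return np.sum(w * (g - mean) ** 2)

# Hypothetical coding decisions by two raters (e.g., study design category)
kappa = cohens_kappa(["rct", "quasi", "rct", "rct", "quasi"],
                     ["rct", "quasi", "rct", "quasi", "quasi"])

# Hypothetical effect sizes and variances split by a moderator
groups = {
    "published":   ([0.45, 0.52, 0.38], [0.020, 0.030, 0.015]),
    "unpublished": ([0.22, 0.30, 0.18], [0.025, 0.020, 0.030]),
}
q_within = sum(q_statistic(g, v) for g, v in groups.values())
q_total = q_statistic(np.concatenate([g for g, _ in groups.values()]),
                      np.concatenate([v for _, v in groups.values()]))
q_between = q_total - q_within
print(f"kappa = {kappa:.2f}, Q_between = {q_between:.2f}")
```

Q-between is then compared against a chi-square distribution with (number of subgroups − 1) degrees of freedom, and kappa values of roughly .80 or higher are conventionally read as adequate coder agreement.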

Cite this article

Bernard, R.M., Borokhovski, E., Schmid, R.F. et al. An exploration of bias in meta-analysis: the case of technology integration research in higher education. J Comput High Educ 26, 183–209 (2014). https://doi.org/10.1007/s12528-014-9084-z
