
Improving the Eligibility of Task-Based fMRI Studies for Meta-Analysis: A Review and Reporting Recommendations

  • Review
  • Published in: Neuroinformatics

Abstract

Decisions made during the analysis and reporting of an fMRI study influence whether that study can be entered into a meta-analysis. In a meta-analysis, the results of different studies on the same topic are combined; for this to work, all studies must provide equivalent pieces of information. Task-based fMRI studies, however, show a large variety of reporting styles. Several meta-analysis methods have been developed to deal with these reporting practices, each requiring a specific type of input. In this manuscript we provide an overview of these meta-analysis methods and the input they require. We then discuss how decisions made during a study influence its eligibility for meta-analysis, and finally we formulate recommendations on how to report an fMRI study so that it is compatible with as many meta-analysis methods as possible.


[Figs. 1–5]


References

  • Acar, F., Maumet, C., Heuten, T., Vervoort, M., Bossier, H., Seurinck, R., & Moerkerke, B. (2022). Review paper: Reporting practices for task fMRI studies. Neuroinformatics.

  • Acar, F., Seurinck, R., Eickhoff, S. B., & Moerkerke, B. (2018). Assessing robustness against potential publication bias in Activation Likelihood Estimation (ALE) meta-analyses for fMRI. PLoS ONE, 13, 1–23. https://doi.org/10.1371/journal.pone.0208177

  • Bossier, H., Nichols, T. E., & Moerkerke, B. (2019). Standardized effect sizes and image-based meta-analytical approaches for fMRI data. bioRxiv. https://doi.org/10.1101/865881

  • Bowring, A., Maumet, C., & Nichols, T. E. (2019). Exploring the impact of analysis software on task fMRI results. Human Brain Mapping, 40, 3362–3384.

  • Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.

  • Carp, J. (2012). The secret lives of experiments: Methods reporting in the fMRI literature. NeuroImage, 63, 289–300.

  • Chen, G., Taylor, P. A., & Cox, R. W. (2017). Is the statistic value all we should care about in neuroimaging? NeuroImage, 147, 952–959.

  • Cooper, H., & Hedges, L. V. (2009). The handbook of research synthesis. Russell Sage Foundation.

  • Costafreda, S. G. (2012). Parametric coordinate-based meta-analysis: Valid effect size meta-analysis of studies with differing statistical thresholds. Journal of Neuroscience Methods, 210, 291–300.

  • Cox, R. W. (1996). AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29, 162–173.

  • Cox, R. W., & Hyde, J. S. (1997). Software tools for analysis and visualization of fMRI data. NMR in Biomedicine, 10, 171–178.

  • Durnez, J., Moerkerke, B., & Nichols, T. E. (2014). Post-hoc power estimation for topological inference in fMRI. NeuroImage, 84, 45–64.

  • Eickhoff, S. B., Nichols, T. E., Laird, A. R., Hoffstaedter, F., Amunts, K., Fox, P. T., & Eickhoff, C. R. (2016). Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation. NeuroImage, 137, 70–85.

  • Eickhoff, S. B., Bzdok, D., Laird, A. R., Kurth, F., & Fox, P. T. (2012). Activation likelihood estimation revisited. NeuroImage, 59, 2349–2361.

  • Eickhoff, S. B., Laird, A. R., Grefkes, C., Wang, L. E., Zilles, K., & Fox, P. T. (2009). Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30, 2907–2926.

  • Fisher, R. A. (1925). Statistical methods for research workers. Oliver and Boyd.

  • Fox, P. T., Laird, A. R., Fox, S. P., Fox, P. M., Uecker, A. M., Crank, M., & Lancaster, J. L. (2005). BrainMap taxonomy of experimental design: Description and evaluation. Human Brain Mapping, 25, 185–198.

  • Fox, P. T., & Lancaster, J. L. (2002). Mapping context and content: The BrainMap model. Nature Reviews Neuroscience, 3, 319–321.

  • Friston, K. J., Stephan, K. E., Lund, T. E., Morcom, A., & Kiebel, S. (2005). Mixed-effects and fMRI studies. NeuroImage, 24, 244–252.

  • Gorgolewski, K., Esteban, O., Schaefer, G., Wandell, B., & Poldrack, R. (2017). OpenNeuro—a free online platform for sharing and analysis of neuroimaging data (p. 1677). Organization for Human Brain Mapping, Vancouver, Canada.

  • Gorgolewski, K. J., Alfaro-Almagro, F., Auer, T., Bellec, P., Capotă, M., Chakravarty, M. M., & Poldrack, R. A. (2017). BIDS apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods. PLOS Computational Biology, 13, 1–16.

  • Jennings, R. G., & Van Horn, J. D. (2012). Publication bias in neuroimaging research: Implications for meta-analyses. Neuroinformatics, 10, 67–80.

  • Kober, H., Barrett, L. F., Joseph, J., Bliss-Moreau, E., Lindquist, K., & Wager, T. D. (2008). Functional grouping and cortical–subcortical interactions in emotion: A meta-analysis of neuroimaging studies. NeuroImage, 42, 998–1031.

  • Laird, A. R., Lancaster, J. J., & Fox, P. T. (2005). BrainMap: The social evolution of a functional neuroimaging database. Neuroinformatics, 3, 65–78.

  • Lindquist, M. A. (2008). The statistical analysis of fMRI data. Statistical Science, 23, 439–464.

  • Logothetis, N. K. (2008). What we can do and what we cannot do with fMRI. Nature, 453, 869–878.

  • Maumet, C., Auer, T., Bowring, A., Chen, G., Das, S., Flandin, G., & Nichols, T. E. (2016). Sharing brain mapping statistical results with the neuroimaging data model. Scientific Data, 3, 1–15.

  • Maumet, C., & Nichols, T. E. (2016). Minimal data needed for valid & accurate image-based fMRI meta-analysis. bioRxiv. https://doi.org/10.1101/048249

  • Mehta, R. K., & Parasuraman, R. (2013). Neuroergonomics: A review of applications to physical and cognitive work. Frontiers in Human Neuroscience, 7, 889. https://doi.org/10.3389/fnhum.2013.00889

  • Mosteller, F., & Bush, R. R. (1954). Selected quantitative techniques. In G. Lindzey (Ed.), Handbook of social psychology (Vol. 1).

  • Müller, V. I., Cieslik, E. C., Laird, A. R., Fox, P. T., Radua, J., Mataix-Cols, D., & Eickhoff, S. B. (2018). Ten simple rules for neuroimaging meta-analysis. Neuroscience & Biobehavioral Reviews, 84, 151–161. https://doi.org/10.1016/j.neubiorev.2017.11.012

  • Mumford, J. A., & Nichols, T. E. (2008). Power calculation for group fMRI studies accounting for arbitrary design and temporal autocorrelation. NeuroImage, 39, 261–268.

  • Nichols, T. (2012). SPM plot units. Retrieved from https://blog.nisox.org/2012/07/31/spm-plot-units/

  • Nichols, T. E., Das, S., Eickhoff, S. B., Evans, A. C., Glatard, T., Hanke, M., & Yeo, B. T. (2016). Best practices in data analysis and sharing in neuroimaging using MRI. bioRxiv. https://doi.org/10.1101/054262

  • Penny, W. D., Friston, K. J., Ashburner, J. T., Kiebel, S. J., & Nichols, T. E. (2011). Statistical parametric mapping: The analysis of functional brain images. Elsevier.

  • Pernet, C. (2014). Misconceptions in the use of the General Linear Model applied to functional MRI: A tutorial for junior neuro-imagers. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2014.00001

  • Poldrack, R. A., Fletcher, P. C., Henson, R. N., Worsley, K. J., Brett, M., & Nichols, T. E. (2008). Guidelines for reporting an fMRI study. NeuroImage, 409–414.

  • Poldrack, R. A., Baker, C. I., Durnez, J., Gorgolewski, K. J., Matthews, P. M., Munafò, M. R., & Yarkoni, T. (2017). Scanning the horizon: Towards transparent and reproducible neuroimaging research. Nature Reviews Neuroscience, 18, 115–126.

  • Radua, J., & Mataix-Cols, D. (2009). Voxel-wise meta-analysis of grey matter changes in obsessive–compulsive disorder. The British Journal of Psychiatry, 195, 393–402.

  • Radua, J., & Mataix-Cols, D. (2012). Meta-analytic methods for neuroimaging data explained. Biology of Mood & Anxiety Disorders, 2, 1–11.

  • Radua, J., van den Heuvel, O. A., Surguladze, S., & Mataix-Cols, D. (2010). Meta-analytical comparison of voxel-based morphometry studies in obsessive-compulsive disorder vs other anxiety disorders. Archives of General Psychiatry, 67, 701–711.

  • Radua, J., Mataix-Cols, D., Phillips, M. L., El-Hage, W., Kronhaus, D. M., Cardoner, N., & Surguladze, S. (2012). A new meta-analytic method for neuroimaging studies that combines reported peak coordinates and statistical parametric maps. European Psychiatry, 27, 605–611.

  • Radua, J., Rubia, K., Canales-Rodríguez, E. J., Pomarol-Clotet, E., Fusar-Poli, P., & Mataix-Cols, D. (2014). Anisotropic kernels for coordinate-based meta-analyses of neuroimaging studies. Frontiers in Psychiatry, 5, 1–8.

  • Salimi-Khorshidi, G., Smith, S. M., Keltner, J. R., Wager, T. D., & Nichols, T. E. (2009). Meta-analysis of neuroimaging data: A comparison of image-based and coordinate-based pooling of studies. NeuroImage, 45, 810–823.

  • Salo, T., Yarkoni, T., Nichols, T. E., Poline, J.-B., Bilgel, M., Bottenhorn, K. L., Jarecka, D., Kent, J. D., Kimbler, A., Nielson, D. M., Oudyk, K. M., Peraza, J. A., Pérez, A., Reeders, P. C., Yanes, J. A., & Laird, A. R. (2022). NiMARE: Neuroimaging meta-analysis research environment. NeuroLibre. https://doi.org/10.55458/neurolibre.00007

  • Stouffer, S. A., Suchman, E. A., Devinney, L. C., Star, S. A., & Williams, R. M., Jr. (1949). The American soldier, Vol. 1: Adjustment during army life. Princeton University Press.

  • Sutton, A. J., Jones, K. R., Abrams, D. R., Sheldon, T. A., & Song, F. (2000). Methods for meta-analysis in medical research. John Wiley.

  • Szucs, D. A. (2020). Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990–2012) and of latest practices (2017–2018) in high-impact journals. NeuroImage. https://doi.org/10.1016/j.neuroimage.2020.117164

  • Turkeltaub, P. E., Eden, G. F., Jones, K. M., & Zeffiro, T. A. (2002). Meta-analysis of the functional neuroanatomy of single-word reading: Method and validation. NeuroImage, 16, 765–780.

  • Turkeltaub, P. E., Eickhoff, S. B., Laird, A. R., Fox, M., Wiener, M., & Fox, P. (2012). Minimizing within-experiment and within-group effects in activation likelihood estimation meta-analyses. Human Brain Mapping, 33, 1–13.

  • Wager, T. D., Barrett, L. F., Bliss-Moreau, E., Lindquist, K. A., Duncan, S., Kober, H., Joseph, J., Davidson, M., & Mize, J. (2008). The neuroimaging of emotion. In M. Lewis, J. M. Haviland-Jones, & L. F. Barrett (Eds.), Handbook of emotions (pp. 249–271). The Guilford Press.

  • Wager, T. D., Lindquist, M. A., Nichols, T. E., Kober, H., & Van Snellenberg, J. X. (2009). Evaluating the consistency and specificity of neuroimaging data using meta-analysis. NeuroImage, 45, 210–221.

  • Woolrich, M. W., Behrens, T. E., Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2004). Multilevel linear modelling for fMRI group analysis using Bayesian inference. NeuroImage, 21, 1732–1747.

  • Woolrich, M. W., Jbabdi, S., Patenaude, B., Chappell, M., Makni, S., Behrens, T., & Smith, S. M. (2009). Bayesian analysis of neuroimaging data in FSL. NeuroImage, 45, 173–186.

  • Worsley, K. J., Liao, C. H., Aston, J., Petre, V., Duncan, G. H., Morales, F., & Evans, A. C. (2002). A general statistical analysis for fMRI data. NeuroImage, 15, 1–27.

  • Yarkoni, T., Poldrack, R. A., Nichols, T. E., Van Essen, D. C., & Wager, T. D. (2011). Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8, 665–670.


Acknowledgements

FA, RS and BM would like to acknowledge the Research Foundation Flanders (FWO) for financial support (Grant G.0149.14).

Funding

Fonds Wetenschappelijk Onderzoek, G.0149.14.

Author information


Contributions

FA, CM, BM and RS: conceptualization and methodology. BM and RS: supervision. FA: writing and figures. All authors reviewed the manuscript.

Corresponding author

Correspondence to Freya Acar.

Ethics declarations

Competing Interests

The authors declare no competing interests.


Appendix A


Details of the meta-analysis methods.

Name: Activation Likelihood Estimation (ALE)

Required input: xyz-coordinates of local maxima; sample size of each study

Website: http://brainmap.org/

Description:

Activation Likelihood Estimation (ALE) is the first developed (2002) and most commonly employed (see Fig. 1) coordinate-based meta-analysis method for neuroimaging data (Acar et al., 2018; Eickhoff et al., 2009, 2012; Turkeltaub et al., 2012). It is available through an intuitive GUI. As input the ALE algorithm requires the xyz-coordinates of local maxima (i.e. peak activations) and the sample size of every study. To account for spatial uncertainty, a kernel is constructed around every local maximum. The values inside the kernel sum to 1, with higher values at the centre and smaller values at the edges. The size of the kernel depends on the number of subjects in each study: a study with more subjects is assumed to have more power and less spatial uncertainty, so its kernel is smaller. Studies are combined not by taking a weighted average for every voxel but by computing, in every voxel, a union of probabilities: the probability that at least one local maximum was reported close to that specific voxel. After thresholding, the voxels for which this union of probabilities is larger than can be expected by chance are retained. The method is explained in more depth on the website (http://brainmap.org/).
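The kernel construction and union-of-probabilities step can be sketched as follows. This is an illustrative toy implementation, not the published ALE code: the function names are our own, and the relation between sample size and kernel width (`fwhm`) is a made-up placeholder rather than the empirically derived formula of Eickhoff et al. (2009).

```python
import numpy as np

def gaussian_kernel_3d(fwhm_mm, voxel_size_mm=2.0, radius_vox=5):
    # convert FWHM (mm) to a standard deviation in voxel units
    sigma = fwhm_mm / (voxel_size_mm * np.sqrt(8 * np.log(2)))
    ax = np.arange(-radius_vox, radius_vox + 1)
    xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
    k = np.exp(-(xx**2 + yy**2 + zz**2) / (2 * sigma**2))
    return k / k.sum()          # values inside the kernel sum to 1

def ale_map(study_foci, study_ns, shape):
    """Toy ALE: per-study modeled activation (MA) maps joined by a union of probabilities."""
    one_minus = np.ones(shape)
    for foci, n in zip(study_foci, study_ns):
        # more subjects -> less spatial uncertainty -> narrower kernel
        # (illustrative relation only, NOT the published formula)
        fwhm = 8.0 + 10.0 / np.sqrt(n)
        kern = gaussian_kernel_3d(fwhm)
        r = kern.shape[0] // 2
        ma = np.zeros(shape)
        for x, y, z in foci:    # foci assumed to lie far enough from the volume edge
            patch = ma[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1]
            np.maximum(patch, kern, out=patch)   # MA map: max over nearby foci
        one_minus *= 1.0 - ma
    return 1.0 - one_minus      # P(at least one study reports a focus near the voxel)
```

Note the final line: because ALE multiplies the complements instead of averaging the maps, adding a second study reporting the same focus always raises the voxel's ALE value, which is the "union" behaviour described above.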

 

ALE is closer to vote counting than to a classic meta-analysis, as it computes a union of probabilities rather than an average. It is the most commonly employed method because it can be used through a simple graphical user interface and is linked to a very rich curated dataset of coordinates from neuroimaging studies, the BrainMap database (Laird et al., 2005; Fox & Lancaster, 2002; Fox et al., 2005).

 

For a paper to be entered into an ALE meta-analysis it needs to report the coordinates of local maxima obtained after applying a whole-brain threshold. If a paper is entered into the BrainMap database, inclusion is easier because the coordinates are automatically in the right format (a .txt file) for the application. However, it is also easy to hand-code papers and add them to this text file, or to create the file yourself. Furthermore, it is relatively easy to construct the necessary input in the correct format and add it as supplementary material to the publication. Another option is entering the coordinates in Brainspell (http://brainspell.org/).

Name: Third-level General Linear Model (GLM)

Required input: contrast estimates for each voxel, optionally accompanied by standard errors for each voxel; information about the scaling of the predictors and contrasts in every study

Description:

Fitting a meta-analytic third-level general linear model can be implemented in FSL (Woolrich et al., 2009) using FEAT FLAME (Woolrich et al., 2004; Worsley et al., 2002), in SPM (Penny et al., 2011) with spm_mfx (Friston et al., 2005), and in AFNI (Cox, 1996; Cox & Hyde, 1997) with 3dttest++. Modelling proceeds voxelwise: for each voxel, the contrast estimates and corresponding standard errors obtained across studies are combined to fit a meta-analytic model. The resulting maps are then used for inference to detect effects in the brain that are consistent across studies.

 

Both fixed- and mixed-effects approaches can be implemented in a third-level GLM meta-analysis. The difference between the two lies in how the error term is conceived. In the fixed-effects approach, the weighting matrix incorporates only the within-study variability; as in the mixed-effects approach, this variance is not estimated anew but taken from the estimates provided at the study level. A fitting approach that uses both the contrast estimates and the standard errors is considered the preferred method, as it combines all the data that can be obtained from a study.
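As a minimal sketch of the fixed-effects case, an inverse-variance weighted combination of per-voxel contrast estimates might look like this. The function name and array layout are assumptions for illustration; FSL, SPM and AFNI each implement their own, more sophisticated estimators.

```python
import numpy as np

def fixed_effects_meta(con_maps, se_maps):
    """con_maps, se_maps: (n_studies, n_voxels) arrays of contrast estimates
    and their standard errors, one row per study."""
    w = 1.0 / se_maps ** 2                              # precision (inverse-variance) weights
    beta = (w * con_maps).sum(axis=0) / w.sum(axis=0)   # pooled contrast estimate per voxel
    se = np.sqrt(1.0 / w.sum(axis=0))                   # standard error of the pooled estimate
    z = beta / se                                       # z-statistic for voxelwise inference
    return beta, se, z
```

With equal standard errors the pooled estimate reduces to the plain average; studies with smaller standard errors pull the estimate toward their own value.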

 

A third-level general linear model, and meta-analysis methods in general, face several challenges. As mentioned, the units of the contrast estimates can differ between studies because the BOLD signal has an arbitrary unit. Units are further determined by the scaling of the data when the time series are normalized (at the subject level), the scaling of the predictors in the design matrix that are involved in the selected contrast (at both the subject and study level), and the scaling of the selected contrast itself (at both the subject and study level) (Nichols, 2012). Knowing exactly how the data were scaled during the original analysis is essential for a third-level GLM: it is impossible to combine results of studies that applied different scaling to the signal, the design matrix, or the contrast. If studies with different scaling are combined with a mixed-effects (MFX) GLM, the results are robust as long as there is no heteroscedasticity. A random-effects general linear model is more robust but conservative (Maumet & Nichols, 2016).

 

The scaling of the signal can be derived from the software package that was used to perform the analysis. The scaling of the design matrix should be reported (ideally the peak amplitude equals 1). Likewise, the scaling of the contrast should be reported when describing the analysis; ideally the positive contrast weights sum to 1 and the negative weights sum to -1.

 

Reporting and retrieving information about scaling proves to be a laborious process, and not all of this information fits into the format of an academic paper. A solution is NIDM, the NeuroImaging Data Model (Maumet et al., 2016), a set of specifications for describing neuroimaging data in a machine-readable JSON format.

Name: Seed-based d mapping (formerly SDM and ES-SDM)

Required input: xyz-coordinates of local maxima and the statistics associated with them, or t-maps; sample size

Website: https://www.sdmproject.com/

Description:

Seed-based d mapping, formerly SDM and ES-SDM (Radua & Mataix-Cols, 2009, 2012; Radua et al., 2010, 2012, 2014), is a method for the meta-analysis of neuroimaging data that was first published in 2009. Of all CBMA methods it is the most similar to a classic meta-analysis. It allows the combination of studies that report t-maps with studies that report the location and height of local maxima. The t-values are combined and thresholded. If no information on the height of a local maximum is provided, the value is estimated from the threshold that was applied in the original study. The toolbox can be downloaded from its website (https://www.sdmproject.com/).

 

There are two ways for a study to be entered into a seed-based d mapping meta-analysis. The preferred way is publishing the map of t-statistics resulting from the study; these t-maps can then be combined directly. The second way is publishing the coordinates and height of local maxima; the statistic that signifies the height is then transformed to a t-value and missing values are imputed. However, if all studies to be included report t-maps, an image-based meta-analysis (IBMA) is preferred to seed-based d mapping.

 

The main advantage of seed-based d mapping is that it can combine image-based and coordinate-based results, allowing as many published neuroimaging results as possible to be included. If the height of a local maximum is not reported it can be inferred from the reported whole-brain threshold, but the resulting meta-analysis may underestimate the size of the effect.

Name: Neurosynth

Required input: xyz-coordinates of local maxima (can be harvested automatically from the paper)

Website: https://neurosynth.org

Description:

Neurosynth is an online meta-analysis platform for neuroimaging data that was first published in 2011 (Yarkoni et al., 2011). On the Neurosynth website (https://neurosynth.org) we read: "Neurosynth is a platform for large-scale, automated synthesis of functional magnetic resonance imaging (fMRI) data. It takes thousands of published articles reporting the results of fMRI studies, chews on them for a bit, and then spits out images". Development takes place on GitHub (https://github.com/neurosynth/) and contributions are encouraged.

 

Neurosynth's distinguishing feature is automated meta-analysis of published articles available on the internet. Neuroimaging studies available online are scanned for specific terms that occur at a high frequency in the paper. To perform a meta-analysis, two maps are created: one based on the coordinates reported in articles containing the term of interest, and one based on the coordinates reported in articles without that term. From these maps conditional probabilities are computed, i.e. the chance of finding activation given that a term is present. Furthermore, posterior probability maps are created, which display the likelihood that a term is mentioned in a study given activation in a specific voxel.
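For a single voxel, the two probabilities can be related by Bayes' rule. The sketch below is a simplification under assumed inputs (counts of term/no-term studies activating the voxel) and a uniform 0.5 prior; the function name is ours, not part of the Neurosynth codebase.

```python
def term_association(n_act_term, n_term, n_act_noterm, n_noterm, prior=0.5):
    """Forward probability P(activation | term) and posterior P(term | activation)
    for one voxel, via Bayes' rule with a (default uniform) prior on the term."""
    p_act_given_term = n_act_term / n_term          # activation rate in term studies
    p_act_given_noterm = n_act_noterm / n_noterm    # activation rate in non-term studies
    posterior = (p_act_given_term * prior) / (
        p_act_given_term * prior + p_act_given_noterm * (1 - prior))
    return p_act_given_term, posterior
```

For example, a voxel activated in 50 of 100 "term" studies but only 10 of 100 "no-term" studies has a high posterior, even though the forward probability is only 0.5.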

 

For a study to be included in a Neurosynth meta-analysis, activation needs to be reported in the form of coordinates in a published article that mentions the term of interest. However, not all published articles can be entered. Journals differ in how they typeset papers, so for every journal a new algorithm must be developed to extract the coordinates. To date, the algorithms can read the HTML version of articles published by Frontiers, HighWire, Journal of Cognitive Neuroscience, PLOS, Sage, ScienceDirect, Springer and Wiley. Contributions to support additional publishers or journals are encouraged and can be made through GitHub (https://github.com/neurosynth/ace).

Name: Multilevel Kernel Density Analysis (MKDA)

Required input: xyz-coordinates of local maxima; sample sizes of the individual studies; whether each original study used random- or fixed-effects population inference

Description:

Multilevel Kernel Density Analysis (MKDA) is a coordinate-based meta-analysis method first described in 2008–2009 (Kober et al., 2008; Wager et al., 2008, 2009). It is available as a MATLAB toolbox and the code is on GitHub, where everyone can contribute (https://github.com/canlab/Canlab_MKDA_MetaAnalysis).

 

MKDA constructs a map for every study, assigning a value of 1 to every reported local maximum and 0 to all other voxels. Around every reported coordinate a sphere with a fixed radius of 10 mm is constructed, and every voxel inside this sphere also receives a value of 1; where spheres overlap, the value of the overlapping voxels remains 1. Maps are combined by computing a weighted average for every voxel. Studies are weighted by the square root of their sample size, multiplied by an adjustment weight for the type of analysis used for population inference (1 for random effects, 0.75 for fixed effects). The resulting map is then thresholded to identify which proportions are larger than can be expected by chance.
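The indicator-map construction and weighted averaging can be sketched as follows. This is a toy version with assumed names and a sphere radius expressed in voxels rather than the 10 mm used by the toolbox.

```python
import numpy as np

def mkda_map(study_foci, study_ns, study_fixed, shape, radius_vox=5):
    """Toy MKDA: weighted average of per-study binary indicator maps."""
    ax = np.arange(-radius_vox, radius_vox + 1)
    xx, yy, zz = np.meshgrid(ax, ax, ax, indexing="ij")
    sphere = (xx**2 + yy**2 + zz**2) <= radius_vox**2   # binary sphere mask
    num = np.zeros(shape)
    wsum = 0.0
    r = radius_vox
    for foci, n, fixed in zip(study_foci, study_ns, study_fixed):
        ind = np.zeros(shape, dtype=bool)
        for x, y, z in foci:   # foci assumed far enough from the volume edge
            ind[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1] |= sphere
        # sqrt(N) weight, downweighted by 0.75 for fixed-effects studies
        w = np.sqrt(n) * (0.75 if fixed else 1.0)
        num += w * ind         # overlapping spheres within one study still count once
        wsum += w
    return num / wsum          # weighted proportion of studies activating each voxel
```

Because the indicator is binary per study, a voxel's value is the (weighted) proportion of studies reporting a focus nearby, not a sum over foci, which is what distinguishes MKDA from simple density counting.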

 

To perform an MKDA meta-analysis the coordinates of local maxima, the sample size and the type of analysis used for population inference are required. These need to be entered into a table constructed by the researcher conducting the meta-analysis.

Name: Parametric voxel-based meta-analysis (PVM)

Required input: at least one statistical value for one voxel per study; can be entire statistical maps

Description:

Parametric voxel-based meta-analysis (PVM) (Costafreda, 2012) is a method developed for the meta-analysis of neuroimaging data. Coordinate-based meta-analyses were developed because the results of fMRI studies are typically reported as the locations of local maxima that survived statistical thresholding, yet different studies often apply different thresholds. Next to seed-based d mapping, PVM is the only CBMA method that takes the applied threshold into account and allows the combination of imaging maps and reported local maxima. The method combines statistically significant results, in the form of local maxima, with available information on locations that did not reach statistical significance, to obtain asymptotically unbiased meta-analytic summaries. Its main advantages are that it can integrate results obtained under different statistical thresholds, both significant and non-significant, and that it can integrate small volume correction analyses.

 

Integrating statistically non-significant findings is a challenge for CBMA methods. PVM estimates an interval in which the value will lie. In PVM for every study two maps are created, representing the minimum and maximum possible effect size for every voxel. If an exact value is available the two maps will contain the same value for that voxel, else the upper and lower bound of the interval are represented.

 

Almost all known reporting practices can be included in a PVM. The minimal requirement is one statistical value, convertible to an effect size, for at least one voxel; any additional information is obviously beneficial for the meta-analysis. Full statistical maps (if the statistic can be converted to an effect size) and coordinates of local maxima, accompanied either by thresholding information, their respective heights, or the maximum value of the cluster, can all be integrated. Statistical maps can be published on e.g. NeuroVault; coordinates can be reported in the paper (though this is not optimal and is error-prone), through BrainSpell (an open, collaborative platform to classify neuroimaging results, http://brainspell.org/) or BrainMap; and thresholding information, the height of local maxima or small volume correction results can be reported in the paper. However, interpreting results from a paper can be time-consuming and error-prone. Because several different inputs are possible, the method requires considerable effort from the researcher conducting the PVM meta-analysis: all data need to be converted to one common format on which the meta-analysis can be performed. Additionally, no toolbox or code has been made available, making it very hard for a researcher to perform a parametric voxel-based meta-analysis in practice. On the other hand, since PVM allows the inclusion of all possible reporting practices of fMRI studies, it may be worth the effort to include all performed studies.

Name: Stouffer's average of z

Required input: z-maps for every study; can be enriched with the sample size of every study

Description:

Stouffer's average of z combines z-scores over studies; it was developed in 1949 for behavioural meta-analysis (Stouffer et al., 1949). If z-values are available at the voxel level, the method can be applied voxelwise for fMRI meta-analysis. Because z-statistics are combined, the method is based on statistical significance rather than on the combination of effects expressed in real units (e.g. effect sizes).

 

The method of Stouffer can easily be extended to a weighted version where study-level weights are used to allow some studies to have a larger influence than others in obtaining the test statistic.

 

Several factors can influence the weight but most commonly the number of subjects of every study is used because sample size is directly related to power to detect true activation. There is also an indication that studies with small sample sizes are more sensitive to vibration of effects (small studies are more likely to report a wide range of estimates) than studies with large sample sizes (Jennings & Van Horn, 2012).
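The unweighted and sample-size-weighted variants described above can be written in a few lines; the function name and array layout are our own conventions for this sketch.

```python
import numpy as np

def stouffer_z(z_maps, sample_sizes=None):
    """Weighted Stouffer combination per voxel: sum(w_i * z_i) / sqrt(sum(w_i^2)).
    z_maps: (n_studies, n_voxels). With no weights this is the classic average of z,
    which is again standard normal under the null."""
    z = np.asarray(z_maps, dtype=float)
    if sample_sizes is None:
        w = np.ones(z.shape[0])
    else:
        # common choice: weight each study by sqrt(N), so larger studies count more
        w = np.sqrt(np.asarray(sample_sizes, dtype=float))
    return (w[:, None] * z).sum(axis=0) / np.sqrt((w ** 2).sum())
```

Dividing by the root of the summed squared weights keeps the combined statistic standard normal under the null, whatever weights are chosen.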

Name: Fisher's sum of p

Required input: p-maps for every study

Description:

Fisher's sum of p was developed in 1925 for classic meta-analysis (Fisher, 1925).

 

Fisher's method resembles Stouffer's average of z in that it is also based on statistical significance rather than on effects expressed in real units. There is a near-linear relationship between Fisher's method and Stouffer's method for z-scores between 1 and 5. The main difference is that weights are easier to introduce in Stouffer's method (Mosteller & Bush, 1954). Fisher's method combines p-values by taking -2 times the sum of their natural logarithms, a statistic that follows a chi-square distribution with 2k degrees of freedom under the null hypothesis (for k studies). To perform a Fisher's sum of p meta-analysis, p-maps are required; if these are not available, they can be derived from z-maps or t-maps. Fisher's sum of p can only be performed as a fixed-effects method, but it is an easy method to use when only p-values are available.
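For one voxel, the combination can be sketched as follows. The function name is ours; the closed-form survival function used here is valid because the degrees of freedom (2k) are always even.

```python
import math

def fisher_sum_of_p(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom under the null (k = number of studies)."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    # chi-square survival function has a closed form for even df = 2k:
    #   P(X >= x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = x / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        total += term
    return x, math.exp(-half) * total
```

With a single study (k = 1) the combined p-value equals the input p-value, which is a quick sanity check on the implementation.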


About this article


Cite this article

Acar, F., Maumet, C., Heuten, T. et al. Improving the Eligibility of Task-Based fMRI Studies for Meta-Analysis: A Review and Reporting Recommendations. Neuroinform 22, 5–22 (2024). https://doi.org/10.1007/s12021-023-09643-5
