Meta-Analysis, Mega-Analysis, and Task Analysis in fMRI Research

Lloyd (2011) presents highly suggestive results regarding the specificity of the link between particular brain areas and cognitive tasks. Some of his evidence is derived from the analysis of data from the BrainMap database (available at www.brainmap.org), which has become a fundamental resource for the conduct of functional neuroimaging meta-analysis. In the present note, some observations regarding the possibilities and pitfalls of meta-analysis of functional neuroimaging data are offered as a complement to Lloyd's excellent exposition of the topic. Additionally, some comments are made on the particular meta-analytic results presented by Lloyd.

Functional neuroimaging studies usually present their findings in the form of brain maps. Such maps depict the areas where statistically significant activity during an active task of interest has been detected in a sample of subjects, relative to a selected baseline condition. This basic strategy has underpinned the exponential development of the field over the past twenty years, and particularly since the advent of functional magnetic resonance imaging (fMRI), a safe standard for picturing the brain at work. Whereas most papers in the field report the results from a single experiment, there has been a long-standing interest in pooling the results from experiments that are related but independent.
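The single-study mapping strategy described above can be illustrated with a toy sketch. The following is not the pipeline of any particular fMRI package; it is a minimal, hypothetical simulation of a voxelwise task-versus-baseline contrast, with made-up subject counts, effect sizes, and a simple Bonferroni correction standing in for the more sophisticated thresholding used in practice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 12 subjects, 1,000 voxels, paired task and baseline scans.
n_subjects, n_voxels = 12, 1000
baseline = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
task = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
task[:, :50] += 3.0  # assume the first 50 voxels are truly task-engaged

# Paired t-test per voxel: task versus baseline within subjects.
t, p = stats.ttest_rel(task, baseline, axis=0)

# Bonferroni correction: a simple guard against false positives when
# testing many voxels at once.
active = p < (0.05 / n_voxels)
print(active.sum(), "voxels declared active out of", n_voxels)
```

Even in this toy setting, the two failure modes discussed in the text appear: with a loose threshold, null voxels are declared active (false positives); with a strict one, some of the 50 truly active voxels are missed (false negatives).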

The most immediate benefit of this pooling approach consists of an increase in brain mapping accuracy. As with any experimental approach dependent on statistical analysis, the fMRI mapping strategy is liable to false-positive findings, in which some of the brain areas declared to be active during the experiment were not, in fact, engaged by the active task. It may also suffer from false-negative reporting, when some of the brain areas truly active during the task are not recognized as such. Pooling the results from several related functional neuroimaging studies may be beneficial in detecting false-positive findings, because they are unlikely to be replicated across studies (Turkeltaub et al. 2002). Under certain conditions, data pooling may also result in an increase of power to detect brain activations, and therefore a decrease in false-negative results. The potential benefits of pooling the results from several fMRI studies to increase brain mapping accuracy have long been recognized in the field (Fox et al. 1998), and have led to a varied and growing meta-analysis literature (see, for example, Costafreda et al. 2008; Turkeltaub et al. 2002; Wager et al. 2007).

As a consequence of this growing awareness of the potential of pooling functional neuroimaging studies, recent years have seen the development of large-scale functional neuroimaging databases, storing experimental results from the raw, original formats (the fMRI time-series; i.e., the successive images obtained from each subject as the experiment unfolds, for example, www.fmridc.org) to processed, analyzed data (coordinates of the location of activation, usually the peak of maximum activation of each active area, e.g., www.brainmap.org). In the functional imaging literature, the term meta-analysis has been reserved for the quantitative analysis of the peak coordinates of groups of related experiments, whereas mega-analysis is used for the analysis of raw data.
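The coordinate-based flavor of meta-analysis can be sketched in miniature. The code below is an illustrative one-dimensional toy in the spirit of activation likelihood estimation (Turkeltaub et al. 2002), not that method itself: real coordinate-based meta-analysis operates in three-dimensional brain space, models per-study activation maps, and uses permutation-based significance thresholds. The study peak locations here are invented for illustration.

```python
import numpy as np

def pooled_activation(peaks_per_study, grid, sigma=8.0):
    """Sum Gaussian kernels centred on each study's reported peaks (1-D toy)."""
    total = np.zeros_like(grid, dtype=float)
    for peaks in peaks_per_study:
        study_map = np.zeros_like(grid, dtype=float)
        for x in peaks:
            study_map += np.exp(-((grid - x) ** 2) / (2 * sigma ** 2))
        # Cap each study's contribution so no single study dominates.
        total += np.clip(study_map, 0.0, 1.0)
    return total

grid = np.arange(0.0, 100.0, 1.0)
# Three hypothetical studies: peaks near x = 40 replicate across studies,
# whereas the isolated peak near x = 80 receives little cross-study support.
studies = [[40.0], [42.0, 80.0], [39.0]]
pooled = pooled_activation(studies, grid)
print("support near 40:", round(pooled[40], 2), "near 80:", round(pooled[80], 2))
```

The pooled map concentrates evidence where independent studies converge, which is precisely why non-replicating false positives are diluted by this kind of analysis.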

Because the raw time-series contains the record of all the measurements obtained during an fMRI experiment, it would seem the obvious prime matter for data pooling. However, three practical difficulties have severely limited the application of this approach. First, the measurements from a single fMRI study often amount to gigabytes of data. Databasing such large volumes of information, and making it publicly available, is no trivial technical task. Second, fMRI data-sharing initiatives have so far sparked serious objections in the scientific community, which has proven reluctant to share data that are difficult, and expensive, to acquire (Koslow 2002). As a consequence of such difficulties, only a very small fraction of fMRI experiments are publicly available for download. Finally, there is currently a paucity of quantitative methods able to cope with the processing complexity of fMRI data mega-analysis. These factors create a classic chicken-and-egg situation: because very limited data are available for download, limited effort is put into developing mega-analysis methods, in turn further limiting the appeal of data sharing in this format. As a result, almost all the pooling exercises so far have...
