Spectrum: Autism Research News

Flawed figures

14 May 2009

This article is more than five years old. Autism research — and science in general — is constantly evolving, so older articles may contain information or theories that have been reevaluated since their original publication date.

There’s no denying that, in the past two decades, functional magnetic resonance imaging (fMRI) has revolutionized neuroscience. Its colorful, fine-resolution pictures allow scientists to compare patterns of activity in different brain regions during specific tasks.

Every technique has its drawbacks, of course, and many of fMRI’s flaws — such as the fact that it measures blood flow, an indirect proxy for neuronal activity — are often mentioned in papers and discussed at conferences.

But one flaw is rarely brought up and is apparently more widespread than anyone realized: when choosing from the enormous amount of data generated by an fMRI experiment, scientists often ‘double dip’, using the same subset of data both to form a hypothesis and to confirm it.

So says a group led by Chris Baker of the National Institutes of Health. In this month’s Nature Neuroscience, Baker reports that at least 57 of the 134 fMRI-based studies published in the top five journals last year based their conclusions on this kind of biased data.

For instance, researchers might first look to see which region of the brain lights up when someone sees a particular facial emotion, and then mine that same data set for patterns in the selected region, instead of testing their hypothesis on fresh, independent data.
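The bias is easy to demonstrate with a toy simulation (a hypothetical sketch, not data from Baker’s study): if you select the most responsive ‘voxels’ from pure noise and then measure the effect in the very same data, the estimate comes out spuriously large, while an independent second run shows the true effect of zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1,000 voxels of pure noise across two independent
# scanning runs, so the true effect everywhere is zero.
n_voxels, n_trials = 1000, 50
run1 = rng.normal(0, 1, (n_voxels, n_trials))
run2 = rng.normal(0, 1, (n_voxels, n_trials))

# Select the 20 voxels that respond most strongly in run 1.
selected = np.argsort(run1.mean(axis=1))[-20:]

# 'Double dipping': estimate the effect from the same data used for selection.
circular_effect = run1[selected].mean()

# Unbiased alternative: estimate the effect from the independent second run.
independent_effect = run2[selected].mean()

print(f"circular estimate:    {circular_effect:.3f}")    # spuriously positive
print(f"independent estimate: {independent_effect:.3f}")  # near zero
```

The selection step alone guarantees a positive ‘effect’ in the circular estimate, even though no signal exists anywhere in the data — which is exactly why the biased studies’ conclusions are suspect.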

Given how much fMRI is used in autism research, I wrote to Baker to ask how many of the studies he analyzed are about autism, schizophrenia and related disorders.

Frustratingly, Baker won’t reveal the list of papers he used. Doing so would be unfair, he wrote, because he only looked at a year’s worth of studies from a small set of journals. “The problem is much broader than this,” he added.

Instead, he calls upon specialists in each field to examine the literature themselves, and “make their own decisions about which results to trust.”

Withholding this information doesn’t seem to me to be the best approach to fixing the problem. Of course researchers in each field will have to investigate the literature on their own, but why not give them a head start? After all, meaningful scientific discourse depends on experts being able to scrutinize published results.


TAGS: autism