
U.S. agency backtracks on broad interpretation of ‘clinical trial’

6 February 2018
The Expert:

Rebecca Saxe

Professor, Massachusetts Institute of Technology

Autism researchers need no longer worry that their basic research studies will become entangled in the red tape associated with clinical trials.

On 17 January, the U.S. National Institutes of Health (NIH) released revised guidelines indicating what constitutes a clinical trial. The revised guidelines nix examples from an earlier draft that implied the rules would apply to basic research involving people.

For decades, the agency recognized clinical trials as studies that test drugs or behavioral therapies on people. In 2014, however, the NIH introduced a new definition that allowed for broad interpretation of what constitutes an ‘intervention’ and an ‘outcome.’ And beginning in January 2017, all clinical trials under this definition would be subject to a new set of rules, which were revised again last month.

The rules require investigators to register and submit their results to a public online repository and undergo best-practices training. Officials described the changes as an effort to improve replicability and make studies involving people more transparent.

In August 2017, researchers objected loudly to a broad interpretation of clinical trials. Among the examples NIH officials gave of clinical trials was a brain-imaging study that does not involve any therapy. Autism researchers found this example particularly upsetting.

Thousands of researchers signed online petitions asking the agency to delay implementing the changes until after considering input from researchers. Others called agency officials or pleaded with them on internet message boards and social media.

Michael Lauer, deputy director for extramural research at the agency, says he fielded dozens of questions from researchers. In response to the outcry, the agency revised the sample case studies at least twice. Among many changes to the guidelines, the latest iteration replaces the controversial brain-imaging example with one that fits the commonsense definition of a clinical trial. The website where the case studies appear says researchers should expect them to evolve.

“We hope to convey greater clarity about which types of studies are applicable to our enhanced stewardship and transparency policies,” Lauer told Spectrum.

The latest changes have put researchers such as Rebecca Saxe at ease. Saxe, professor of cognitive neuroscience at the Massachusetts Institute of Technology, uses brain scans to study how infant brains work. She is also among the authors of a petition that asked the NIH to reconsider its policies; the petition garnered more than 3,500 signatures.

We asked Saxe why she helped draft the petition, her views on the latest revision, and how scientists should view the guidelines.

Spectrum: Under the guidance the NIH released in August, would your studies have been considered clinical trials?

Rebecca Saxe: The majority of my research is basic science with functional magnetic resonance imaging (fMRI) as one of the tools. We ask basic, curiosity-driven questions about how human brains work and develop. And that is, and always has been, basic science. What we heard at the time was that studies like mine would suddenly be considered clinical trials.

S: What were your concerns about those guidelines?

RS: The main problem that I had with calling basic science a clinical trial is that it’s wrong: It sets the wrong standards for success; it sets the wrong standards for rigor. It’s confusing with respect to what the value of our research is. And it makes it impossible to honestly advocate for that research.

The worst thing about the situation was that nobody even knew if they were doing clinical trials research. Researchers didn’t know what funding calls they were eligible for, because clinical trials research and basic research go through different funding pipelines. They didn’t know what [institutional review board] approvals they needed to get; they didn’t know where to register their data or how.

S: How did you become involved in the effort aimed at delaying the proposed changes?

RS: I remember thinking, I'm pretty sure that in 2017, if you want to get the word out about something, you don't just email everybody you know. I tweeted about it on 2 June:

[CNists is short for ‘cognitive neuroscientists’.] That was actually the first way I got involved.

In August, three or four of us wrote the big open letter. Within 48 hours, or maybe three days, it had passed through all the networks. It was being sent by email chains through groups at universities, to professional societies and so on.

S: What kind of response did you receive from the NIH?

RS: At the time, we felt like nobody heard or responded at all. It felt like they were not listening, not hearing, not reading, not paying any attention.

S: What has happened since?

RS: The case studies changed twice, so that must be in response to something. They were first made worse.

As of 4 January, there were three basic science fMRI studies in the list of case studies, and two of them were judged to not be clinical trials. The third one was judged to be a clinical trial — but there was no apparent relevant difference between them. That seemed like a perfect synecdoche for the chaos of the situation.

Then, there was a change for the better on 17 January. Missing from the guidelines were several examples of studies that should never have been considered clinical trials. And now, by their omission, they will not be. For instance, the one basic science study that had been inappropriately labeled a clinical trial has been removed and replaced with an example that is a clinical trial. This is the NIH actually trying to do what it should have done all along — create guidelines that delineate clinical trials in a way that respects common sense.

S: Can researchers rely on the latest NIH guidelines as they plan their grant applications?

RS: I understand why scientists feel like they can’t trust these case studies. If the case studies can change every week, that’s very unsettling. The process by which the original definition was written and applied, and then reinterpreted, didn’t feel transparent or systematic.

There may be a little bit of confusion for a few more months while this shakes out. But I do feel optimistic that we’re heading out of that tunnel now. I think scientists should recognize that the new case studies are better and that the intent is to apply the rules in a commonsense way.