When a treatment isn’t working, a doctor faces a difficult decision: Does the child need more time to respond, more therapy per week, or a different treatment entirely? The choice often comes down to trial and error, guided by a clinician’s own research and experience.
Autism researchers are trying to make treatment selection more systematic and tailored to a particular child’s responses to treatment. They are developing ‘adaptive interventions’ in which a set of rules would govern which treatments to choose, for whom, and in what order.
“It’s really asking about whether there is a sequence of treatments that works better for some kids versus others,” says Connie Kasari, professor of human development and psychology at the University of California, Los Angeles. Kasari is spearheading two studies aimed at developing adaptive interventions for autism.
The goal is to give doctors a road map they can use to guide treatment through all its twists and turns.
“There’s a critical need for really understanding which intervention works for individual children,” says Zachary Warren, director of the Treatment and Research Institute for Autism Spectrum Disorders at Vanderbilt University in Nashville, Tennessee. (Warren is not involved in adaptive intervention research.) “These adaptive designs are aimed at attempting to do that.”
The concept of an adaptive intervention is not new. For example, when a child is diagnosed with generalized anxiety disorder but doesn’t respond to psychotherapy, treatment guidelines recommend an approach that adjusts to each child over time: Treat the child with medication for 12 weeks. If the child improves, then she should remain on the medication for an additional 12 weeks. If not, she should continue the medication and also receive cognitive behavioral therapy1.
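The guideline above is, at its core, a simple decision rule. As a rough sketch (the function name and labels here are illustrative, not part of any clinical protocol), it might look like:

```python
def anxiety_treatment_plan(improved_after_12_weeks: bool) -> list[str]:
    """Sketch of the adaptive rule described above.

    Stage 1: every child receives 12 weeks of medication.
    Stage 2: the next step depends on whether the child improved.
    """
    plan = ["medication (12 weeks)"]
    if improved_after_12_weeks:
        # Responders stay the course for another 12 weeks.
        plan.append("continue medication (12 more weeks)")
    else:
        # Nonresponders keep the medication and add talk therapy.
        plan.append("continue medication + cognitive behavioral therapy")
    return plan
```

The key feature is that the second-stage choice is conditioned on the child’s own response, rather than fixed in advance.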
Until about 10 years ago, a group of clinicians would discuss and decide upon the optimal treatment regimen for a given situation based on their expertise and their review of the scientific literature.
“This approach is certainly very important, but it is not as evidence-based as we would like,” says Susan Murphy, professor of statistics and psychiatry at the University of Michigan in Ann Arbor. In 2005, Murphy and her colleagues developed a way to compare different sequences of treatments, generating data for an evidence-based adaptive intervention2.
In this strategy, dubbed ‘sequential multiple assignment randomized trial’ (SMART), researchers randomly assign participants to one of two initial treatments. After a predetermined amount of time — usually a few weeks — they split the participants into those who are responding and those who are not.
The researchers can then randomly assign all of the nonresponders to new groups that, for example, continue with the same treatment, receive enhanced treatment or get a new therapy. They can also split the responders randomly into new groups or have them continue with their initial treatment. At the end of the study, the researchers compare the gains made across the different treatment groups.
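The two randomization stages can be sketched in code; this is a toy illustration, and the treatment labels and helper names are assumptions, not drawn from any actual SMART protocol:

```python
import random


def smart_assign(participants, responded, seed=0):
    """Toy sketch of a SMART design's two randomization stages.

    participants: list of participant identifiers.
    responded: function mapping a participant to True/False at the
        midpoint assessment (hypothetical helper).
    """
    rng = random.Random(seed)

    # Stage 1: randomize everyone to one of two initial treatments.
    stage1 = {p: rng.choice(["treatment A", "treatment B"]) for p in participants}

    # Stage 2: responders continue; nonresponders are re-randomized
    # among continuing, intensifying, or switching treatments.
    stage2 = {}
    for p, tx in stage1.items():
        if responded(p):
            stage2[p] = f"continue {tx}"
        else:
            stage2[p] = rng.choice(
                [f"continue {tx}", f"enhanced {tx}", "switch to new therapy"]
            )
    return stage1, stage2
```

At the end of such a trial, outcomes are compared across the resulting treatment sequences rather than across single treatments.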
A study of this type does not replace traditional trials, as it cannot prove that one sequence of treatments is better or worse than another. Instead, it can point researchers to sequences of treatments that are good candidates for a rigorous test.
The approach is gaining ground, with at least 21 different studies in fields ranging from drug abuse to obesity. But the idea is relatively new to autism research. Last year, Murphy, Kasari and their colleagues published the first SMART study for autism.
The study involved 61 children with autism between the ages of 5 and 8 years who speak few or no words. All of the children received two hours per week of a behavioral intervention called JASP-EMT, which focuses on improving speech, communicative gestures and engagement with others. Half also got an iPad with software that speaks short phrases the child selects on the screen, giving the child an alternative way to communicate3.
After 12 weeks, Kasari and her colleagues further split the children into ‘fast’ responders and ‘slow’ responders. Fast responders continued with the same treatment. Slow responders without iPads were split into two new groups: One received more treatment per week and the other got the same amount of treatment along with an iPad. Slow responders who started with iPads received more treatment hours.
The researchers measured the children’s verbal abilities 12 weeks later. All of the children showed improvements, but those who started with an iPad used about 21 more words on average than those who went without one for the duration of the study. However, children who received an iPad halfway through the study improved no more than those who never used the devices.
The findings do not necessarily mean that every minimally verbal child needs an iPad. But Kasari says her team now includes the device as a standard part of treatment.
In 2013, Kasari launched a SMART study of nearly 200 children at four institutions. Like the previous study, the new project aims to improve communication among minimally verbal school-age children with autism. But this time, the researchers are testing JASP-EMT alongside a behavioral intervention called discrete trial training (DTT), a method of teaching new behaviors that rewards a child for following instructions.
For six weeks, children receive one of the two treatments. All of the children who show significant improvements remain on the same treatment. Parents of half of these children learn to deliver the treatment for the last 10 weeks of the study, to determine whether parent training boosts treatment efficacy.
Children who do not improve significantly in the first phase also continue their original treatment, but half of them receive the other intervention, too.
The study may help clinicians decide whether a child should start with JASP-EMT or DTT and whether training parents as therapists has value, says Daniel Almirall, assistant professor of social research at the University of Michigan, who collaborated with Kasari on the study’s design. It also may suggest the best course of action for children who do not respond quickly to one therapy or the other: provide more time or tack on a second intervention right away.
Kasari and her colleagues aren’t the only autism researchers embracing the SMART strategy. A team of researchers led by Mary Louise Kerwin at Rowan University in New Jersey received funding this year to compare an adaptive intervention using DTT with one using an approach called verbal behavior therapy. The trial will keep children who respond on their original treatment but randomly assign nonresponders to receive either more hours of their original treatment or the other treatment.
The push to develop adaptive interventions may help the field move toward personalized medicine — a key part of the National Institute of Mental Health’s new strategic plan for research. Beyond boosting the number of children who benefit from autism treatments, adaptive interventions may also prevent parents and clinicians from going too far down a path that isn’t working.
“They can be used to deploy mental health resources more efficiently,” says Joel Sherrill, chief of the institute’s psychosocial treatment research program. “There’s a lot of promise there.”