When a study of a behavioral intervention for autism fails to show a significant improvement in measured outcomes, does that mean the treatment doesn’t work? Not necessarily.
Studying the relationship between behavioral interventions for autism and outcomes is a complex undertaking. Behavioral interventions employ a wide range of methods and vary in their structure, intensity, duration and quality. Autistic people have varying traits and, often, co-occurring conditions, and enrolling in a research study may not be a priority for many families.
All of this means that studies of interventions must be designed so that children who receive the treatment can be compared with those who don’t, with regard to autism characteristics as well as personal and family demographics. Studies must also have a large enough sample and long enough study period to detect an effect if one is present.
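The sample-size point can be made concrete with a standard two-sample power calculation. This is a minimal sketch using the usual normal approximation; the effect sizes below are illustrative, not drawn from any particular study:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate children needed per group to detect a standardized
    mean difference `effect_size` (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A small effect (Cohen's d = 0.3) demands far more children per group
# than a large one (d = 0.8) -- hypothetical d values for illustration.
small_effect_n = sample_size_per_group(0.3)  # 175 per group
large_effect_n = sample_size_per_group(0.8)  # 25 per group
```

The takeaway: modest, real-world effects of the kind community interventions plausibly produce require samples several times larger than those needed to detect dramatic effects, which is one reason null results from smaller studies are hard to interpret.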
When a new study of the effectiveness of an intervention is published, it’s important to go beyond the conclusion statement and dig deep to understand what impediments the researchers faced and why they may have failed to detect a signal amid the noise.
Our recently published study, “Measuring the association between behavioural services and outcomes in young autistic children,” was highlighted in the 18 January edition of Null & Noteworthy. We found that Canadian children whose parents reported that they had received services we classified as “behavioral” at least once during the preschool years did not show better outcomes later in childhood than children who received no such services.
We did not detect a significant effect, but several issues may have contributed to this null result. For example, our measure of adaptive behavior may not have been sensitive enough to pick up meaningful improvements in autistic children. And adaptive-behavior scores may decline during childhood as the gap widens between the skills expected of neurotypical children and those shown by autistic children.
Although we attempted to capture variation in the services delivered to children enrolled across Canada, it’s possible that the services participants used changed over time. We also found that families with higher household incomes accessed more services. Although behavioral services are publicly subsidized in all provinces, families with higher household incomes may access private services more easily, which complicates our ability to measure the effectiveness of publicly funded services. One of the biggest challenges we and other researchers have faced is that the type, intensity, duration and quality of interventions likely vary among regions and participants, but we lack the specific data to measure and control for this variation.
In our paper we make concrete recommendations for how to improve studies that aim to estimate the effects of autism interventions. These recommendations highlight three key points that readers should look out for to aid their understanding of scientific research on the effectiveness of autism interventions:
Was there a suitable study design? Because of the wide range of autism traits, using a suitable design helps ensure balance between comparator groups. For example, a randomized controlled trial may be an option for a groundbreaking therapeutic intervention. Demonstrating effectiveness in the community, however, typically requires a pragmatic observational design that does not randomize participants but allows for detection of important differences between comparable groups through the data collected.
Did children have access to a range of behavioral interventions? Given the substantial autism program differences across Canada, any multi-jurisdictional study must collect detailed data on eligibility, type, intensity and duration of the intervention of interest, as well as demographic data and baseline traits and functioning, so that similar types of interventions can be compared between groups that are matched on or adjusted for demographic and baseline differences.
Were measures appropriate and sensitive enough to capture important information about services? It’s challenging for parents to accurately recall and report the type, dose and intensity of interventions their children received, particularly when children receive multiple services over several years. Ideally, some measures could be gathered through service providers and include metrics of quality, such as provider training and fidelity or accuracy of implementing the intervention. Parent questionnaires should be designed to capture intervention intensity in a structured and uniform manner, be administered every few months by interviewers using memory aids and have relatively short recall periods. Also, autistic children often receive services in addition to the intervention being evaluated. Concomitant similar services may blur the statistical “signal” of the intervention of interest. Any health, education, mental health, social or community services that children receive should be recorded and an effort made to account for them in the analysis.
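The matching idea in the second point above can be sketched in miniature as exact matching on a couple of stratifying variables. All names and data here are hypothetical, and real studies typically match or adjust on many more covariates (often via propensity scores), but the mechanics are the same:

```python
from collections import defaultdict

def exact_match(records, keys=("age_band", "income_band")):
    """Pair each treated child with an untreated child from the same
    stratum (identical values on `keys`). Unmatched records are dropped."""
    pools = defaultdict(list)
    for r in records:
        if not r["treated"]:
            pools[tuple(r[k] for k in keys)].append(r)
    pairs = []
    for r in records:
        if r["treated"]:
            pool = pools[tuple(r[k] for k in keys)]
            if pool:
                pairs.append((r, pool.pop()))
    return pairs

# Hypothetical records; "outcome" stands in for a later adaptive-behavior score.
records = [
    {"id": 1, "treated": True,  "age_band": "3-4", "income_band": "high", "outcome": 72},
    {"id": 2, "treated": True,  "age_band": "4-5", "income_band": "low",  "outcome": 65},
    {"id": 3, "treated": False, "age_band": "3-4", "income_band": "high", "outcome": 70},
    {"id": 4, "treated": False, "age_band": "4-5", "income_band": "low",  "outcome": 66},
    {"id": 5, "treated": False, "age_band": "4-5", "income_band": "high", "outcome": 68},
]

pairs = exact_match(records)
diffs = [t["outcome"] - c["outcome"] for t, c in pairs]
avg_effect = sum(diffs) / len(diffs)  # crude matched-pair estimate
```

Comparing outcomes only within matched pairs keeps demographic differences, such as the income gradient in service access described above, from masquerading as a treatment effect.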
Conducting well-designed studies may be costly or difficult, but allocating public or household funds toward ineffective or inadequate autism services is clinically and ethically unjustified. It’s inappropriate to assume that failure to detect a treatment effect necessarily indicates an ineffective treatment; this subtlety is often overlooked and can lead people to prematurely disregard potentially effective interventions for autistic children. To advance the field, we need well-designed studies with adequate statistical power to detect effects, including studies of interventions delivered in community settings.
Cite this article: https://doi.org/10.53053/QIHE6081