
Spectrum: Autism Research News

Deep-learning model may accurately predict autism diagnosis

28 October 2021
Steady prediction: A new model that screens for autism based on medical records performs consistently across U.S. counties, suggesting that it may help address structural barriers such as geographic differences in medical record-keeping.

Courtesy of Ishanu Chattopadhyay

A new deep-learning model appears to outperform a widely used screening test at flagging toddlers with autism. The algorithm, described in Science Advances on 6 October, generates predictions based on patterns of conditions that often co-occur with autism.

“We’ve known for some time that children with autism suffer from much higher rates of many diseases, including immunological and gastrointestinal conditions,” says lead investigator Ishanu Chattopadhyay, assistant professor of medicine at the University of Chicago in Illinois. “In this study, we tried to leverage the underutilized aspects of the medical history to assess individual risk.”

Doctors typically screen children for autism at 18 and 24 months of age using parent questionnaires, such as the Modified Checklist for Autism in Toddlers (M-CHAT), the accuracy of which can be affected by cultural and language barriers. Most of the children the M-CHAT flags for further assessment — 85 percent — turn out not to have autism. Those false positives extend waiting times for specialist evaluations and delay diagnosis and intervention for children who do have the condition.

“The wait time from getting a positive in an M-CHAT screen to getting a targeted autism assessment might take a year,” Chattopadhyay says.

Because the new model is more accurate than the M-CHAT, it could whittle down the wait time for a diagnosis, Chattopadhyay says.

It’s unclear how well the model might work in a clinical setting, but the diagnostic delay for many with autism is so significant that “anything that helps, even if a little, could have value,” says Thomas Frazier, professor of psychology at John Carroll University in University Heights, Ohio.

Deep learning:

The researchers trained their model to identify diagnostic codes grouped into 17 categories of conditions associated with autism, including immunological disorders and infectious diseases. The algorithm combed the electronic health records of more than 4 million children aged 6 and younger, including 15,164 with autism, from a U.S. national insurance claims database.
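To make that feature-construction step concrete, here is a minimal sketch of grouping raw diagnostic codes into condition categories before modeling. The code prefixes, category names, and mapping below are illustrative assumptions, not the study's actual 17-category scheme:

```python
# Sketch: bucket raw diagnostic codes into condition categories.
# The ICD-style prefixes and the mapping are hypothetical
# placeholders; the study grouped codes into 17 categories.
from collections import Counter

CODE_TO_CATEGORY = {
    "D80": "immunological",     # hypothetical immunodeficiency prefix
    "A08": "infectious",        # hypothetical intestinal-infection prefix
    "K59": "gastrointestinal",  # hypothetical functional-GI prefix
}

def categorize(codes: list[str]) -> Counter:
    """Count how many of a child's diagnostic codes fall into
    each category, ignoring codes outside the mapping."""
    counts = Counter()
    for code in codes:
        category = CODE_TO_CATEGORY.get(code[:3])
        if category is not None:
            counts[category] += 1
    return counts

print(categorize(["D80.1", "A08.4", "Z00.1"]))
# Counter({'immunological': 1, 'infectious': 1})
```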

The algorithm compared the patterns of co-occurring conditions between the autistic and non-autistic children to generate an ‘autism comorbid risk score’ (ACoR), an estimate of how likely a child with a particular history of comorbidities is to later be diagnosed with autism. A score above a certain threshold indicates that a child should be referred for diagnostic testing and possible intervention.
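As a rough picture of the referral logic only (this is not the published ACoR computation, which derives its score from learned patterns of co-occurring conditions rather than fixed weights), a threshold rule over weighted category counts might look like the sketch below; the weights and cutoff are hypothetical:

```python
# Sketch: a threshold rule over weighted comorbidity-category counts.
# The weights and cutoff are hypothetical; the published model learns
# its score from patterns in the claims data, not fixed weights.
CATEGORY_WEIGHTS = {
    "immunological": 0.9,
    "infectious": 0.8,
    "gastrointestinal": 0.5,
}
REFERRAL_THRESHOLD = 1.2  # hypothetical cutoff

def risk_score(category_counts: dict[str, int]) -> float:
    """Weighted sum over a child's comorbidity-category counts."""
    return sum(CATEGORY_WEIGHTS.get(cat, 0.0) * n
               for cat, n in category_counts.items())

def should_refer(category_counts: dict[str, int]) -> bool:
    """A score at or above the threshold flags the child for a
    specialist diagnostic evaluation."""
    return risk_score(category_counts) >= REFERRAL_THRESHOLD

print(should_refer({"immunological": 1, "infectious": 1}))  # True (1.7 >= 1.2)
```

In any real deployment, the choice of threshold would trade off sensitivity against the kind of false-positive rate the article describes for the M-CHAT.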

The ACoR accurately identified about 82 percent of autistic children at just over 2 years of age; accuracy improved to 90 percent by age 4. Children flagged by the ACoR were at least 14 percent more likely to have autism than those identified by the M-CHAT in a 2019 study conducted at the Children’s Hospital of Philadelphia.

The team got similar results when they validated the model using records from 377 autistic and 37,635 non-autistic children, aged 6 and younger, who had been seen at the University of Chicago Medical Center between 2006 and 2018.

In both datasets, the model flagged children more than two years, on average, before they received a formal diagnosis. The researchers caution that delays in access to specialty care account for part of that lead, although most likely no more than a year of it.

The model’s accuracy did not differ based on the race or ethnicity of study participants. It also performed consistently across U.S. counties, which suggests that it may help address structural barriers such as geographic differences in medical record-keeping. And it discriminated between autism and various psychiatric conditions with more than 90 percent accuracy between the ages of 2 and 2.5 years.

Of the 17 categories of comorbidities, infections and immunological disorders were the most predictive of autism, the study shows.

Clinical applications:

“I definitely haven’t seen anyone approach this problem from this angle,” Frazier says. In his view, the tool would be best used as a complement to current screening approaches. “If you had an algorithm that was incredibly cheap to implement and spat out a probability score that could integrate with the M-CHAT findings and fit into the primary care workflow, that could be useful.”

Clinical implementation might involve having “a panel of comorbidities that could be screened for at every visit,” says Dwight German, professor of psychiatry at the University of Texas Southwestern in Dallas.

The researchers’ major next step involves running a prospective clinical trial to “compare our tool with existing tools to see whether we can cut down on false positives and reduce the [diagnostic] delay,” says Chattopadhyay.

Clinical studies are key to validating the approach and answering the questions that remain about the algorithm, including how effectively the model can distinguish between autism and the developmental conditions it is frequently confused with. And because the model’s accuracy peaks after children reach age 2, there is some concern that it may not flag children any earlier than doctors can.

“Especially in severe cases, behavioral changes would be quite noticeable to a general practitioner by that age,” German says. Frazier adds that he would like to see how well the model is able to identify autistic children who have low support needs.

Chattopadhyay and his colleagues also plan to test the tool’s ability to screen for a range of other conditions, he says. “This is a new class of algorithms for analyzing patient data that leverages comorbidities and medical history and seems to produce, in the case of autism and even other disorders that we’re looking at, clinically relevant predictive performance.”

Cite this article: https://doi.org/10.53053/NALU6283