This article is more than five years old. Autism research — and science in general — is constantly evolving, so older articles may contain information or theories that have been reevaluated since their original publication date.
A five-minute online questionnaire can diagnose autism with as much accuracy as the so-called gold standard diagnostic tests, according to unpublished findings presented Tuesday at the Autism Consortium 2011 symposium in Boston.
The questionnaire is based on artificial intelligence, according to lead investigator Dennis Wall, director of the Computational Biology Initiative at Harvard Medical School. Tested and validated on more than 6,000 individuals with autism, the test is 99 to 100 percent in agreement with a clinician’s diagnosis in detecting autism, and 92 percent accurate in identifying people who do not have the disorder, Wall reported.
Despite the numbers, the tool is intended for pre-screening, not for providing a formal diagnosis, Wall notes. “So far we’ve seen incredibly promising accuracy, including perfect sensitivity and near-perfect specificity in diagnosis.”
The two leading clinical instruments for diagnosing autism are the Autism Diagnostic Interview-Revised, or ADI-R, which consists of 93 questions, and the Autism Diagnostic Observation Schedule (ADOS), which typically involves a clinician observing the child for 29 specific behaviors. Each of these tests can take up to three hours.
The time required of a skilled clinician for diagnosis is one big reason that children in the United States are diagnosed with autism at an average age of 5.7 years, Wall says. More than a quarter of children with autism remain undiagnosed at 8 years.
Clinicians also tend to be heavily concentrated in large urban centers, especially on the two coasts, leaving families in rural areas and in the middle of the country facing longer delays and higher costs in receiving a diagnosis. “We need earlier diagnosis for more children, and in a more widespread way,” Wall says.
For the new tool, Wall, an expert in finding ways to mine biomedical data, turned to ‘machine-learning’ software — a type of artificial intelligence technology that can discern patterns in data in a way that allows it to mimic the decisions of a trained expert.
Wall’s group first fed his software data from nearly 1,600 children with autism enrolled in the Autism Genetic Resource Exchange (AGRE), a database of genetic and diagnostic information on families that have two children with autism.
The data included information collected during the ADI-R or the ADOS evaluation, along with whether the child had an autism diagnosis from a clinician. This information ‘trained’ the software in recognizing which data tend to lead to a positive diagnosis, and which to a negative one.
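The training step described above can be illustrated with a toy sketch. This is not the study's actual algorithm or data; it is a minimal "one-rule" learner, invented here for illustration, that scans each question and picks the single one whose answers best separate the clinician's positive and negative labels:

```python
# Toy stand-in for the training step: each record is a list of yes/no
# answers to diagnostic questions, paired with a clinician's label.
# The learner picks the one question whose answers best separate the
# labels in the training data (all names and data here are invented).

def train_one_rule(records, labels, n_questions):
    """Return (best_question, answer->label mapping) minimizing training error."""
    best = None
    for q in range(n_questions):
        # Map each observed answer to the majority label seen with it.
        counts = {}
        for rec, lab in zip(records, labels):
            counts.setdefault(rec[q], {}).setdefault(lab, 0)
            counts[rec[q]][lab] += 1
        mapping = {ans: max(c, key=c.get) for ans, c in counts.items()}
        errors = sum(mapping[rec[q]] != lab for rec, lab in zip(records, labels))
        if best is None or errors < best[0]:
            best = (errors, q, mapping)
    return best[1], best[2]

# Toy data: three questions per child, 1 = behavior present.
records = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]]
labels = ["autism", "autism", "no autism", "no autism"]

q, mapping = train_one_rule(records, labels, 3)
print(q, mapping)  # question 0 separates this toy data perfectly
```

Real systems like the one described use far richer models, but the principle is the same: the labeled training data teaches the software which answer patterns tend to accompany a positive diagnosis.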
He then validated the test against nearly 2,000 children from the Simons Simplex Collection (SSC), a database of families that have one child with autism and unaffected parents and siblings, and 424 individuals from the Autism Consortium. The SSC is funded by SFARI.org’s parent organization.
The software is in near-perfect agreement with clinicians on positive diagnoses. It is slightly less specific, at 92 percent, meaning that it wrongly diagnoses autism in some people who are unaffected.
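The sensitivity and specificity figures quoted here are standard screening metrics, computed from a confusion matrix. The counts below are illustrative only, chosen to reproduce the reported percentages, not the study's actual numbers:

```python
# Sensitivity and specificity from a confusion matrix.
# These counts are made up for illustration; they are not the study's data.
true_pos, false_neg = 99, 1   # affected children: correctly detected vs. missed
true_neg, false_pos = 92, 8   # unaffected children: cleared vs. wrongly flagged

sensitivity = true_pos / (true_pos + false_neg)  # fraction of real cases caught
specificity = true_neg / (true_neg + false_pos)  # fraction of non-cases cleared

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
# prints "sensitivity=99%, specificity=92%"
```

For a screening tool, high sensitivity matters most: a false positive is caught at the follow-up clinical evaluation, while a false negative means a missed child.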
Still, the results are surprisingly accurate, say experts.
“We have to improve the precision with which we assess people for autism, and provide much better, wider access to assessment,” says William Barbaresi, associate chief of developmental medicine at Children’s Hospital Boston. “This work describes an incredibly elegant approach to accomplishing that.”
Wall’s team found that to reach its diagnosis, the software primarily relies on answers to just 7 of the 93 questions in the ADI-R, and on 8 of the 29 observation items in the ADOS. The 15 questions encompass the three core deficits of autism: language and communication, social interaction and playing with objects.
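One simple way to pare a long instrument down to its most informative items, sketched below with invented toy data, is to score each question by how well its answer alone predicts the clinician's label and keep only the top-ranked ones. The study's actual item-selection method is not detailed here; this is just one plausible illustration of the idea:

```python
# Hypothetical feature-selection sketch: rank questions by how well each
# one, taken alone, predicts the clinician's label, then keep the top k.
# All data below is invented for illustration.

def question_accuracy(answers, labels):
    """Best achievable accuracy predicting the label from one answer alone."""
    counts = {}
    for ans, lab in zip(answers, labels):
        counts.setdefault(ans, {}).setdefault(lab, 0)
        counts[ans][lab] += 1
    correct = sum(max(c.values()) for c in counts.values())
    return correct / len(labels)

def top_questions(records, labels, k):
    n_questions = len(records[0])
    scores = [(question_accuracy([r[q] for r in records], labels), q)
              for q in range(n_questions)]
    scores.sort(reverse=True)
    return [q for _, q in scores[:k]]

records = [[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 0]]
labels = [1, 1, 0, 0]
print(top_questions(records, labels, 1))  # question 0 alone predicts perfectly
```

Dropping low-ranked items this way trades a little redundancy for a much shorter questionnaire, which is what makes a five-minute version possible.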
Based on these questions, the team developed an online questionnaire that a clinician or caregiver can complete in a mere five minutes. The researchers have set up a Facebook-based version of the tool that has since been used to assess some 2,000 people who had already been evaluated for autism by a clinician.
This tool came to the same conclusion as the clinician 98 percent of the time. “We’ve validated it to have high accuracy with subjects ranging in age from 13 months to 49 years,” says Wall. The tool has also proven accurate in detecting autism in people with various diagnoses, including Asperger syndrome, pervasive developmental disorder-not otherwise specified and classic autism.
Via YouTube, the researchers are also soliciting home videos of children who have been formally evaluated for autism. Trained researchers score the roughly three-minute videos on the 15 questions. A software tool then provides an instant diagnosis, which is compared with the child’s formal diagnosis from a clinician.
“So far we’ve seen tremendously high accuracy,” Wall says. His group is also working on ways to have software automatically score the videos.
Wall is careful to emphasize the limitations of a rapid online diagnosis, however.
“These tools aren’t meant to be a replacement of clinical practices by any stretch of the imagination,” he says. “We see this as assisting clinicians and caregivers by providing them with a preliminary risk assessment, especially for children in remote, rural areas.”
This article has been amended from the original. It has been altered to correct the number and source of participants used for building and validating the tool.