This article is more than five years old. Autism research — and science in general — is constantly evolving, so older articles may contain information or theories that have been reevaluated since their original publication date.
In February, the Simons Foundation Autism Research Initiative hosted a workshop to weigh the benefits and challenges of using online tools — such as cognitive tests and behavioral questionnaires — to collect data from children with autism and their parents or teachers.
Compared with clinical evaluations, online assessments are cheaper, more convenient and more easily scalable to large populations. However, concerns over their validity and consistency persist.
Much of the debate about online assessment hinges on the reliability of remote testing environments. No trained clinician is present to ensure that evaluations are conducted as designed, and the resulting data cannot be used for an official diagnosis. On the other hand, some individuals with autism become anxious in clinical settings, making accurate in-person evaluations difficult.
New analyses discussed at the workshop suggest that some online tools are more accurate than scientists had expected, and the results align with data collected via more traditional clinical approaches.
What do you think?
- Are tests conducted by parents and teachers in remote environments necessarily less reliable than those conducted in the clinic? How can we better assess this?
- Outside the controlled environment of a clinical testing site, respondents sometimes experience questionnaire fatigue and fail to complete long or cognitively demanding tests. How can existing questionnaires be adapted to reduce redundant queries? How might unconventional formats, such as gaming platforms, be employed to incentivize engagement among children?
- One of the key motivations for parents to participate in autism studies is getting access to their children’s test results. However, current research guidelines typically don’t allow the sharing of results from online evaluations conducted in the absence of a clinician. How can policy guidelines be structured to allow for responsible sharing of online test results?
Share your thoughts in the comments section below. Or join the Online phenotyping discussion on the SFARI Forum for researchers to weigh in on these issues and help identify next steps for moving these new tools forward.