Spectrum: Autism Research News

An animated mouse reenacts common behavioral experiments and can be used to train algorithms that automatically track lab animals’ movements. The approach could help researchers analyze mouse behavior more efficiently.

Researchers typically use video cameras to capture the behavior and movements of mice that model certain autism traits. They can then use machine-learning algorithms to automatically label and track specific body parts, such as a mouse’s fingers or spine.

Training the algorithms can be laborious, however. To train a widely used tracking tool called DeepLabCut, for example, scientists need to manually label specific points on an animal in about 100 to 200 still frames. Pooling data from multiple videos presents additional challenges and may make the training process even longer.
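
For readers unfamiliar with that workflow, DeepLabCut is driven from Python. The sketch below shows the typical labeling-and-training steps; the project name and video paths are hypothetical placeholders, and exact options vary by DeepLabCut version.

```python
# Minimal sketch of the standard DeepLabCut workflow described above.
# Project name, experimenter and video paths are placeholders.
import deeplabcut

# Create a project around the raw behavior videos.
config = deeplabcut.create_new_project(
    "wheel-running", "lab", ["videos/mouse01.mp4"], copy_videos=True
)

# Extract candidate still frames, then hand-label body parts in the GUI.
# This manual step is what typically requires ~100-200 labeled frames.
deeplabcut.extract_frames(config, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config)

# Build the training set, train, and apply the network to new footage.
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.analyze_videos(config, ["videos/mouse02.mp4"])
```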

“If the lighting [is] different, or the camera angle, it might throw off some of the machine-learning tracking systems,” says Timothy Murphy, professor of psychiatry at the University of British Columbia in Vancouver, Canada, who led the new work. “We wanted to have a more generic scene.”

To make it easier to combine data from multiple videos, Murphy and his colleagues created an animated model of a mouse and used it to generate synthetic videos that mimic footage of real mice. The animated videos could speed up the process of training algorithms, Murphy says, because researchers need to label features of interest on the virtual mouse only once.
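
That "label once" step can be pictured as a projection problem: marker positions defined a single time on the 3D model are mapped into each rendered frame by the virtual camera, so 2D training labels come for free. The toy sketch below illustrates the idea with an assumed pinhole camera and made-up marker coordinates; the team's actual pipeline runs inside 3D rendering software, not code like this.

```python
# Toy sketch of the "label once" idea: 3D markers defined once on the model
# are projected into every synthetic frame to produce 2D labels automatically.
# Camera parameters and marker values are illustrative assumptions.
import numpy as np

# Hypothetical 3D marker coordinates (metres) in camera space for one frame,
# e.g. exported from the animation software along with the rendered image.
markers_3d = {
    "spine_base": np.array([0.01, -0.02, 0.30]),
    "left_paw":   np.array([0.03,  0.01, 0.28]),
}

# Assumed camera intrinsics: focal length and principal point in pixels.
fx = fy = 800.0
cx, cy = 320.0, 240.0

def project(point_3d):
    """Project a camera-space 3D point onto the image plane (pixels)."""
    x, y, z = point_3d
    return np.array([fx * x / z + cx, fy * y / z + cy])

labels_2d = {name: project(p) for name, p in markers_3d.items()}
print(labels_2d)  # per-frame 2D labels, generated without manual clicking
```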

The virtual mouse, complete with skin, fur and whiskers, is based on computed tomography scans of three female mice. The team used artificial-intelligence tools to make the animated mouse appear realistic and added slight variations to the videos, such as tweaking the mouse's movements or altering the lighting, to diversify the training data.
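
That variation step resembles what machine-learning researchers call domain randomization: each synthetic clip is rendered with slightly different scene parameters. The sketch below shows the general idea with assumed parameter names and ranges; the actual variations were applied inside the team's rendering pipeline, not by code like this.

```python
# Illustrative sketch of per-clip randomization (domain randomization).
# Parameter names and ranges are assumptions for illustration only.
import random

def sample_render_settings():
    return {
        "light_intensity": random.uniform(0.5, 1.5),    # brighter or dimmer scene
        "light_angle_deg": random.uniform(0.0, 360.0),  # light source position
        "camera_jitter_px": random.gauss(0.0, 2.0),     # small camera offsets
        "gait_speed_scale": random.uniform(0.8, 1.2),   # tweak the mouse's movements
        "fur_hue_shift": random.uniform(-0.05, 0.05),   # slight appearance changes
    }

# One randomized setting per synthetic clip diversifies the training data.
settings = [sample_render_settings() for _ in range(20)]
```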

Training simulation:

To test the approach, the team used a synthetic video of the animated mouse running on a wheel. They trained a DeepLabCut algorithm to track 28 markers along the animated animal’s spine and left limbs. They then used the algorithm to analyze videos of real mice and assessed its performance. To compare the synthetically trained algorithm with more traditional methods, the researchers also hand-labeled frames from real footage of a mouse.

The algorithm trained on synthetic videos performs about as well as manual approaches: Its accuracy is comparable to that of an algorithm trained on 200 hand-labeled frames, the researchers reported in April in Nature Methods. The synthetically trained algorithm also produces only small errors; the positions of its markers are, on average, 6.7 pixels away from their hand-labeled locations.
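
That figure is a distance between predicted and hand-labeled marker positions. Assuming a mean Euclidean pixel error, which is a common choice for this kind of comparison, the calculation looks like the sketch below; the arrays are placeholder data, not values from the paper.

```python
# Sketch of the kind of accuracy metric quoted above: the average pixel
# distance between predicted and hand-labeled marker positions.
import numpy as np

predicted = np.array([[102.0, 54.0], [210.5, 98.2], [330.1, 140.7]])   # (x, y) per marker
hand_labeled = np.array([[100.0, 50.0], [215.0, 95.0], [325.0, 145.0]])

per_marker_error = np.linalg.norm(predicted - hand_labeled, axis=1)
mean_error_px = per_marker_error.mean()
print(f"mean error: {mean_error_px:.1f} px")
```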

The researchers used the synthetic videos to track the positions of specific body parts in 3D. Traditional approaches typically require several cameras set up at different angles to do the same.

With the virtual mouse, the researchers could estimate the 3D positions of body parts using only one camera view. And because the virtual mouse model inherently captures 3D information, researchers can readily translate the 2D position of a body marker to 3D.
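
One way to see why a single view can suffice: if the model supplies the missing depth for a detected marker, its 2D pixel position can be back-projected into 3D. The toy sketch below assumes a pinhole camera and a placeholder depth value; it illustrates the geometry rather than reproducing the paper's method.

```python
# Toy illustration of lifting a 2D detection to 3D when the virtual model
# supplies the missing depth. Intrinsics, marker and depth are placeholders.
import numpy as np

fx = fy = 800.0
cx, cy = 320.0, 240.0

def backproject(u, v, depth):
    """Invert the pinhole projection: pixel (u, v) at a known depth -> 3D point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# 2D marker position from the tracking algorithm, depth taken from the 3D model.
marker_2d = (352.4, 198.7)
depth_from_model = 0.29  # metres, e.g. read from a rendered depth map
marker_3d = backproject(*marker_2d, depth_from_model)
print(marker_3d)
```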

The virtual mouse could help researchers train a variety of algorithms that analyze behavior, the researchers say. Creating an animated video takes time, but the approach may be particularly useful for researchers who want to track numerous body parts, or when several groups are using the same experimental setup.

Murphy and his colleagues say they plan to use the approach to look for subtle behavioral differences in mice that recapitulate autism traits.