A distinct region of the social brain is activated when viewing interactions between two people, according to unpublished research presented Sunday at the 2012 Society for Neuroscience annual meeting in New Orleans.
Based on functional magnetic resonance imaging, the study shows that one area of the posterior superior temporal sulcus (pSTS), a part of the brain involved in forming impressions of other people, responds specifically to video clips of two stick figures interacting with each other.
The findings may shed light on the neurological basis of the social difficulties in autism. “People with autism have a significant deficit in understanding of social interactions they take part in but also those they observe,” says Kami Koldewyn, a postdoctoral researcher in the laboratory of Nancy Kanwisher at the Massachusetts Institute of Technology, who presented the work.
Little is known about how and in which brain regions healthy people process different types of social information, such as moving bodies or facial expressions. “There’s a lot of controversy in the field about whether these social regions are different blobs, or just one blob doing different things,” says Koldewyn.
Koldewyn and her colleagues scanned the brains of 20 healthy adults while they viewed point-light displays, short videos of animated stick figures. These movies are commonly used in studies of biological motion, the movement of bodies rather than objects.
In 18 of the participants, the researchers identified a distinct portion of the pSTS that responds to movies of two figures interacting with each other. The region shows much less activation when the two figures move independently (one making walking motions, the other swinging a golf club) or when they make the same motions as in the interacting condition but face away from each other.
The results support the idea of separate ‘blobs’: that different subregions of the brain process different types of social information.
The different functional areas aren’t completely discrete, though. The interaction region overlaps with an area previously identified as being involved in facial recognition. “Some people have a big face blob and a small interaction blob,” whereas others have the opposite, Koldewyn says.
On average, these two areas overlap by about 25 percent. Excluding the overlap results in an even more dramatic pattern. One cluster of neurons is uniquely responsive to interaction, and another cluster to faces.
“You really can spatially spread them apart,” says Joseph DeSouza, assistant professor of systems neuroscience at York University in Toronto, Canada, who was not involved in the work.
Koldewyn aims to scan people with autism to study what happens in this region when they watch social interactions.
She is also investigating whether the region is activated in neurotypical individuals by still photographs of people interacting, or by interactions between objects, represented by animations of moving dots.
Preliminary data suggest that the region doesn’t respond strongly to interactions between objects. “It wants the human information,” Koldewyn says.