Brain's ability to synchronize with voice sounds may be linked to learning new words
Recent research on the synchronization of speech motor rhythms (the coordinated movements of the tongue, lips and jaw that produce speech) with speech audio rhythms shows that some people's brains spontaneously adapt to align with the rhythm of the voices they hear, while others' do not.
The study was carried out by researchers from the Institute of Neurosciences of the University of Barcelona (NeuroUB), the Bellvitge Biomedical Research Institute (IDIBELL) and New York University (USA).
According to the study, published in Nature Neuroscience, these patterns reflect differences in functional and structural features of the brain's speech network, as well as in the ability to learn new words. The findings could help assess speech and cognitive development in children. The study was led by ICREA professor Ruth de Diego-Balaguer and by David Poeppel, from New York University (USA), with the participation of UB researcher Joan Orpella and other experts from New York University.
Two different patterns
Humans are good at synchronizing body movements with sound, for instance, when we move our feet or head to the rhythm of a song. This happens without trying or training, and it has even been observed in babies. Most of the current research in this field has focused on how body movements are driven by the rhythm of music, but little is known about how this synchronization works when it comes to speech.
Aiming to probe the link between motor rhythms and speech audio signals, the researchers designed a seemingly simple task: for one minute, participants had to listen to a rhythmical syllable sequence ("la", "di", "fum"...) while whispering the syllable "tah". The analysis revealed an unexpected pattern: the population splits into two groups. Some people spontaneously synchronized their whispering with the sequence (good synchronizers), while others were unaffected by the external rhythm (bad synchronizers). "This effect is surprisingly strong and stable over time", says Ruth de Diego-Balaguer, researcher at the Cognition and Brain Plasticity research group of IDIBELL and the UB. "In fact", she continues, "we reproduced these patterns in more than 300 people under different conditions; for instance, the same good and bad synchronizers behaved the same way when they repeated the task on the same day, one week later and one month later".
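The article does not say how synchronization was quantified. A common approach in this literature is to compare the phase of the heard and produced speech envelopes, for example with a phase-locking value (PLV). The sketch below is a minimal illustration of that idea, assuming two mono recordings at the same sampling rate; the file names, frequency band and syllable rate are hypothetical, not parameters reported in the study.

```python
# Minimal sketch: phase-locking value (PLV) between the envelope of a heard
# syllable sequence and the envelope of a participant's whispered "tah".
# File names and band limits are assumptions for illustration only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, hilbert

def envelope(signal, fs, band=(3.5, 5.5)):
    """Amplitude envelope, band-pass filtered around an assumed syllable rate."""
    env = np.abs(hilbert(signal.astype(float)))
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, env)

def phase_locking_value(x, y):
    """PLV between two signals: 1 = perfectly phase-locked, 0 = no locking."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs_stim, stimulus = wavfile.read("syllable_sequence.wav")  # heard rhythm
fs_resp, whisper = wavfile.read("whispered_tah.wav")       # produced rhythm
assert fs_stim == fs_resp, "resample first if the rates differ"

n = min(len(stimulus), len(whisper))
plv = phase_locking_value(envelope(stimulus[:n], fs_stim),
                          envelope(whisper[:n], fs_resp))
print(f"speech-to-speech PLV: {plv:.2f}")  # higher values = stronger synchrony
```

A measure of this kind yields one number per participant, which is what makes it possible to split a sample into "good" and "bad" synchronizers.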
Differences in neural connections and behaviour
Given such different patterns, the researchers studied whether these variations had implications for brain organization and behaviour. To examine physiological differences, they analysed magnetic resonance data from the participants acquired with diffusion-weighted MRI, a technique that enables the reconstruction of the white matter fibres connecting different brain regions. The results show that good synchronizers have more white matter in the pathways connecting speech-perception areas (listening) with speech-production areas (speaking).
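For readers curious what diffusion-weighted MRI analysis looks like in practice, the sketch below shows one standard preliminary step, fitting a diffusion tensor per voxel and computing fractional anisotropy (a common white matter index), using the open-source DIPY library. This is generic, tutorial-style usage under assumed placeholder file names, not the authors' pipeline.

```python
# Hedged sketch of a standard diffusion-MRI step with DIPY: fit a diffusion
# tensor in every voxel and compute fractional anisotropy (FA), a widely used
# index of white matter microstructure. File names are placeholders and this
# does not reproduce the analysis reported in the study.
from dipy.io.image import load_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel

data, affine = load_nifti("dwi.nii.gz")                  # 4D diffusion-weighted volume
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")  # acquisition scheme files
gtab = gradient_table(bvals, bvecs=bvecs)

tensor_fit = TensorModel(gtab).fit(data)  # per-voxel tensor fit
fa = tensor_fit.fa                        # 3D FA map with values in [0, 1]
print("mean FA in non-zero voxels:", float(fa[fa > 0].mean()))
```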
They also ran a magnetoencephalography protocol to record participants' neural activity while they passively listened to rhythmical sequences. Good synchronizers showed stronger brain-stimulus alignment than the other group, and did so in the brain area involved in speech motor planning. "This implies that speech production-related areas are also involved in speech perception, which may help us monitor the rhythm of external voices", notes the researcher.
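Brain-stimulus alignment of this kind is typically quantified by comparing the recorded neural signal with the heard speech envelope at the syllable rate, for instance via spectral coherence. The toy example below illustrates the idea on synthetic data; the sampling rate, the ~4.5 Hz rhythm and the noise level are assumptions, and none of this reproduces the study's analysis.

```python
# Toy sketch of brain-stimulus alignment: magnitude-squared coherence between
# a simulated MEG sensor time series and a stimulus envelope that oscillates
# at an assumed syllable rate. All parameters are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 200.0                                # common sampling rate (assumption)
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)              # one minute of data

stim_envelope = 1 + np.cos(2 * np.pi * 4.5 * t)                   # ~4.5 Hz rhythm
meg_sensor = 0.3 * np.cos(2 * np.pi * 4.5 * t + 0.8) + rng.normal(size=t.size)

freqs, coh = coherence(meg_sensor, stim_envelope, fs=fs, nperseg=int(4 * fs))
syllable_band = (freqs >= 4.0) & (freqs <= 5.0)
print("peak coherence near the syllable rate:", float(coh[syllable_band].max()))
```

In a real analysis this value would be computed per sensor or source region, so that alignment can be localized to areas such as those involved in speech motor planning.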
Finally, the researchers examined behavioural differences between the two groups. "We tested whether good and bad synchronizers differed in learning new words heard in continuous speech, and we saw that good synchronizers perform better than the other group", says Ruth de Diego-Balaguer.
A methodology that opens new avenues for research
The experiment in this study could serve to characterize individual differences and advance language research. "This methodology can help uncover effects that were hidden by grouping together populations with different neural and behavioural traits. We also think this test could support the early diagnosis of some pathologies (such as Alzheimer's, Parkinson's or multiple sclerosis) and help assess speech and cognitive development in children", concludes the researcher.