
Brain waves influence how we hear words

Summary: A new study shows that brain wave timing shapes our perception of speech: more likely sounds and words are perceived when stimuli arrive during less excitable brain wave phases, while less likely sounds and words are perceived during more excitable phases.

Using ambiguous speech stimuli and MEG recordings, they showed how neural timing affects language comprehension. This research has significant implications for theories of predictive coding in speech perception.

Highlights:

  1. Brain wave timing influences the perception of speech sounds and words.
  2. More probable sounds and words are perceived during less excitable brain wave phases.
  3. The results support the role of neural timing in language comprehension and predictive coding.

Source: Max Planck Institute

The timing of our brain waves shapes how we perceive our environment. We are more likely to perceive events when their timing coincides with that of the relevant brain waves.

Lead scientist Sanne ten Oever and her co-authors sought to determine whether neural timing also shapes speech perception. Is the probability of speech sounds or words encoded in our brain waves, and is this information used to recognize words?

The team first created ambiguous stimuli for sounds and words. For example, the initial sounds in dat and gat differ in probability: “d” is more common than “g”.

The Dutch words dat (“that”) and gat (“hole”) also differ in word frequency: dat is more common than gat. For each stimulus pair, the researchers created a spoken stimulus that was in between.
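
For readers who want a concrete picture, here is a minimal sketch of how an in-between stimulus could be produced, assuming two endpoint recordings and a naive linear mix; the file names are hypothetical, and real morphing interpolates acoustic parameters (such as formant trajectories) rather than raw waveforms.

```python
import numpy as np
import soundfile as sf  # any WAV I/O library would do

# Hypothetical endpoint recordings; assumed to share one sample rate.
dat, sr = sf.read("dat.wav")
gat, _ = sf.read("gat.wav")

# Naive linear mix of the raw waveforms. Real morphing interpolates
# acoustic parameters (e.g., formant trajectories) instead.
n = min(len(dat), len(gat))
alpha = 0.5  # 0.0 = unambiguous "dat", 1.0 = unambiguous "gat"
ambiguous = (1 - alpha) * dat[:n] + alpha * gat[:n]

sf.write("ambiguous_dat_gat.wav", ambiguous, sr)
```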

Next, participants listened to each ambiguous stimulus and were asked to select what they thought they heard (e.g., dat or gat). The team used magnetoencephalography (MEG) to record the timing of their brain waves.

Excitable phases

The researchers found that brain waves biased perception toward more likely sounds or words when stimuli were presented during a less “excitable” brain wave phase, and toward less likely sounds or words when stimuli were presented during a more “excitable” phase.
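
A minimal sketch of the kind of analysis this implies, with toy data standing in for real MEG epochs: estimate the oscillatory phase at stimulus onset (here via a band-pass filter and Hilbert transform; the theta band, single channel, and response coding are all assumptions for illustration), then bin trials by phase and ask whether the proportion of “frequent word” reports varies across bins.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_at_onset(signal, fs, band=(4, 8)):
    """Instantaneous phase of a band-limited signal (theta band assumed)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, signal)))

# Toy stand-ins for epoched MEG data: one channel per trial,
# stimulus onset at sample `onset`, binary behavioral responses.
rng = np.random.default_rng(0)
fs, onset = 600, 300
trials = rng.standard_normal((200, 600))      # 200 trials of 1 s
chose_frequent = rng.integers(0, 2, 200)      # 1 = reported "dat"

phases = np.array([phase_at_onset(tr, fs)[onset] for tr in trials])

# Bin trials by onset phase; bias = P(reporting the frequent word).
edges = np.linspace(-np.pi, np.pi, 9)
idx = np.digitize(phases, edges) - 1
for k in range(len(edges) - 1):
    mask = idx == k
    if mask.any():
        print(f"phase bin {k}: P(frequent) = {chose_frequent[mask].mean():.2f}")
```

With real recordings, the toy trials would be replaced by epoched MEG data and the phase dependence tested statistically, for example against a permutation null.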

This means that both the probability of an event and its timing influenced what people perceived. Brain regions classically associated with processing speech sounds versus words were sensitive to the likelihood of sounds versus words, respectively. Computational modeling confirmed the relationship between neural timing and perception.
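
The gist of that modeling result can be illustrated with a toy phase-coding model; this is a simplification of this class of account, not the authors' published model. The threshold values and the shape of the excitability cycle below are assumptions: the more frequent word gets a lower threshold, so its unit crosses threshold earlier in the cycle, and an ambiguous input is read out as whichever word's characteristic phase is nearest.

```python
import numpy as np

# Two word units that share an ambiguous input but differ in activation
# threshold, reflecting word frequency (threshold values are assumptions).
THRESHOLDS = {"dat": 0.6, "gat": 0.9}  # frequent word -> lower threshold

def activation_phase(threshold):
    """Phase at which excitability first crosses `threshold`.

    Excitability is modeled as (1 + cos(phase)) / 2, rising from 0
    at phase -pi (least excitable) to 1 at phase 0 (most excitable)."""
    return -np.arccos(2 * threshold - 1)

# Each word's characteristic activation phase: the low-threshold
# ("likely") unit fires earlier in the cycle, at a less excitable phase.
CHAR_PHASE = {w: activation_phase(t) for w, t in THRESHOLDS.items()}

def perceive(phase):
    """Read an ambiguous input out as the word with the nearest phase."""
    return min(CHAR_PHASE, key=lambda w: abs(phase - CHAR_PHASE[w]))

for phase in np.linspace(-np.pi, 0, 7):
    excitability = (1 + np.cos(phase)) / 2
    print(f"input at {phase:+.2f} rad (excitability {excitability:.2f}) "
          f"-> heard as {perceive(phase)}")
```

Inputs arriving at low-excitability phases are read out as the frequent word, and inputs arriving at high-excitability phases as the infrequent one, matching the direction of the reported bias.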

“We conclude that brain waves provide a temporal structure that improves the brain’s ability to predict and process speech based on the probability of linguistic units,” says Ten Oever.

“Predictable sounds and words have a lower activation threshold, and our brain waves reflect this. Knowledge of how likely something is and of what it is (which phoneme or word) work hand in hand to create an understanding of language.”

Predictive coding

“Our study has important implications for theories of predictive coding,” adds senior author Andrea Martin.

“We show that the timing (or phase) of information processing has direct consequences for whether something is interpreted as a more or less likely event, determining the words or sounds we hear.

“In the fields of speech and language processing, the emphasis has been on the role of neural oscillations in neural communication. However, we show that their phase-coding properties are also used to interpret speech and recognize words.”

About this speech processing and neuroscience research news

Author: Anniek Corporaal
Source: Max Planck Institute
Contact: Anniek Corporaal – Max Planck Institute
Image: The image is credited to Neuroscience News.

Original Research: Open access.
“Brain waves shape the words we hear” by Sanne ten Oever et al. PNAS


Abstract

Brain waves shape the words we hear

Neural oscillations reflect fluctuations in excitability, which bias the perception of ambiguous sensory inputs. Why this bias occurs is not yet fully understood.

We hypothesized that neuronal populations representing probable events are more sensitive and thus become active during earlier oscillatory phases, when the network as a whole is less excitable.

Perception of ambiguous inputs presented during less excitable phases should therefore be biased toward frequent or predictable stimuli with lower activation thresholds.

Here we show such a frequency bias in speech recognition using psychophysics, magnetoencephalography (MEG), and computational modeling.

With MEG, we found a double dissociation, in which the phase of oscillations in the superior temporal gyrus and the middle temporal gyrus biased word identification behavior based on phonemic and lexical frequencies, respectively. This finding was replicated in a computational model.

These results demonstrate that oscillations provide a temporal ordering of neuronal activity based on the sensitivity of separable neuronal populations.

News source: neurosciencenews.com