Abstract
The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent of the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example, about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation processes of these more fine-grained acoustic details and assessed how they change as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while the participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to their lip movements. We found that the visual cortex is able to track the unheard natural modulations of resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a mere visual to a phonological representation. Aging especially affects the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.
| Original language | English |
|---|---|
| Pages (from-to) | 4818-4833 |
| Number of pages | 16 |
| Journal | Cerebral Cortex |
| Volume | 32 |
| Issue number | 21 |
| Early online date | 22 Jan 2022 |
| DOIs | |
| Publication status | Published - 1 Nov 2022 |
Keywords
- low-frequency speech tracking
- MEG
- multisensory processing
- visual speech processing
ASJC Scopus subject areas
- Cellular and Molecular Neuroscience
- Cognitive Neuroscience
Fingerprint
Dive into the research topics of 'Cortical tracking of formant modulations derived from silently presented lip movements and its decline with age'. Together they form a unique fingerprint.

Research output
- 9 Citations
- 1 Preprint
Cortical tracking of unheard formant modulations derived from silently presented lip movements and its decline with age
Suess, N. (Lead / Corresponding author), Hauswald, A., Reisinger, P., Rösch, S., Keitel, A. & Weisz, N., 11 May 2021, bioRxiv, 34 p.
Research output: Working paper/Preprint › Preprint
Open Access
Datasets
Visual speech processing in prelingual deaf, postlingual deaf and normal hearing populations
Suess, N. (Creator), Hauswald, A. (Creator), Keitel, A. (Creator) & Weisz, N. (Creator), University of Dundee, 6 Nov 2019
DOI: https://osf.io/ndvf6/overview
Dataset
Silent lip reading MEG data
Benz, K. R. (Creator), Suess, N. (Creator), Hauswald, A. (Creator), Reisinger, P. (Creator), Rösch, S. (Creator), Keitel, A. (Creator) & Weisz, N. (Creator), Austrian NeuroCloud, 12 Mar 2025
DOI: 10.60817/yx20-a165, https://bids-datasets.data-pages.anc.plus.ac.at/auditory/ns_lipreading/
Dataset