Journal article in Cortex, 2021

Somatosensory contribution to audio-visual speech processing

Abstract

Recent studies have demonstrated that a listener's auditory speech perception can be modulated by somatosensory input applied to the facial skin, suggesting that perception is an embodied process. However, speech perception is a multisensory process involving both the auditory and visual modalities. It is unknown whether, and to what extent, somatosensory stimulation of the facial skin modulates audio-visual speech perception. If speech perception is an embodied process, then somatosensory stimulation applied to the perceiver should influence audio-visual speech processing. Using the McGurk effect (the perceptual illusion that occurs when a sound is paired with the visual representation of a different sound, resulting in the perception of a third sound), we tested this prediction with a simple behavioral paradigm and, at the neural level, with event-related potentials (ERPs) and their cortical sources. We recorded ERPs from 64 scalp sites in response to congruent and incongruent audio-visual speech, randomly presented with and without somatosensory stimulation associated with facial skin deformation. Subjects judged whether the production was /ba/ or not under all stimulus conditions. In the congruent audio-visual condition subjects identified the sound as /ba/, but not in the incongruent condition, consistent with the McGurk effect. Concurrent somatosensory stimulation improved participants' ability to correctly identify the production as /ba/ relative to the non-somatosensory condition, in both congruent and incongruent conditions. The ERP in response to somatosensory stimulation in the incongruent condition reliably diverged 220 ms after stimulation onset. Cortical sources were estimated around the left anterior temporal gyrus, the right middle temporal gyrus, the right posterior superior temporal lobe, and the right occipital region. The results demonstrate a clear multisensory convergence of somatosensory and audio-visual processing, in both behavioral and neural measures, consistent with the perspective that speech perception is a self-referenced, sensorimotor process.
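To make the divergence analysis above concrete, here is a minimal sketch of how one might locate the onset at which ERP waveforms from two conditions reliably differ. It is written in Python with NumPy/SciPy on simulated single-trial data; the sampling rate, effect latency, trial counts, and the consecutive-sample criterion are all illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy import stats

# --- Simulated per-trial ERP data (illustrative, not the study's data) ---
rng = np.random.default_rng(0)
sfreq = 500                                # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / sfreq)    # -100 ms to 500 ms around onset
n_trials = 60                              # trials per condition (assumed)

def simulate(effect_latency=None):
    """Gaussian noise trials, plus a sustained deflection after effect_latency (s)."""
    data = rng.normal(0.0, 1.0, size=(n_trials, times.size))
    if effect_latency is not None:
        data[:, times >= effect_latency] += 1.0   # condition effect
    return data

cond_a = simulate()                     # e.g., without somatosensory stimulation
cond_b = simulate(effect_latency=0.22)  # e.g., with somatosensory stimulation

# --- Pointwise t-test across trials at every time sample ---
t_vals, p_vals = stats.ttest_ind(cond_a, cond_b, axis=0)

# --- Divergence onset: first run of >= 10 consecutive significant samples ---
sig = p_vals < 0.05
run_len = 10   # ~20 ms at 500 Hz; guards against isolated false positives
onset = None
for i in range(sig.size - run_len + 1):
    if sig[i:i + run_len].all():
        onset = times[i]
        break

print(f"ERPs diverge at ~{onset * 1000:.0f} ms" if onset is not None
      else "No reliable divergence found")
```

A published analysis would typically control for multiple comparisons across time points (e.g., with cluster-based permutation tests); the consecutive-sample run here merely conveys the idea of a "reliable divergence" such as the one reported at 220 ms after stimulation onset.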

Dates and versions

hal-03320604, version 1 (06-12-2021)

Cite

Takayuki Ito, Hiroki Ohashi, Vincent L. Gracco. Somatosensory contribution to audio-visual speech processing. Cortex, 2021, 143, pp. 195-204. ⟨10.1016/j.cortex.2021.07.013⟩. ⟨hal-03320604⟩