Electrophysiological evidence for the enhancement of gesture-speech integration by linguistic predictability during multimodal discourse comprehension.
| Content Provider | Europe PMC |
|---|---|
| Author | Hintz, Florian; Khoe, Yung Han; Strauß, Antje; Psomakas, Adam Johannes Alfredo; Holler, Judith |
| Abstract | In face-to-face discourse, listeners exploit cues in the input to generate predictions about upcoming words. Moreover, in addition to speech, speakers produce a multitude of visual signals, such as iconic gestures, which listeners readily integrate with incoming words. Previous studies have shown that processing of target words is facilitated when these are embedded in predictable compared to non-predictable discourses and when accompanied by iconic compared to meaningless gestures. In the present study, we investigated the interaction of both factors. We recorded the electroencephalogram from 60 Dutch adults while they watched videos of an actress producing short discourses. The stimuli consisted of an introductory and a target sentence; the latter contained a target noun. Depending on the preceding discourse, the target noun was either predictable or not. Each target noun was paired with an iconic gesture and a gesture that did not convey meaning. In both conditions, gesture presentation in the video was timed such that the gesture stroke preceded the onset of the spoken target by 130 ms. Our ERP analyses revealed independent facilitatory effects for predictable discourses and iconic gestures. However, the interactive effect of both factors demonstrated that target processing (i.e., gesture-speech integration) was facilitated most when targets were part of predictable discourses and accompanied by an iconic gesture. Our results thus suggest a strong intertwinement of linguistic predictability and non-verbal gesture processing, whereby listeners exploit predictive discourse cues to pre-activate verbal and non-verbal representations of upcoming target words. |
| Related Links | https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC9949912&blobtype=pdf |
| ISSN | 1530-7026 |
| Journal | Cognitive, Affective & Behavioral Neuroscience [Cogn Affect Behav Neurosci] |
| Volume Number | 23 |
| DOI | 10.3758/s13415-023-01074-8 |
| PubMed Central reference number | PMC9949912 |
| Issue Number | 2 |
| PubMed reference number | 36823247 |
| e-ISSN | 1531-135X |
| Language | English |
| Publisher | Springer US |
| Publisher Date | 2023-02-23 |
| Publisher Place | New York |
| Access Restriction | Open |
| Rights License | Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. © The Author(s) 2023 |
| Subject Keyword | Multimodal communication; Language comprehension; Gesture-speech integration; Iconic co-speech gestures |
| Content Type | Text |
| Resource Type | Article |
| Subject | Cognitive Neuroscience; Behavioral Neuroscience |