Taylor & Francis Group

Contextual speech rate influences morphosyntactic prediction and integration

dataset
Posted on 2019-12-18, 02:25. Authored by Greta Kaufeld, Wibke Naumann, Antje S. Meyer, Hans Rutger Bosker, Andrea E. Martin

Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue-integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we interpret as evidence of perceptual inference and the generation of predictions about upcoming morphosyntactic information. Additionally, we observed that early rate effects remained active in the presence of later conflicting lexical information. This result demonstrates that (1) contextual speech rate functions as a cue to morphosyntactic inferences, even in the presence of subsequent disambiguating information; and (2) listeners iteratively use multiple sources of information to draw inferences and generate predictions during speech comprehension. We discuss the implications of these findings for theories of language processing.

Funding

AEM was funded by the Netherlands Organization for Scientific Research (grant 016.Vidi.188.029) and by a Max Planck Research Group Award from the Max Planck Gesellschaft.
