Three eye-tracking experiments using the ‘visual-world’ paradigm are described that explore the basis by which thematic dependencies can be evaluated in advance of linguistic input that unambiguously signals those dependencies. Following Altmann and Kamide (1999), who found that selectional information conveyed by a verb can be used to anticipate an upcoming Theme, we attempt here to draw a more precise picture of the basis for such anticipatory processing. Our data from two studies in English and one in Japanese suggest that (a) verb-based information is not limited to anticipating the immediately following (grammatical) object, but can also anticipate later-occurring objects (e.g., Goals), (b) in combination with information conveyed by the verb, a pre-verbal argument (Agent) can constrain the anticipation of a subsequent Theme, and (c) in a head-final construction such as that typically found in Japanese, both syntactic and semantic constraints extracted from pre-verbal arguments can enable the anticipation, in effect, of a further forthcoming argument in the absence of their head (the verb). We suggest that such processing is the hallmark of an incremental processor that is able to draw on different sources of information (some non-linguistic) at the earliest possible opportunity to establish the fullest possible interpretation of the input at each moment in time.
Kamide, Y., Altmann, G. T. M., & Haywood, S. L. (2003). The time-course of prediction in incremental sentence processing: evidence from anticipatory eye movements. Journal of Memory and Language, 49(1), 133-156. https://doi.org/10.1016/S0749-596X(03)00023-8