Eye Movements During Comprehension of Spoken Scene Descriptions
| Content Provider | Semantic Scholar |
|---|---|
| Author | Spivey, Michael J.; Tyler, Melinda J.; Richardson, Daniel C.; Young, Ezekiel E. |
| Copyright Year | 2000 |
| Abstract | A recent eyetracking experiment has indicated that, while staring at a blank white display, participants engaged in imagery tend to make eye movements that mimic the directionality of spatial expressions in the speech stream (Spivey & Geng, 2000). This result is consistent with a spatial mental models account of language comprehension (e.g., Johnson-Laird, 1983), adds a motor component to evidence for activation of perceptual mechanisms during visual imagery (e.g., Kosslyn, Thompson, Kim, & Alpert, 1995), and fits with claims regarding the embodiment of cognition (e.g., Varela, Thompson, & Rosch, 1991). However, some methodological concerns remain. We report some preliminary observations, and a controlled experiment, in which these methodological concerns are resolved. We demonstrate that, even when the speech includes no instructions to imagine anything, and even when participants' eyes are closed, participants tend to make eye movements in the same direction (and especially along the same axis) as the described scene when listening to a spatially extended scene description.
Introduction: More than three decades ago, Donald O. Hebb (1968) suggested that the very same eye-movement scanpaths associated with viewing an object may be automatically triggered (via transcortical cell assemblies) when a person is imagining that object -- and some empirical support for this claim has recently been reported. When viewing a blank screen and being instructed to imagine a previously viewed block pattern, observers produced scanpaths that bore some resemblance to the scanpaths elicited during original viewing of the actual block pattern (Brandt & Stark, 1997).
Such oculomotor behavior in the absence of visual input is consistent with the notion that, when imagining or remembering an object or event, we often develop a mental representation of that object or event that has a distinctly spatial structure. This spatial format of representation is thus able to take advantage of properties inherent to Cartesian space, such as topography and metric relationships. During the construction and interrogation of such spatial mental models (e.g., Bower & Morrow, 1990; Bryant, 1997; Johnson-Laird, 1983, 1996), cognition often uses linguistic input to activate memory representations, and imagery may then use those memory representations to partially activate perceptual representations (e.g., Farah, 1995; Finke, 1986; Kosslyn et al., 1995). The present study demonstrates that, even in the absence of any visual stimulus at all, such perceptual simulations (Barsalou, 1999) often trigger corresponding oculomotor responses. In a sense, one might say that thinking of something often involves pretending to look at it. This finding contributes to the developing embodied view of the mind (e.g., Ballard, Hayhoe, Pook, & Rao, 1997; Brooks, 1995; Varela, Thompson, & Rosch, 1991), in which an adequate characterization of cognition requires special attention to the repertoire of actions available to the organism or agent.
Looking at Objects that Aren't There: In a recent study, Spivey and Geng (2000, Experiment 1) had participants simply listen to pre-recorded instructions to imagine visual scenes while looking at a blank white projection screen and wearing a headband-mounted eyetracker. Each of the descriptions had a specific directionality (rightward, leftward, upward, or downward) to the manner in which new objects or events were introduced into the scene. In addition, a control scene description was presented, in which no particular directionality was present.
Pilot results with this methodology produced eye movement patterns very much in accordance with the directionality of the scene description; however, most participants developed rather accurate suspicions of our experimental hypothesis. Although eye movements are relatively automatic, and usually not very susceptible to voluntary control, the concern remained that participants may not have produced such behavior if they hadn't known that their eye movements were being recorded. To avoid potential strategy effects, we introduced a sham task (of following instructions to move objects around on a table) and referred to the imagery session as a break from the experiment during which the eyetracker would be turned off (but "don't take off the headband, because then we'd have to recalibrate the tracker when we return to the experiment"). Although two participants suspected that their eyes were still being tracked, and two participants closed their eyes during the imagery session, the remaining six participants produced eye movement patterns that were remarkably consistent with the directionality of the scene descriptions. Figure 1 shows example data from the Control (left panel) and Rightward (right panel) scene descriptions. |
| File Format | PDF; HTM/HTML |
| Volume Number | 22 |
| Alternate Webpage(s) | https://cloudfront.escholarship.org/dist/prd/content/qt7z34j8zw/qt7z34j8zw.pdf?t=op35eh |
| Alternate Webpage(s) | http://eyethink.org/resources/lab_papers/Spivey2000_Eye_movements_duri.pdf |
| Alternate Webpage(s) | http://www.eyethink.org/resources/lab_papers/Spivey2000_Eye_movements_duri.pdf |
| Alternate Webpage(s) | http://www.rdg.ac.uk/eyethink/publications_assets/spiveyetal2000.pdf |
| Alternate Webpage(s) | http://www.cis.upenn.edu/~ircs/cogsci2000/PRCDNGS/SPRCDNGS/PAPERS/SPRITYYO.PDF |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |