Computational models of human eye-movement behavior
| Field | Value |
|---|---|
| Content Provider | NASA Technical Reports Server (NTRS) |
| Author | Eckstein, Miguel; Stone, Lee; Beutter, Brent; Lorenceau, Jean |
| Organization | The National Aeronautics and Space Administration (NASA) |
| Copyright Year | 2000 |
| Abstract | Humans interact with visual displays not by passively absorbing the information like a fixed camera, but by actively searching for areas with relevant information, and by following the motion of features of interest. The specific aim of this project is to develop and test computational models of human eye-movement control with particular emphasis on two types of eye-movement behaviors: search saccades and smooth pursuit. The overall goal is to incorporate the knowledge of eye-movement behavior acquired in our laboratory into computational models that can serve as design tools in the development of safer, more effective visual displays, interfaces, and training methods, matched to human abilities and limitations. Most current models of human vision have focused on the passive ability to detect, discriminate, or identify targets in noise in carefully controlled laboratory conditions in which eye movements are suppressed. However, when humans interact with a display during aerospace tasks (for example, air traffic controllers monitoring aircraft), their eyes jump from one location to another using rapid eye movements (saccades) to point central gaze, the region of highest resolution, at the current object of interest. This active search process greatly enhances visual performance. Two major categories of models have been proposed to explain search performance: guided-search and signal-detection-theory models. Unfortunately, the former category predicts reaction time, while the latter predicts localization accuracy. To enable a direct comparison of these two models, an extension to the guided-search model that allows it to predict localization performance was developed. Both models are being tested to determine which is the better predictor of human performance. When display targets move, humans generate smooth tracking eye movements (pursuit) to follow the motion of the current object of interest. This ability is crucial when using a display to perform tasks involving motion estimation (for example, landing the shuttle by aligning a moving target with a reference within a heads-up display). Current pursuit models assume that pursuit merely minimizes the physical image motion on the back of the eye (the retina). This study has demonstrated that this simple view cannot explain the full range of human pursuit behaviors, especially the ability to track targets accurately even when one's view is partially blocked. The new control framework for pursuit, shown schematically in the figure, is consistent with new behavioral data from this study, as well as what is known of the neurophysiology and anatomy of the primate brain. This research suggests that pursuit is driven by an estimate of target motion constructed at the highest level of the brain (the cortex) and shared with perception, rather than by the simple quasireflexive integration of lower-level retinal-error signals. (Illustrative sketches of both the search-localization comparison and the pursuit contrast follow this table.) |
| File Size | 2,480,009 bytes (≈2.4 MB) |
| File Format | |
| Alternate Webpage(s) | http://www.archive.org/details/nasa_techdoc_20040077138 |
| Archival Resource Key | ark:/13960/t45q5t593 |
| Language | English |
| Publisher Date | 2000-12-01 |
| Publisher Institution | Ames Research Center |
| Access Restriction | Open |
| Subject Keyword | Exhaust Emission; Atmospheric Effects; Air Pollution; Pollution Monitoring; Airborne Equipment; Aircraft Engines; Nitrogen Oxides; Ozone; Hydroxides; NASA Technical Reports Server (NTRS); Aerodynamics; Aircraft; Aerospace Engineering; Aerospace; Aeronautic; Space Science |
| Content Type | Text |
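
The abstract contrasts guided-search and signal-detection-theory (SDT) accounts of visual search, and notes that the guided-search model had to be extended before both could be compared on localization accuracy. As a rough illustration of the SDT side of that comparison, here is a minimal Monte Carlo sketch of a "max rule" observer who saccades to the display location with the largest noisy response. The Gaussian-noise assumption, the function name, and all parameter values are illustrative choices, not details taken from the report.

```python
# Minimal sketch of an SDT "max rule" localization model, assuming M display
# locations with independent unit-variance Gaussian noise and a target that
# adds a d' signal boost at one location. All values are illustrative.
import numpy as np

def sdt_localization_accuracy(d_prime, n_locations, n_trials=100_000, seed=0):
    """Probability that the max-rule observer points to the target location."""
    rng = np.random.default_rng(seed)
    # Noisy response at every location; target (index 0) gets the d' boost.
    responses = rng.standard_normal((n_trials, n_locations))
    responses[:, 0] += d_prime
    chosen = responses.argmax(axis=1)   # observer saccades to the max response
    return np.mean(chosen == 0)

for m in (2, 4, 8, 16):
    acc = sdt_localization_accuracy(d_prime=2.0, n_locations=m)
    print(f"{m:2d} locations, d'=2.0 -> localization accuracy {acc:.3f}")
```

Accuracy falls as the number of candidate locations grows; this set-size effect is the kind of localization prediction that the extended guided-search model also had to produce so the two model classes could be tested against the same human data.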
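The abstract's central claim about pursuit is that a controller fed only by retinal slip (target velocity minus eye velocity) cannot keep tracking when the view is blocked, whereas one driven by an internal, cortical estimate of target motion can. The toy simulation below makes that contrast concrete. It is a sketch under assumed dynamics (a first-order plant with a 100 ms time constant, a loop gain of 9, and a 400 ms occlusion), not the control framework from the report's figure.

```python
# Toy contrast of two pursuit architectures: (a) a quasireflexive loop driven
# only by retinal slip, and (b) a loop driven by an internal target-velocity
# estimate that is simply held while the target is occluded. All parameters
# are illustrative assumptions.
import numpy as np

DT, TAU, GAIN = 0.01, 0.10, 9.0        # 10 ms steps; assumed plant time constant and gain
T = np.arange(0.0, 2.0, DT)
target_vel = np.full_like(T, 10.0)     # target drifts rightward at 10 deg/s
occluded = (T >= 0.8) & (T < 1.2)      # view blocked for 400 ms

def simulate(use_estimate):
    eye = np.zeros_like(T)
    v_hat = 0.0                                          # internal target-velocity estimate
    for i in range(1, len(T)):
        visible = not occluded[i - 1]
        slip = (target_vel[i - 1] - eye[i - 1]) if visible else 0.0
        if visible:
            v_hat = eye[i - 1] + slip                    # reconstruct target motion (eye + slip)
        drive = v_hat if use_estimate else GAIN * slip   # (b) estimate vs. (a) raw slip
        eye[i] = eye[i - 1] + (DT / TAU) * (drive - eye[i - 1])
    return eye

slip_only = simulate(use_estimate=False)
estimate_based = simulate(use_estimate=True)
i_end = np.searchsorted(T, 1.2) - 1    # last occluded sample
print(f"eye velocity at end of occlusion: slip-only {slip_only[i_end]:5.2f} deg/s, "
      f"estimate-based {estimate_based[i_end]:5.2f} deg/s (target 10.00 deg/s)")
```

By the end of the occlusion the slip-only loop has decayed almost to rest, while the estimate-driven loop is still moving at close to target velocity; that is the qualitative behavioral difference the abstract uses to argue for a cortical target-motion estimate over quasireflexive integration of retinal-error signals.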