Imaging Science Seminar: Michele Cox
The Ever-moving Eye Actively Controls Visual Input in Natural Tasks
Michele Cox, Ph.D.
Postdoctoral Researcher, Center for Visual Sciences, University of Rochester
Abstract:
A popular view of the early visual system compares the eye to a camera, where the camera lens represents the eye’s optics and the film acts as the retina. In this analogy, the quality of the retinal image is constant in time and space, like a snapshot. Though an obvious simplification, this analogy matches our perceptual experience of a static, high-resolution visual world. However, our eyes are inextricably linked to our motor system, meaning that they are never still. In fact, even when we hold our gaze steady on an object, our eyes move at a scale that would be readily apparent were an external object to move the same way. For a passive system, this movement might be considered “noise,” akin to the blur in an image captured by a shaky camera. However, mounting evidence suggests that vision, like other senses, relies on movement to actively probe and encode the visual environment. In this talk, I will review theories of space-time encoding and models used to predict the spatial scales at which visual sensitivity can be amplified by motor behavior. I will then demonstrate how these models match behavior observed during a variety of everyday tasks, such as sorting small objects, reading fine print, and finding a target in a crowded scene. Finally, I will discuss the implications of motor control for eye development.
Speaker Bio:
Michele A. Cox is a postdoctoral researcher in the Center for Visual Sciences at the University of Rochester. She earned a Ph.D. from Vanderbilt University. Michele’s research interests revolve around how the visual system integrates sensory information with ongoing perceptual, motor, and cognitive processes. Her graduate work focused on the neural underpinnings of sensory integration in early visual cortex, leveraging large-scale, multielectrode intracranial electrophysiological recordings to disentangle feedforward and feedback signals in cognitive and perceptual tasks. In her postdoctoral work, she builds on this experience to study sensory integration during natural viewing. Her research combines computational modeling and human psychophysics with real-time control of retinal stimulation to probe the spatiotemporal dynamics of visual integration in active observers.
Intended Audience:
Beginners, undergraduates, graduate students, and experts; anyone with an interest in the topic.
Event Snapshot
When and Where:
Who: Open to the Public
Interpreter Requested? No