Jeff B. Pelz
Carlson Center for Imaging Science
Rochester Institute of Technology, USA
Semantic Analysis of Mobile Eye Tracking Data
Researchers using laboratory-based eye tracking systems now have access to sophisticated data analysis tools to reduce raw gaze data, but the huge data sets coming from wearable eye trackers cannot be analyzed with the same tools. The very lack of constraints that makes mobile systems such powerful tools prevents analysis tools designed for static or tracked observers from working with freely moving observers.
Proposed solutions have included infrared markers hidden in the scene to provide reference points, Simultaneous Localization and Mapping (SLAM), and multi-view geometry techniques that build models from multiple views of a scene. These methods map fixations onto predefined or extracted 3D scene models, allowing traditional static-scene analysis tools to be used.
Another approach to analyzing mobile eye tracking data is to code fixations with semantically meaningful labels rather than mapping the fixations to fixed 3D locations. This offers two important advantages over the model-based methods: semantic mapping allows coding of dynamic scenes without the need to explicitly track objects, and it provides an inherently flexible and extensible object-based coding scheme.
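To illustrate the idea, here is a minimal sketch of object-based semantic coding. The data and label names (`mug`, `face`) are hypothetical, and the structure is an assumption rather than the speaker's actual coding scheme: each fixation carries a semantic label instead of a 3D coordinate, so analyses such as per-object dwell time need no scene model, and new labels can be added without remapping anything.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: int      # fixation onset, milliseconds into the recording
    duration_ms: int   # fixation duration in milliseconds
    label: str         # semantic label, e.g. "mug" or "face" (hypothetical)

# Hypothetical coded sequence from a mobile eye tracking recording
fixations = [
    Fixation(start_ms=0,   duration_ms=250, label="mug"),
    Fixation(start_ms=260, duration_ms=400, label="face"),
    Fixation(start_ms=700, duration_ms=180, label="mug"),
]

# Aggregate dwell time per semantic label -- no 3D scene model required
dwell = defaultdict(int)
for f in fixations:
    dwell[f.label] += f.duration_ms

print(dict(dwell))  # {'mug': 430, 'face': 400}
```

Because the coding scheme is just a set of labels, extending it to a new object class is a matter of adding a label, whereas a model-based pipeline would need the new object tracked or reconstructed in 3D.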
About the speaker
Jeff B. Pelz is a professor of Imaging Science and Co-director of the Multidisciplinary Vision Research Laboratory at the Rochester Institute of Technology (RIT) in Rochester, NY, USA. He received a Ph.D. in Brain and Cognitive Science from the University of Rochester, where he began his work on gaze and behavior in the 1990s. His research has focused on the development and application of robust wearable eye tracking systems that allow the study of complex behavior in natural environments, and on data-analysis tools to handle the resulting datasets.