“Ray tracing to get 3D fixations on VOIs from portable eye tracker videos” by Munn and Pelz

  ©Susan M. Munn and Jeff Pelz

Abstract:


    Our portable video-based monocular eye tracker consists of a headgear with two cameras that capture video of the observer's right eye and of the scene from the observer's perspective (Figure 1a). With this eye tracker, we typically obtain a position in each frame of the scene video that represents the observer's point of regard (POR) (Figure 1b, without the bottom-left box). These POR positions are in the image coordinate system of the scene camera, which moves with the observer's head; they therefore do not tell us where the person is looking in an exocentric reference frame. Currently, these videos are analyzed manually, frame by frame. Our aim is to automatically determine how long the observer spends fixating specific objects in the scene and in what order those objects are fixated.
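
    The title names the core operation, ray tracing from the scene camera to 3D volumes of interest (VOIs), without spelling out its mechanics. The sketch below illustrates the basic geometry under stated assumptions: a calibrated scene camera with intrinsic matrix K, a per-frame camera pose (R, t) recovered by some external means (e.g., tracked scene features), and spherical VOIs. The names pixel_to_ray and intersect_sphere and all numeric values are illustrative, not the authors' implementation.

        import numpy as np

        def pixel_to_ray(por_px, K, R, t):
            """Back-project a POR pixel into a world-space ray.

            K is the 3x3 scene-camera intrinsic matrix; (R, t) map world
            coordinates to camera coordinates, so the camera center in
            world coordinates is -R.T @ t.
            """
            u, v = por_px
            d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction in camera frame
            d_world = R.T @ d_cam                             # rotate into world frame
            d_world /= np.linalg.norm(d_world)
            origin = -R.T @ t                                 # camera center in world frame
            return origin, d_world

        def intersect_sphere(origin, direction, center, radius):
            """Distance along the ray to a spherical VOI, or None on a miss."""
            oc = origin - center
            b = 2.0 * direction @ oc            # quadratic coefficients, with a = 1
            c = oc @ oc - radius ** 2
            disc = b * b - 4.0 * c
            if disc < 0.0:
                return None                     # gaze ray misses this VOI
            s = (-b - np.sqrt(disc)) / 2.0      # nearer of the two roots
            return s if s > 0.0 else None

        if __name__ == "__main__":
            # Toy frame: identity pose, POR at the image center, one VOI 2 m ahead.
            K = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
            origin, d = pixel_to_ray((320.0, 240.0), K, np.eye(3), np.zeros(3))
            print(intersect_sphere(origin, d, np.array([0.0, 0.0, 2.0]), 0.25))  # 1.75

    With per-frame poses available, tagging a fixation to a VOI reduces to casting one such ray per fixation and keeping the nearest positive intersection; accumulating frame timestamps per VOI then yields fixation durations and their order.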
