“View-Dependent Texture Mapping of Video for Realistic Avatars in Collaborative Virtual Environments” by Rajan, Keenan, Sandin, DeFanti and Subramanian

  • © Vivek Rajan, Damin M. Keenan, Daniel (Dan) J. Sandin, Thomas (Tom) A. DeFanti, and Satheesh G. Subramanian

Interest Area:


    Gaming & Interactive

Title:

    View-Dependent Texture Mapping of Video for Realistic Avatars in Collaborative Virtual Environments

Session/Category Title: Interaction Mechanisms


Presenter(s)/Author(s):

    Vivek Rajan, Damin M. Keenan, Daniel (Dan) J. Sandin, Thomas (Tom) A. DeFanti, and Satheesh G. Subramanian

Abstract:


    This sketch presents how view-dependent texture mapping can be used to produce realistic avatars and, in the process, eliminate constraints posed by background and lighting requirements. A two-step approach is taken to achieve realistic 3D video avatars using projective texture mapping of video.
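
    As a minimal sketch of the projective-texturing step (in the spirit of references 3 and 4, not the authors' actual implementation), the Python/NumPy function below maps avatar mesh vertices through a calibrated pinhole camera into normalized video-frame texture coordinates; the function name, parameters, and visibility test are illustrative assumptions. A view-dependent renderer would evaluate this per camera and blend the video textures of the cameras marked visible for each vertex.

        # Sketch only: project world-space avatar vertices through a calibrated
        # video camera (intrinsics K, rotation R, translation t, e.g. obtained
        # via Tsai calibration) to find the (u, v) at which the current video
        # frame is sampled for each vertex.
        import numpy as np

        def projective_texcoords(vertices, K, R, t, frame_w, frame_h):
            """vertices: (N, 3) world-space avatar mesh vertices.
            Returns (N, 2) texture coordinates in [0, 1] and a visibility mask."""
            cam = vertices @ R.T + t                  # world -> camera coordinates
            pix = cam @ K.T                           # pinhole projection (homogeneous)
            uv = pix[:, :2] / pix[:, 2:3]             # perspective divide -> pixels
            uv = uv / np.array([frame_w, frame_h])    # pixels -> [0, 1] texture space
            # Vertices behind the camera or projecting outside the frame should
            # fall back to another camera in a view-dependent blend.
            visible = (cam[:, 2] > 0) & np.all((uv >= 0.0) & (uv <= 1.0), axis=1)
            return uv, visible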

References:


    1. Cruz-Neira, C., Sandin, D.J., DeFanti, T.A., Kenyon, R.V., & Hart, J.C. (1992). The CAVE: Audio visual experience automatic virtual environment. Communications of the ACM, 35(6), 65-72.
    2. Debevec, P.E., Taylor, C.J., & Malik, J. (1996). Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. Proceedings of SIGGRAPH 96, 11-20.
    3. Tsai, R. (1986). An efficient and accurate camera calibration technique for 3D machine vision. Proceedings of IEEE CVPR 1986.
    4. Segal, M., Korobkin, C., van Widenfelt, R., Foran, J., & Haeberli, P. (1992). Fast shadows and lighting effects using texture mapping. Proceedings of SIGGRAPH 92.

