“Eyes alive” by Lee, Badler and Badler

  • © Sooha Park Lee, Jeremy B. Badler, and Norman I. Badler


Conference:

    SIGGRAPH 2002

Type:


Title:

    Eyes alive

Presenter(s)/Author(s):

    Sooha Park Lee, Jeremy B. Badler, and Norman I. Badler

Abstract:


    For an animated human face model to appear natural, it should produce eye movements consistent with human ocular behavior. During face-to-face conversational interactions, the eyes convey conversational turn-taking and agent thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of saccades and statistical models of eye-tracking data. Face animations using stationary eyes, eyes with random saccades only, and eyes with statistically derived saccades are compared to evaluate whether they appear natural and effective while communicating.
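
    The contrast the abstract draws between purely random saccades and statistically derived ones can be sketched in a few lines of code. The snippet below is an illustrative sketch, not the paper's implementation: the exponential magnitude distribution, the mode-dependent mean intervals, and the main-sequence constants (about 2.2 ms per degree plus a roughly 21 ms baseline, after Becker [4]) are stand-in assumptions, whereas the paper fits the corresponding parameters to its eye-tracking data.

    # Minimal sketch of statistically driven saccade sampling (illustrative only;
    # distributions and constants are assumptions, not the paper's fitted values).
    import random

    MAIN_SEQ_SLOPE_MS_PER_DEG = 2.2   # duration grows ~2.2 ms per degree of magnitude
    MAIN_SEQ_INTERCEPT_MS = 21.0      # ~21 ms baseline saccade duration

    def sample_saccade(mode="talking"):
        """Draw one saccade: (magnitude deg, direction deg, duration ms, gap ms)."""
        # Magnitude: most natural saccades are under ~15 deg (Bahill et al. [3]);
        # an exponential with a cap stands in for an empirical histogram.
        magnitude = min(random.expovariate(1.0 / 4.0), 15.0)
        # Direction: placeholder uniform draw; a statistical model would use
        # direction frequencies measured from eye-tracking data.
        direction = random.uniform(0.0, 360.0)
        # Duration from the saccade "main sequence" (linear in magnitude).
        duration = MAIN_SEQ_INTERCEPT_MS + MAIN_SEQ_SLOPE_MS_PER_DEG * magnitude
        # Inter-saccade interval: talking and listening modes get different means;
        # these numbers are illustrative, not measured values.
        mean_gap = 500.0 if mode == "talking" else 900.0
        gap = random.expovariate(1.0 / mean_gap)
        return magnitude, direction, duration, gap

    def random_only_saccade():
        """The 'random saccades only' comparison condition: no statistical structure."""
        return (random.uniform(0.0, 15.0), random.uniform(0.0, 360.0),
                random.uniform(20.0, 60.0), random.uniform(200.0, 2000.0))

    if __name__ == "__main__":
        for _ in range(3):
            print("statistical:", sample_saccade("listening"))
            print("random-only:", random_only_saccade())

    The point of the comparison is visible in the two samplers: the random-only condition ignores the magnitude-duration coupling and any talking/listening timing differences, which is precisely the structure a statistically derived model adds.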

References:


    1. ARGYLE, M., AND COOK, M. 1976. Gaze and Mutual Gaze. Cambridge University Press, London.
    2. ARGYLE, M., AND DEAN, J. 1965. Eye-contact, distance and affiliation. Sociometry 28, 289-304.
    3. BAHILL, A., ADLER, D., AND STARK, L. 1975. Most naturally occurring human saccades have magnitudes of 15 deg or less. Investigative Ophthalmol., 468-469.
    4. BECKER, W. 1989. Metrics. In The Neurobiology of Saccadic Eye Movements, R. H. Wurtz and M. E. Goldberg (eds.), chapter 2, 13-67.
    5. BEELER, G. W. 1965. Stochastic processes in the human eye movement control system. PhD thesis, California Institute of Technology, Pasadena, CA.
    6. BIZZI, E. 1972. Central programming and peripheral feedback during eye-head coordination in monkeys. Bibl. Ophthal. 82, 220-232.
    7. BLANZ, V., AND VETTER, T. 1999. A morphable model for the synthesis of 3D faces. In Computer Graphics (SIGGRAPH ’99 Proceedings), 75-84.
    8. BRAND, M. 1999. Voice puppetry. In Computer Graphics (SIGGRAPH ’99 Proceedings), 21-28.
    9. CANNY, J. 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8, 679-698.
    10. CASSELL, J., PELACHAUD, C., BADLER, N., STEEDMAN, M., ACHORN, B., BECKET, T., DOUVILLE, B., PREVOST, S., AND STONE, M. 1994. Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. In Computer Graphics (SIGGRAPH ’94 Proceedings), 413-420.
    11. CASSELL, J., TORRES, O., AND PREVOST, S. 1999. Turn taking vs. discourse structure: How best to model multimodal conversation. In Machine Conversations, Y. Wilks (ed.), 143-154.
    12. CASSELL, J., VILHJALMSSON, H., AND BICKMORE, T. 2001. BEAT: The Behavior Expression Animation Toolkit. In Computer Graphics (SIGGRAPH ’01 Proceedings), 477-486.
    13. CHOPRA-KHULLAR, S., AND BADLER, N. 1999. Where to look? Automating visual attending behaviors of virtual human characters. In Autonomous Agents Conf.
    14. COLBURN, R., COHEN, M., AND DRUCKER, S. 2000. Avatar mediated conversational interfaces. Microsoft Technical Report.
    15. DECARLO, D., METAXAS, D., AND STONE, M. 1998. An anthropometric face model using variational techniques. In Computer Graphics (SIGGRAPH ’98 Proceedings), 67-74.
    16. DUNCAN, S. 1974. Some signals and rules for taking speaking turns in conversations. Oxford University Press, New York.
    17. ESSA, I., AND PENTLAND, A. 1995. Facial expression recognition using a dynamic model and motion energy. In ICCV95, 360-367.
    18. FAIGIN, G. 1990. The Artist’s Complete Guide to Facial Expression. Watson-Guptill Publications, New York.
    19. GUENTER, B., GRIMM, C., AND WOOD, D. 1998. Making faces. In Computer Graphics (SIGGRAPH ’98 Proceedings), 55-66.
    20. ISO/IEC JTC 1/SC 29/WG11 N3055. 1999. Text for CD 14496-1 Systems MPEG-4 Manual.
    21. ISO/IEC JTC 1/SC 29/WG11 N3056. 1999. Text for CD 14496-2 Systems MPEG-4 Manual.
    22. KALRA, P., MANGILI, A., MAGNENAT-THALMANN, N., AND THALMANN, D. 1992. Simulation of muscle actions using rational free form deformations. In Proceedings Eurographics ’92, Computer Graphics Forum, Vol. 2, No. 3, 59-69.
    23. KENDON, A. 1967. Some functions of gaze direction in social interaction. Acta Psychologica 32, 1-25.
    24. LEE, Y., WATERS, K., AND TERZOPOULOS, D. 1995. Realistic modeling for facial animation. In Computer Graphics (SIGGRAPH ’95 Proceedings), 55-62.
    25. LEIGH, R., AND ZEE, D. 1991. The Neurology of Eye Movements, 2nd ed. F. A. Davis.
    26. PARKE, F. 1974. Parametrized Models for Human Faces. PhD thesis, University of Utah.
    27. PELACHAUD, C., BADLER, N., AND STEEDMAN, M. 1996. Generating facial expressions for speech. Cognitive Science 20, 1, 1-46.
    28. PETAJAN, E. 1999. Very low bitrate face animation coding in MPEG-4. In Encyclopedia of Telecommunications, Volume 17, 209-231.
    29. PIGHIN, F., HECKER, J., LISCHINSKI, D., SZELISKI, R., AND SALESIN, D. 1998. Synthesizing realistic facial expressions from photographs. In Computer Graphics (SIGGRAPH ’98 Proceedings), 75-84.
    30. PLATT, S., AND BADLER, N. 1981. Animating facial expressions. In Computer Graphics (SIGGRAPH ’81 Proceedings), 279-288.
    31. VERTEGAAL, R., VAN DER VEER, G., AND VONS, H. 2000. Effects of gaze on multiparty mediated communication. In Proceedings of Graphics Interface 2000, Canadian Human-Computer Communications Society / Morgan Kaufmann Publishers, Montreal, Canada, 95-102.
    32. VERTEGAAL, R., SLAGTER, R., VAN DER VEER, G., AND NIJHOLT, A. 2000. Why conversational agents should catch the eye. In Summary of ACM CHI 2000 Conference on Human Factors in Computing Systems.
    33. VERTEGAAL, R., SLAGTER, R., VAN DER VEER, G., AND NIJHOLT, A. 2001. Eye gaze patterns in conversations: There is more to conversational agents than meets the eyes. In ACM CHI 2001 Conference on Human Factors in Computing Systems, 301-308.
    34. WARABI, T. 1977. The reaction time of eye-head coordination in man. Neurosci. Lett. 6, 47-51.
    35. WATERS, K. 1987. A muscle model for animating three-dimensional facial expression. In Computer Graphics (SIGGRAPH ’87 Proceedings), 17-24.

