“Augmenting physical avatars using projector-based illumination” by Bermano, Brüschweiler, Grundhöfer, Iwai, Bickel, et al. …


Conference:

    SIGGRAPH Asia 2013


Type(s):

    Technical Papers

Title:

    Augmenting physical avatars using projector-based illumination

Session/Category Title:   Avatars


Presenter(s)/Author(s):

    Bermano, Brüschweiler, Grundhöfer, Iwai, Bickel, et al.

Abstract:


    Animated animatronic figures are a unique way to give physical presence to a character. However, their movement and expressions are often limited due to mechanical constraints. In this paper, we propose a complete process for augmenting physical avatars using projector-based illumination, significantly increasing their expressiveness. Given an input animation, the system decomposes the motion into low-frequency motion that can be physically reproduced by the animatronic head and high-frequency details that are added using projected shading. At the core is a spatio-temporal optimization process that compresses the motion in gradient space, ensuring faithful motion replay while respecting the physical limitations of the system. We also propose a complete multi-camera and projection system, including a novel defocused projection and subsurface scattering compensation scheme. The result of our system is a highly expressive physical avatar that features facial details and motion otherwise unattainable due to physical constraints.
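
    The decomposition described in the abstract, a low-frequency component that the animatronic head replays plus a high-frequency residual added through projected shading, can be illustrated with a minimal sketch. This is not the paper's actual spatio-temporal gradient-space optimization, which also enforces the hardware's physical limits; it is only a conceptual Gaussian low-pass split, and the function name, array shapes, and smoothing parameter below are assumptions for illustration.

        # Minimal sketch (not the paper's method): split an input facial animation into
        # a low-frequency part the animatronic head can replay and a high-frequency
        # residual to be conveyed via projected shading. Shapes/names are assumptions.
        import numpy as np

        def split_animation(vertex_tracks, sigma=4.0):
            """vertex_tracks: (T, V, 3) per-vertex positions over T frames.
            Returns (low, high) with low + high == vertex_tracks."""
            T = vertex_tracks.shape[0]
            t = np.arange(T)
            # Temporal Gaussian low-pass filter, applied per vertex and axis.
            kernel = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)
            kernel /= kernel.sum(axis=1, keepdims=True)             # normalize rows
            low = np.einsum('ts,svc->tvc', kernel, vertex_tracks)   # replayed physically
            high = vertex_tracks - low                               # added as projected detail
            return low, high

    In the paper's formulation the low-frequency motion is instead obtained by compressing the animation in gradient space under the head's actuation constraints; the sketch only conveys the split itself.
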

References:


    1. Aliaga, D. G., Yeung, Y. H., Law, A., Sajadi, B., and Majumder, A. 2012. Fast high-resolution appearance editing using superimposed projections. ACM Trans. Graph. 31, 2, 13:1–13:13.
    2. Beeler, T., Hahn, F., Bradley, D., Bickel, B., Beardsley, P., Gotsman, C., Sumner, R. W., and Gross, M. 2011. High-quality passive facial performance capture using anchor frames. In ACM Trans. Graph., vol. 30, ACM, 75.
    3. Bickel, B., Kaufmann, P., Skouras, M., Thomaszewski, B., Bradley, D., Beeler, T., Jackson, P., Marschner, S., Matusik, W., and Gross, M. 2012. Physical face cloning. ACM Trans. Graph. 31, 4.
    4. Bimber, O., and Emmerling, A. 2006. Multifocal projection: A multiprojector technique for increasing focal depth. Trans. Visualization and Computer Graphics 12, 4, 658–667.
    5. Bimber, O., Iwai, D., Wetzstein, G., and Grundhöfer, A. 2007. The Visual Computing of Projector-Camera Systems. In Proc. Eurographics (State-of-the-Art Report), 23–46.
    6. Blanz, V., and Vetter, T. 1999. A morphable model for the synthesis of 3d faces. In Proc. Computer graphics and interactive techniques, 187–194.
    7. Chuang, E., and Bregler, C. 2002. Performance driven facial animation using blendshape interpolation. Computer Science Technical Report, Stanford University 2, 2, 3.
    8. Debevec, P. E., and Malik, J. 1997. Recovering high dynamic range radiance maps from photographs. In Proc. of ACM SIGGRAPH, 369–378.
    9. D’Eon, E., and Irving, G. 2011. A quantized-diffusion model for rendering translucent materials. In ACM Trans. Graph., vol. 30, 56.
    10. Ekman, P., and Friesen, W. V. 1977. Facial action coding system.
    11. Ghosh, A., and Debevec, P. 2008. Estimating multi-layer scattering in faces using direct-indirect separation. In ACM SIGGRAPH 2008 talks, SIGGRAPH ’08, 2:1–2:1.
    12. Grosse, M., Wetzstein, G., Grundhöfer, A., and Bimber, O. 2010. Coded aperture projection. ACM Trans. Graph. 29, 3, 22:1–22:12.
    13. Grundhöfer, A. 2013. Practical non-linear photometric projector compensation. In 2nd Int. Workshop on Computational Cameras and Display.
    14. Hartley, R. I., and Zisserman, A. 2004. Multiple View Geometry in Computer Vision. Cambridge University Press.
    15. Harville, M., Culbertson, B., Sobel, I., Gelb, D., Fitzhugh, A., and Tanguay, D. 2006. Practical methods for geometric and photometric correction of tiled projector displays. In Computer Vision and Pattern Recognition Workshop, 5–5.
    16. Havaldar, P., Pighin, F., and Lewis, J. 2006. Performance driven facial animation. In ACM SIGGRAPH Courses.
    17. Ishiguro, H. 2006. Interactive humanoids and androids as ideal interfaces for humans. In Proc. International Conference on Intelligent user interfaces, 2–9.
    18. Kazhdan, M., Bolitho, M., and Hoppe, H. 2006. Poisson surface reconstruction. In Proc. SGP.
    19. Kuratate, T., Matsusaka, Y., Pierce, B., and Cheng, G. 2011. Mask-bot: A life-size robot head using talking head animation for human-robot communication. In Int. Conference on Humanoid Robots (Humanoids), 99–104.
    20. Law, A. J., Aliaga, D. G., Sajadi, B., Majumder, A., and Pizlo, Z. 2011. Perceptually based appearance modification for compliant appearance editing. Comput. Graph. Forum 30, 8, 2288–2300.
    21. Li, H., Sumner, R. W., and Pauly, M. 2008. Global correspondence optimization for non-rigid registration of depth scans. In Computer Graphics Forum, vol. 27, 1421–1430.
    22. Li, H., Weise, T., and Pauly, M. 2010. Example-based facial rigging. ACM Trans. Graph. 29, 4, 32.
    23. Lincoln, P., Welch, G., Nashel, A., Ilie, A., State, A., and Fuchs, H. 2009. Animatronic shader lamps avatars. In Proc. Int. Symposium on Mixed and Augmented Reality, 27–33.
    24. Lipman, Y., Sorkine, O., Levin, D., and Cohen-Or, D. 2005. Linear rotation-invariant coordinates for meshes. ACM Trans. Graph. 24, 3, 479–487.
    25. Misawa, K., Ishiguro, Y., and Rekimoto, J. 2012. Ma petite cherie: what are you looking at?: a small telepresence system to support remote collaborative work for intimate communication. In Proc. Augmented Human International Conference, ACM, New York, NY, USA, AH ’12, 17:1–17:5.
    26. Moubayed, S. A., Edlund, J., and Beskow, J. 2012. Taming mona lisa: Communicating gaze faithfully in 2d and 3d facial projections. ACM Trans. Interact. Intell. Syst. 1, 2 (Jan.).
    27. Nagase, M., Iwai, D., and Sato, K. 2011. Dynamic defocus and occlusion compensation of projected imagery by model-based optimal projector selection in multi-projection environment. Virtual Real. 15, 2–3 (June), 119–132.
    28. Nishio, S., Ishiguro, H., and Hagita, N. 2007. Humanoid Robots: New Developments. I-Tech, ch. Geminoid: Teleoperated Android of an Existing Person.
    29. Noh, J.-y., and Neumann, U. 2001. Expression cloning. In Proc. Conf. on Comp. Graph. and Int. Techniques, 277–288.
    30. Oyamada, Y., and Saito, H. 2008. Defocus blur correcting projector-camera system. In Proc. Int. Conference on Advanced Concepts for Intelligent Vision Systems, 453–464.
    31. Raskar, R., Welch, G., Cutts, M., Lake, A., Stesin, L., and Fuchs, H. 1998. The office of the future: A unified approach to image-based modeling and spatially immersive displays. In Proc. Conf. on Comp. Graph. and Int. Techniques, 179–188.
    32. Raskar, R., Welch, G., Low, K.-L., and Bandyopadhyay, D. 2001. Shader lamps: Animating real objects with image-based illumination. In Proc. Eurographics Workshop on Rendering Techniques, 89–102.
    33. Schaefer, S., McPhail, T., and Warren, J. 2006. Image deformation using moving least squares. In ACM Trans. Graph., vol. 25, 533–540.
    34. Sen, P., Chen, B., Garg, G., Marschner, S. R., Horowitz, M., Levoy, M., and Lensch, H. 2005. Dual photography. ACM Trans. Graph. 24, 3, 745–755.
    35. Seol, Y., Lewis, J., Seo, J., Choi, B., Anjyo, K., and Noh, J. 2012. Spacetime expression cloning for blendshapes. ACM Trans. Graph. 31, 2, 14.
    36. Sukthankar, R., Stockton, R. G., and Mullin, M. D. 2001. Smarter presentations: Exploiting homography in camera-projector systems. In Proc. Int. Conference on Computer Vision, vol. 1, IEEE, 247–253.
    37. Tena, J. R., Hamouz, M., Hilton, A., and Illingworth, J. 2006. A validated method for dense non-rigid 3d face registration. In Int. Conf. on Video and Signal Based Surveillance.
    38. Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. 2004. Image quality assessment: From error visibility to structural similarity. Trans. on Image Processing 13, 4, 600–612.
    39. Wetzstein, G., and Bimber, O. 2007. Radiometric compensation through inverse light transport. In Proc. Pacific Conference on Computer Graphics and Applications (PG ’07), 391–399.
    40. Weyrich, T., Matusik, W., Pfister, H., Bickel, B., Donner, C., Tu, C., McAndless, J., Lee, J., Ngan, A., Jensen, H. W., et al. 2006. Analysis of human faces using a measurement-based skin reflectance model. ACM Trans. Graph. 25, 3, 1013–1024.
    41. Zhang, L., and Nayar, S. 2006. Projection defocus analysis for scene capture and image display. ACM Trans. Graph. 25, 3, 907–915.
    42. Zhang, Z. 2000. A flexible new technique for camera calibration. Trans. Pattern Anal. Mach. Intell. 22, 11 (Nov.), 1330–1334.


Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org