“Projection defocus analysis for scene capture and image display” by Zhang and Nayar


Title:

    Projection defocus analysis for scene capture and image display

Presenter(s)/Author(s):

    Li Zhang and Shree K. Nayar

Abstract:


    In order to produce bright images, projectors have large apertures and hence narrow depths of field. In this paper, we present methods for robust scene capture and enhanced image display based on projection defocus analysis. We model a projector’s defocus using a linear system. This model is used to develop a novel temporal defocus analysis method to recover depth at each camera pixel by estimating the parameters of its projection defocus kernel in the frequency domain. Compared to most depth recovery methods, our approach is more accurate near depth discontinuities. Furthermore, by using a coaxial projector-camera system, we ensure that depth is computed at all camera pixels, without any missing parts. We show that the recovered scene geometry can be used for refocus synthesis and for depth-based image composition. Using the same projector defocus model and estimation technique, we also propose a defocus compensation method that filters a projection image in a spatially varying, depth-dependent manner to minimize its defocus blur after it is projected onto the scene. This method effectively increases the depth of field of a projector without modifying its optics. Finally, we present an algorithm that exploits projector defocus to reduce the strong pixelation artifacts produced by digital projectors, while preserving the quality of the projected image. We have experimentally verified each of our methods using real scenes.
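
The Python sketch below gives a rough, hypothetical illustration of the two frequency-domain ideas the abstract summarizes. It assumes a Gaussian defocus kernel and a 1-D temporal intensity profile per camera pixel: the kernel width is fit from the magnitude-spectrum ratio of an observed profile against a sharp reference, and a simple Wiener-style inverse filter stands in for the depth-dependent compensation pre-filtering. All function names and parameters are illustrative assumptions, not the authors' implementation.

    # Rough, hypothetical sketch of the two frequency-domain ideas above.
    # Assumes a Gaussian defocus kernel; not the authors' implementation.
    import numpy as np

    def estimate_gaussian_sigma(observed, reference):
        """Fit the width of a Gaussian blur relating two 1-D temporal profiles.

        If observed ~= reference convolved with a Gaussian of std. dev. sigma,
        the magnitude-spectrum ratio is exp(-2 * pi^2 * sigma^2 * f^2), so a
        least-squares fit on the log-ratio recovers sigma^2.
        """
        f = np.fft.rfftfreq(len(reference))      # frequencies in cycles/sample
        R = np.abs(np.fft.rfft(reference))
        O = np.abs(np.fft.rfft(observed))
        valid = (f > 0) & (R > 1e-3 * R.max())   # skip DC and weak frequencies
        x = -2.0 * np.pi**2 * f[valid]**2        # log_ratio ~= sigma^2 * x
        log_ratio = np.log(O[valid] / R[valid])
        sigma_sq = np.sum(log_ratio * x) / np.sum(x * x)
        return np.sqrt(max(sigma_sq, 0.0))

    def wiener_precompensate(image, kernel, eps=1e-2):
        """Pre-filter `image` so blurring it with `kernel` roughly restores it.

        A plain Wiener-style inverse filter with a single kernel; a stand-in
        for the spatially varying, depth-dependent compensation described above.
        """
        H = np.fft.fft2(kernel, s=image.shape)
        X = np.fft.fft2(image)
        comp = np.real(np.fft.ifft2(np.conj(H) * X / (np.abs(H)**2 + eps)))
        return np.clip(comp, 0.0, 1.0)           # keep within displayable range

In practice, the fitted kernel width would be mapped to scene depth through calibration, and the compensation filter would vary per pixel with that depth; a single global kernel is used here only to keep the sketch short.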


