“Multi-aperture photography” by Green, Sun, Matusik and Durand

  • © Paul Green, Wenyang Sun, Wojciech Matusik, and Frédo Durand

Conference:


Type:


Title:

    Multi-aperture photography

Presenter(s)/Author(s):

    Paul Green, Wenyang Sun, Wojciech Matusik, and Frédo Durand

Abstract:


    The emergent field of computational photography is proving that, by coupling generalized imaging optics with software processing, the quality and flexibility of imaging systems can be increased. In this paper, we capture and manipulate multiple images of a scene taken with different aperture settings (f-numbers). We design and implement a prototype optical system and associated algorithms to capture four images of the scene in a single exposure, each taken with a different aperture setting. Our system can be used with commercially available DSLR cameras and photographic lenses without modification to either. We leverage the fact that defocus blur is a function of scene depth and f/# to estimate a depth map. We demonstrate several applications of our multi-aperture camera, such as post-exposure editing of the depth of field, including extrapolation beyond the physical limits of the lens, synthetic refocusing, and depth-guided deconvolution.


