
© Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz, and Marc Levoy




    High performance imaging using large camera arrays



    The advent of inexpensive digital image sensors and the ability to create photographs that combine information from a number of sensed images are changing the way we think about photography. In this paper, we describe a unique array of 100 custom video cameras that we have built, and we summarize our experiences using this array in a range of imaging applications. Our goal was to explore the capabilities of a system that would be inexpensive to produce in the future. With this in mind, we used simple cameras, lenses, and mountings, and we assumed that processing large numbers of images would eventually be easy and cheap. The applications we have explored include approximating a conventional single-center-of-projection video camera with high performance along one or more axes, such as resolution, dynamic range, frame rate, and/or large aperture; and using multiple cameras to approximate a video camera with a large synthetic aperture. The latter permits us to capture a video light field, to which we can apply spatiotemporal view interpolation algorithms to digitally simulate time dilation and camera motion. It also permits us to create video sequences using custom non-uniform synthetic apertures.
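    The synthetic aperture idea mentioned above can be sketched as shift-and-add refocusing: each camera's image is translated by its offset on the camera plane scaled by a chosen disparity, then the views are averaged, so objects at the selected focal plane align and stay sharp while everything else blurs. The following is a minimal illustrative sketch, not the paper's implementation; the function name, the integer-pixel shift, and the plain averaging are simplifying assumptions.

```python
import numpy as np

def synthetic_aperture_refocus(images, camera_offsets, disparity):
    """Shift-and-add refocusing sketch (hypothetical helper, not the
    authors' code). Each view is shifted by its camera-plane offset
    scaled by `disparity`, then all views are averaged. Points at the
    focal plane chosen by `disparity` align across views and remain
    sharp; off-plane points land at different pixels and blur out,
    emulating the shallow depth of field of a large aperture."""
    accum = np.zeros_like(images[0], dtype=np.float64)
    for img, (dx, dy) in zip(images, camera_offsets):
        # Integer-pixel shift for simplicity; a real system would
        # resample with subpixel interpolation.
        sx = int(round(dx * disparity))
        sy = int(round(dy * disparity))
        shifted = np.roll(np.roll(img.astype(np.float64), sy, axis=0),
                          sx, axis=1)
        accum += shifted
    return accum / len(images)

# Toy usage: a single scene point seen with horizontal parallax by
# three cameras at offsets -1, 0, +1 (true disparity = 1 pixel).
views, offsets = [], [(-1, 0), (0, 0), (1, 0)]
for dx, dy in offsets:
    im = np.zeros((5, 5))
    im[2, 2 - dx] = 1.0          # point shifts opposite the camera
    views.append(im)

focused = synthetic_aperture_refocus(views, offsets, disparity=1.0)
# At the correct disparity the peaks align: focused[2, 2] == 1.0.
```

Sweeping `disparity` moves the synthetic focal plane through the scene; replacing the uniform average with per-camera weights would give the non-uniform synthetic apertures the abstract mentions.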



ACM Digital Library Publication: