“Plenoptic modeling: an image-based rendering system” by McMillan and Bishop

  • © Leonard McMillan and Gary Bishop

Conference:

    SIGGRAPH ’95 (22nd Annual Conference on Computer Graphics and Interactive Techniques), Los Angeles, August 1995

Type(s):

    Paper

Title:

    Plenoptic modeling: an image-based rendering system

Presenter(s)/Author(s):

    Leonard McMillan and Gary Bishop

Abstract:


    Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the “plenoptic function” of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
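
    For readers skimming the abstract, the plenoptic function it names is, in Adelson and Bergen’s formulation (reference 1 below), the seven-parameter function

        P = P(\theta, \phi, \lambda, V_x, V_y, V_z, t)

    giving the intensity of light of wavelength \lambda arriving at the viewpoint (V_x, V_y, V_z) from direction (\theta, \phi) at time t. Holding the viewpoint, wavelength, and time fixed reduces it to a panoramic image P(\theta, \phi), which is what a single cylindrical sample of the plenoptic function records.

    The sketch below is only an illustration of that sampling step, not code from the paper: it maps a viewing-ray direction at a fixed center of projection to pixel coordinates on a cylindrical panorama. The axis convention, the v_scale parameter, and the function name are assumptions made for this example.

        import math

        def ray_to_cylindrical_pixel(dx, dy, dz, width, height, v_scale=1.0):
            """Map a ray direction (dx, dy, dz) leaving the panorama's center of
            projection to (column, row) coordinates on a cylindrical image.
            Assumes the cylinder axis is the y axis and that columns cover the
            full 360 degrees of azimuth; v_scale is an illustrative vertical
            scale, not a parameter taken from the paper."""
            theta = math.atan2(dx, dz)          # azimuth around the cylinder axis
            r = math.hypot(dx, dz)              # distance of the direction from the axis
            if r == 0.0:
                raise ValueError("a ray along the cylinder axis has no cylindrical image")
            v = v_scale * dy / r                # height where the ray crosses the unit cylinder
            col = (theta + math.pi) / (2.0 * math.pi) * width
            row = height / 2.0 - v * height / 2.0
            return col, row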

References:


    1. Adelson, E. H., and J. R. Bergen, “The Plenoptic Function and the Elements of Early Vision,” Computational Models of Visual Processing, Chapter 1, Edited by Michael Landy and J. Anthony Movshon, The MIT Press, Cambridge, Mass., 1991.
    2. Anderson, D., “Hidden Line Elimination in Projected Grid Surfaces,” ACM Transactions on Graphics, October 1982.
    3. Barnard, S.T. “A Stochastic Approach to Stereo Vision,” SRI International, Technical Note 373, April 4, 1986.
    4. Beier, T. and S. Neely, “Feature-Based Image Metamorphosis,” Computer Graphics (SIGGRAPH ’92 Proceedings), Vol. 26, No. 2, pp. 35-42, July 1992.
    5. Blinn, J. F. and M. E. Newell, “Texture and Reflection in Computer Generated Images,” Communications of the ACM, Vol. 19, No. 10, pp. 542-547, October 1976.
    6. Bolles, R. C., H. H. Baker, and D. H. Marimont, “Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion,” International Journal of Computer Vision, Vol. 1, 1987.
    7. Catmull, E., “A Subdivision Algorithm for Computer Display of Curved Surfaces” (Ph.D. Thesis), Department of Computer Science, University of Utah, Tech. Report UTEC-CSc-74-133, December 1974.
    8. Chen, S. E. and L. Williams, “View Interpolation for Image Synthesis,” Computer Graphics (SIGGRAPH ’93 Proceedings), pp. 279-288, July 1993.
    9. Faugeras, O., Three-dimensional Computer Vision: A Geometric Viewpoint, The MIT Press, Cambridge, Massachusetts, 1993.
    10. Greene, N., “Environment Mapping and Other Applications of World Projections,” IEEE Computer Graphics and Applications, November 1986.
    11. Hartley, R.I., “Self-Calibration from Multiple Views with a Rotating Camera,” Proceedings of the European Conference on Computer Vision, May 1994.
    12. Heckbert, P. S., “Fundamentals of Texture Mapping and Image Warping,” Master’s Thesis, Dept. of EECS, UCB, Technical Report No. UCB/CSD 89/516, June 1989.
    13. Horn, B., and B.G. Schunck, “Determining Optical Flow,” Artificial Intelligence, Vol. 17, 1981.
    14. Kanatani, K., “Transformation of Optical Flow by Camera Rotation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 10, No. 2, March 1988.
    15. Laveau, S. and O. Faugeras, “3-D Scene Representation as a Collection of Images and Fundamental Matrices,” INRIA, Technical Report No. 2205, February, 1994.
    16. Lenz, R. K. and R. Y. Tsai, “Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3D Machine Vision Metrology,” Proceedings of the IEEE International Conference on Robotics and Automation, March 31 – April 3, 1987.
    17. Lippman, A., “Movie-Maps: An Application of the Optical Videodisc to Computer Graphics,” SIGGRAPH ’80 Proceedings, 1980.
    18. Longuet-Higgins, H. C., “A Computer Algorithm for Reconstructing a Scene from Two Projections,” Nature, Vol. 293, September 1981.
    19. Longuet-Higgins, H. C., “The Reconstruction of a Scene From Two Projections – Configurations That Defeat the 8-Point Algorithm,” Proceedings of the First IEEE Conference on Artificial Intelligence Applications, December 1984.
    20. Lucas, B., and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Vancouver, 1981.
    21. McMillan, Leonard, “A List-Priority Rendering Algorithm for Redisplaying Projected Surfaces,” Department of Computer Science, UNC, Technical Report TR95-005, 1995.
    22. Mann, S. and R. W. Picard, “Virtual Bellows: Constructing High Quality Stills from Video,” Proceedings of the First IEEE International Conference on Image Processing, November 1994.
    23. Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, Cambridge, England, pp. 309-317, 1988.
    24. Regan, M., and R. Pose, “Priority Rendering with a Virtual Reality Address Recalculation Pipeline,” SIGGRAPH ’94 Proceedings, 1994.
    25. Szeliski, R., “Image Mosaicing for Tele-Reality Applications,” DEC Cambridge Research Lab Technical Report CRL 94/2, May 1994.
    26. Tomasi, C., and T. Kanade, “Shape and Motion from Image Streams: a Factorization Method; Full Report on the Orthographic Case,” Technical Report, CMU-CS-92-104, Carnegie Mellon University, March 1992.
    27. Tsai, R. Y., “A Versatile Camera Calibration Technique for High- Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses,” IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987.
    28. Westover, L. A., “Footprint Evaluation for Volume Rendering,” SIGGRAPH ’90 Proceedings, August 1990.
    29. Wolberg, G., Digital Image Warping, IEEE Computer Society Press, Los Alamitos, CA, 1990.

