“Plenoptic stitching: a scalable method for reconstructing 3D interactive walkthroughs” by Aliaga and Carlbom
Conference:
- SIGGRAPH 2001
Type(s):
Title:
- Plenoptic stitching: a scalable method for reconstructing 3D interactive walkthroughs
Presenter(s)/Author(s):
- Daniel G. Aliaga and Ingrid Carlbom
Abstract:
Interactive walkthrough applications require detailed 3D models to give users a sense of immersion in an environment. Traditionally, these models are built using computer-aided design tools to define geometry and material properties. But creating detailed models is time-consuming, and it is difficult to reproduce all the geometric and photometric subtleties of real-world scenes. Computer vision attempts to alleviate this problem by extracting geometry and photometric properties from images of real-world scenes; however, the resulting models are still limited in the amount of detail they recover. Image-based rendering generates novel views by resampling a set of images of the environment, without relying upon an explicit geometric model. Current techniques of this kind restrict the size and shape of the environment, and they do not lend themselves to walkthrough applications. In this paper, we define a parameterization of the 4D plenoptic function that is particularly suitable for interactive walkthroughs, together with a method for its sampling and reconstruction. Our main contributions are: 1) a parameterization of the 4D plenoptic function that supports walkthrough applications in large, arbitrarily shaped environments; 2) a simple and fast capture process for complex environments; and 3) an automatic algorithm for reconstruction of the plenoptic function.
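For intuition only, here is a minimal sketch of how a 4D plenoptic function can arise in a walkthrough setting; it starts from the standard plenoptic-function formulation and assumes a fixed eye height, and is not necessarily the exact parameterization used in the paper. The full plenoptic function records radiance along every ray,

    P(x, y, z, \theta, \phi, \lambda, t)

with viewpoint (x, y, z), viewing direction (\theta, \phi), wavelength \lambda, and time t. For a static scene (dropping t, and treating \lambda as fixed color channels) with the observer constrained to a constant eye height z = z_0, viewpoints lie in a plane and the function reduces to four dimensions,

    P_{4D}(x, y, \theta, \phi) = P(x, y, z_0, \theta, \phi)

so a walkthrough at constant eye height can, in principle, be driven by sampling and reconstructing this 4D function.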