“Ambient point clouds for view interpolation” by Goesele, Ackermann, Fuhrmann, Haubold, Klowsky, et al.

Title:

    Ambient point clouds for view interpolation

Presenter(s)/Author(s):

    Goesele, Ackermann, Fuhrmann, Haubold, Klowsky, et al.

Abstract:


    View interpolation and image-based rendering algorithms often produce visual artifacts in regions where the 3D scene geometry is erroneous, uncertain, or incomplete. We introduce ambient point clouds constructed from colored pixels with uncertain depth, which help reduce these artifacts while providing non-photorealistic background coloring and emphasizing reconstructed 3D geometry. Ambient point clouds are created by randomly sampling colored points along the viewing rays associated with uncertain pixels. Our real-time rendering system combines these with more traditional rigid 3D point clouds and colored surface meshes obtained using multiview stereo. Our resulting system can handle larger-range view transitions with fewer visible artifacts than previous approaches.
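    The sampling step described above, scattering colored points at random depths along the viewing rays of uncertain-depth pixels, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function name sample_ambient_points, the uniform near/far depth range, and the per-pixel sample count are assumptions made only for the example.

        import numpy as np

        def sample_ambient_points(camera_center, ray_dirs, colors, uncertain_mask,
                                  near=1.0, far=50.0, samples_per_pixel=4, rng=None):
            """Scatter colored points at random depths along uncertain viewing rays.

            camera_center  : (3,) camera position in world coordinates
            ray_dirs       : (N, 3) unit viewing-ray directions, one per pixel
            colors         : (N, 3) per-pixel RGB colors
            uncertain_mask : (N,) boolean, True where the pixel's depth is uncertain
            near, far      : depth range to sample within (illustrative assumption)
            """
            rng = np.random.default_rng() if rng is None else rng

            # Keep only the uncertain pixels and repeat each one per sample.
            dirs = np.repeat(ray_dirs[uncertain_mask], samples_per_pixel, axis=0)
            cols = np.repeat(colors[uncertain_mask], samples_per_pixel, axis=0)

            # Draw one random depth per sample so the points spread along each ray.
            depths = rng.uniform(near, far, size=len(dirs))[:, None]
            points = camera_center[None, :] + depths * dirs

            return points, cols   # colored ambient points for this input view

    In the system described by the abstract, points produced this way would be rendered alongside the rigid 3D point clouds and colored surface meshes; how the depths are actually distributed along each ray (for example, uniformly in depth versus in disparity) is a design choice the paper itself settles.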


