“Break Ames room illusion: depth from general single images” by Shi, Tao, Xu and Jia – ACM SIGGRAPH HISTORY ARCHIVES

  • SIGGRAPH Asia 2015 Technical Papers: "Break Ames Room Illusion: Depth from General Single Images" (Shi et al.)

Conference:

    SIGGRAPH Asia 2015
Type(s):

    Technical Papers
Title:

    Break Ames room illusion: depth from general single images

Session/Category Title:   Single Images


Presenter(s)/Author(s):

    Jianping Shi, Xin Tao, Li Xu, Jiaya Jia


Abstract:


    Photos compress 3D visual data into 2D. It is nevertheless possible to infer depth information, even without sophisticated object learning. We propose a solution based on the small-scale defocus blur inherent in optical lenses, and tackle the estimation problem with a non-parametric matching scheme for natural images. The scheme incorporates a matching prior, built on our newly constructed edgelet dataset, through a non-local formulation, and includes semantic depth-order cues for physically based inference. This enables several applications on natural images, including geometry-based rendering and editing.
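    The defocus cue the abstract relies on can be illustrated with the classic gradient-ratio estimator for edge blur (in the spirit of single-image defocus-map work such as Zhuo and Sim [34], cited below); this is a minimal 1-D sketch for intuition, not the authors' matching scheme. For a step edge blurred by a Gaussian of unknown sigma, re-blurring with a known sigma0 and taking the ratio R of gradient magnitudes at the edge gives sigma = sigma0 / sqrt(R^2 - 1).

    ```python
    import numpy as np

    def gauss_kernel(sigma):
        """Normalized 1-D Gaussian kernel truncated at a 4-sigma radius."""
        r = int(4 * sigma) + 1
        x = np.arange(-r, r + 1)
        k = np.exp(-x**2 / (2.0 * sigma**2))
        return k / k.sum()

    def blur(signal, sigma):
        """Gaussian blur with edge-replicate padding (avoids boundary artifacts)."""
        k = gauss_kernel(sigma)
        padded = np.pad(signal, len(k) // 2, mode="edge")
        return np.convolve(padded, k, mode="valid")

    def estimate_edge_blur(signal, sigma0=1.0):
        """Estimate the defocus sigma at the strongest edge of a 1-D signal.
        Re-blur with a known sigma0; for a Gaussian-blurred step edge the
        gradient-magnitude ratio is R = sqrt(sigma^2 + sigma0^2) / sigma,
        so sigma = sigma0 / sqrt(R^2 - 1)."""
        g = np.abs(np.gradient(signal))
        g_re = np.abs(np.gradient(blur(signal, sigma0)))
        i = np.argmax(g)            # strongest edge location
        R = g[i] / g_re[i]          # > 1 wherever the edge is defocused
        return sigma0 / np.sqrt(R**2 - 1.0)

    # Synthetic check: a step edge defocused with a known sigma of 2.0.
    x = np.arange(-50, 50)
    step = (x >= 0).astype(float)
    print(estimate_edge_blur(blur(step, 2.0)))  # close to the true sigma of 2.0
    ```

    Extending this per-edge estimate to a dense, physically plausible depth map is exactly where the paper's non-parametric edgelet matching and depth-order cues come in.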

References:


    1. Afonso, M. V., Bioucas-Dias, J. M., and Figueiredo, M. A. 2010. Fast image recovery using variable splitting and constrained optimization. TIP 19, 9, 2345–2356.
    2. Bae, S., and Durand, F. 2007. Defocus magnification. Computer Graphics Forum 26, 3, 571–579.
    3. Chakrabarti, A., Zickler, T., and Freeman, W. T. 2010. Analyzing spatially-varying blur. In CVPR, 2512–2519.
    4. Chen, Q., and Koltun, V. 2013. A simple model for intrinsic image decomposition with depth cues. In ICCV, 241–248.
    5. Cossairt, O., Zhou, C., and Nayar, S. 2010. Diffusion coded photography for extended depth of field. TOG 29, 4, 31.
    6. Eigen, D., Puhrsch, C., and Fergus, R. 2014. Depth map prediction from a single image using a multi-scale deep network. In NIPS, 2366–2374.
    7. Elder, J. H., and Zucker, S. W. 1998. Local scale control for edge detection and blur estimation. TPAMI 20, 7, 699–716.
    8. Hoiem, D., Stein, A. N., Efros, A. A., and Hebert, M. 2007. Recovering occlusion boundaries from a single image. In ICCV, 1–8.
    9. Jia, J., Sun, J., Tang, C.-K., and Shum, H.-Y. 2006. Drag-and-drop pasting. TOG 25, 3, 631–637.
    10. Karsch, K., Liu, C., and Kang, S. B. 2012. Depth extraction from video using non-parametric sampling. In ECCV, 775–788.
    11. Khoshelham, K., and Elberink, S. O. 2012. Accuracy and resolution of Kinect depth data for indoor mapping applications. Sensors 12, 2, 1437–1454.
    12. Ladicky, L., Shi, J., and Pollefeys, M. 2014. Pulling things out of perspective. In CVPR, 89–96.
    13. Levin, A., Fergus, R., Durand, F., and Freeman, W. T. 2007. Image and depth from a conventional camera with a coded aperture. TOG 26, 3, 70.
    14. Liang, C.-K., Lin, T.-H., Wong, B.-Y., Liu, C., and Chen, H. H. 2008. Programmable aperture photography: Multiplexed light field acquisition. TOG 27, 3, 55.
    15. Maini, R., and Sohal, J. 2006. Performance evaluation of Prewitt edge detector for noisy images. GVIP Journal 6, 3, 39–46.
    16. Muja, M., and Lowe, D. G. 2009. Fast approximate nearest neighbors with automatic algorithm configuration. In International Conference on Computer Vision Theory and Application, 331–340.
    17. Rhemann, C., Hosni, A., Bleyer, M., Rother, C., and Gelautz, M. 2011. Fast cost-volume filtering for visual correspondence and beyond. In CVPR, IEEE, 3017–3024.
    18. Rother, C., Kolmogorov, V., and Blake, A. 2004. GrabCut: Interactive foreground extraction using iterated graph cuts. TOG 23, 3, 309–314.
    19. Saxena, A., Chung, S. H., and Ng, A. Y. 2005. Learning depth from single monocular images. In NIPS, 1–8.
    20. Saxena, A., Sun, M., and Ng, A. 2009. Make3D: Learning 3D scene structure from a single still image. TPAMI 31, 5, 824–840.
    21. Scharstein, D., and Szeliski, R. 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV 47, 1–3, 7–42.
    22. Schechner, Y. Y., and Kiryati, N. 2000. Depth from defocus vs. stereo: How different really are they? IJCV 39, 2, 141–162.
    23. Shi, J., Xu, L., and Jia, J. 2015. Just noticeable defocus blur detection and estimation. In CVPR, 1–8.
    24. Su, H., Huang, Q., Mitra, N. J., Li, Y., and Guibas, L. 2014. Estimating image depth using shape collections. TOG 33, 4, 37.
    25. Subbarao, M., and Surya, G. 1994. Depth from defocus: a spatial domain approach. IJCV 13, 3, 271–294.
    26. Tai, Y.-W., and Brown, M. S. 2009. Single image defocus map estimation using local contrast prior. In ICIP, 1797–1800.
    27. Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., and Tumblin, J. 2007. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. TOG 26, 3, 69.
    28. Watanabe, M., and Nayar, S. K. 1998. Rational filters for passive depth from defocus. IJCV 27, 3, 203–225.
    29. Wu, T.-P., Sun, J., Tang, C.-K., and Shum, H.-Y. 2008. Interactive normal reconstruction from a single image. TOG 27, 5, 119.
    30. Xu, L., Yan, Q., and Jia, J. 2013. A sparse control model for image and video editing. TOG 32, 6, 197.
    31. Zhou, C., and Nayar, S. 2009. What are good apertures for defocus deblurring? In ICCP, 1–8.
    32. Zhou, C., Lin, S., and Nayar, S. 2009. Coded aperture pairs for depth from defocus. In ICCV, 325–332.
    33. Zhu, X., Cohen, S., Schiller, S., and Milanfar, P. 2013. Estimating spatially varying defocus blur from a single image. TIP 22, 12, 4879–4891.
    34. Zhuo, S., and Sim, T. 2011. Defocus map estimation from a single image. Pattern Recognition 44, 9, 1852–1858.
    35. Ziou, D., and Deschênes, F. 2001. Depth from defocus estimation in spatial domain. CVIU 81, 2, 143–165.


Submit a story:

If you would like to submit a story about this presentation, please contact us: historyarchives@siggraph.org