“Non-rigid dense correspondence with applications for image enhancement” by HaCohen, Shechtman, Goldman and Lischinski

  • Yoav HaCohen, Eli Shechtman, Daniel (Dan) B. Goldman, and Daniel (Dani) Lischinski

Title:

    Non-rigid dense correspondence with applications for image enhancement

Abstract:


    This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lenses, under non-rigid transformations, under different lighting, and over different backgrounds. We utilize a new coarse-to-fine scheme in which nearest-neighbor field computations using Generalized PatchMatch [Barnes et al. 2010] are interleaved with fitting a global non-linear parametric color model and aggregating consistent matching regions using locally adaptive constraints. Compared to previous correspondence approaches, our method combines the best of two worlds: It is dense, like optical flow and stereo reconstruction methods, and it is also robust to geometric and photometric variations, like sparse feature matching. We demonstrate the usefulness of our method using three applications for automatic example-based photograph enhancement: adjusting the tonal characteristics of a source image to match a reference, transferring a known mask to a new image, and kernel estimation for image deblurring.
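    The interleaved coarse-to-fine scheme described in the abstract can be caricatured in a few lines of code. The sketch below is purely illustrative and is not the authors' implementation: `patchmatch_step` is a toy random-search stand-in for Generalized PatchMatch, the color model is a simple global linear (gain, bias) fit rather than the paper's non-linear parametric model, images are assumed to be same-sized 2-D lists of scalar intensities, and the locally adaptive region-aggregation step is omitted.

    ```python
    import random

    def patchmatch_step(src, ref, nnf):
        """One toy random-search pass of a nearest-neighbor-field update
        (a crude stand-in for Generalized PatchMatch: keep a random
        candidate match if it beats the current best)."""
        h, w = len(src), len(src[0])
        for y in range(h):
            for x in range(w):
                cy, cx = random.randrange(h), random.randrange(w)
                by, bx = nnf[y][x]
                if abs(src[y][x] - ref[cy][cx]) < abs(src[y][x] - ref[by][bx]):
                    nnf[y][x] = (cy, cx)
        return nnf

    def fit_color_model(src, ref, nnf):
        """Least-squares fit of a global linear (gain, bias) color model to
        the matched intensity pairs -- simpler than the paper's non-linear
        parametric model, but it plays the same role in the loop."""
        pairs = [(src[y][x], ref[by][bx])
                 for y, row in enumerate(nnf)
                 for x, (by, bx) in enumerate(row)]
        n = len(pairs)
        sx = sum(s for s, _ in pairs)
        sy = sum(r for _, r in pairs)
        sxx = sum(s * s for s, _ in pairs)
        sxy = sum(s * r for s, r in pairs)
        denom = n * sxx - sx * sx
        gain = (n * sxy - sx * sy) / denom if denom else 1.0
        bias = (sy - gain * sx) / n
        return gain, bias

    def correspond(src, ref, levels=3, inner_iters=2):
        """Interleave NNF updates with global color-model fitting, as in the
        coarse-to-fine scheme (pyramid construction and consistent-region
        aggregation omitted; src and ref assumed to be the same size)."""
        h, w = len(src), len(src[0])
        nnf = [[(random.randrange(h), random.randrange(w)) for _ in range(w)]
               for _ in range(h)]
        model = (1.0, 0.0)
        for _ in range(levels):
            for _ in range(inner_iters):
                nnf = patchmatch_step(src, ref, nnf)
            model = fit_color_model(src, ref, nnf)
        return nnf, model
    ```

    In the real method the fitted color model feeds back into the matching cost and the aggregated consistent regions constrain subsequent search; the sketch only shows the alternating structure of the loop.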

References:


    1. An, X., and Pellacini, F. 2010. User-controllable color transfer. Computer Graphics Forum 29, 2, 263–271.
    2. Ancuti, C., Ancuti, C. O., and Bekaert, P. 2008. Deblurring by matching. Computer Graphics Forum 28, 2, 619–628.
    3. Bai, X., Wang, J., Simons, D., and Sapiro, G. 2009. Video SnapCut: robust video object cutout using localized classifiers. ACM Trans. Graph. 28, 3 (July), 70:1–70:11.
    4. Barnes, C., Shechtman, E., Goldman, D. B., and Finkelstein, A. 2010. The generalized PatchMatch correspondence algorithm. In Proc. ECCV, vol. 3, 29–43.
    5. Bhat, P., Zitnick, C. L., Snavely, N., Agarwala, A., Agrawala, M., Curless, B., Cohen, M., and Kang, S. B. 2007. Using photographs to enhance videos of a static scene. In Rendering Techniques 2007, Eurographics, 327–338.
    6. Brox, T., Bregler, C., and Malik, J. 2009. Large displacement optical flow. In Proc. CVPR 2009, IEEE, 41–48.
    7. Cho, S., and Lee, S. 2009. Fast motion deblurring. ACM Trans. Graph. 28, 5 (December), 145:1–145:8.
    8. Cho, M., Shin, Y. M., and Lee, K. M. 2008. Co-recognition of image pairs by data-driven Monte Carlo image exploration. In Proc. ECCV 2008, vol. 4, 144–157.
    9. Cho, M., Lee, J., and Lee, K. 2009. Feature correspondence and deformable object matching via agglomerative correspondence clustering. In Proc. ICCV, 1280–1287.
    10. Dale, K., Johnson, M. K., Sunkavalli, K., Matusik, W., and Pfister, H. 2009. Image restoration using online photo collections. In Proc. ICCV, IEEE.
    11. Eisemann, E., and Durand, F. 2004. Flash photography enhancement via intrinsic relighting. ACM Trans. Graph. 23 (August), 673–678.
    12. Eisemann, M., Eisemann, E., Seidel, H.-P., and Magnor, M. 2010. Photo zoom: High resolution from unordered image collections. In Proc. Graphics Interface, 71–78.
    13. Ferrari, V., Tuytelaars, T., and Gool, L. J. V. 2004. Simultaneous object recognition and segmentation by image exploration. In Proc. ECCV, vol. 1, 40–54.
    14. Joshi, N., Matusik, W., Adelson, E. H., and Kriegman, D. J. 2010. Personal photo enhancement using example images. ACM Trans. Graph. 29, 2 (April), 12:1–12:15.
    15. Kagarlitsky, S., Moses, Y., and Hel-Or, Y. 2009. Piecewise-consistent color mappings of images acquired under various conditions. In Proc. ICCV, 2311–2318.
    16. Levin, A., Fergus, R., Durand, F., and Freeman, W. T. 2007. Image and depth from a conventional camera with a coded aperture. ACM Trans. Graph. 26, 3 (July).
    17. Levin, A., Weiss, Y., Durand, F., and Freeman, W. T. 2011. Efficient marginal likelihood optimization in blind deconvolution. In Proc. CVPR, IEEE.
    18. Liu, C., Yuen, J., Torralba, A., Sivic, J., and Freeman, W. T. 2008. SIFT flow: Dense correspondence across different scenes. In Proc. ECCV, vol. 3, 28–42.
    19. Liu, X., Wan, L., Qu, Y., Wong, T.-T., Lin, S., Leung, C.-S., and Heng, P.-A. 2008. Intrinsic colorization. ACM Trans. Graph. 27, 5 (December), 152:1–152:9.
    20. Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60, 2, 91–110.
    21. Lucas, B. D., and Kanade, T. 1981. An iterative image registration technique with an application to stereo vision. In Proc. DARPA Image Understanding Workshop, 121–130.
    22. Matas, J., Chum, O., Urban, M., and Pajdla, T. 2002. Robust wide baseline stereo from maximally stable extremal regions. In Proc. BMVC, 384–396.
    23. Mikolajczyk, K., Tuytelaars, T., Schmid, C., Zisserman, A., Matas, J., Schaffalitzky, F., Kadir, T., and Gool, L. V. 2005. A comparison of affine region detectors. Int. J. Comput. Vision 65 (November), 43–72.
    24. Pérez, P., Gangnet, M., and Blake, A. 2003. Poisson image editing. ACM Trans. Graph. 22, 3 (July), 313–318.
    25. Petschnigg, G., Szeliski, R., Agrawala, M., Cohen, M., Hoppe, H., and Toyama, K. 2004. Digital photography with flash and no-flash image pairs. ACM Trans. Graph. 23, 3 (August), 664–672.
    26. Pitié, F., Kokaram, A. C., and Dahyot, R. 2007. Automated colour grading using colour distribution transfer. Comput. Vis. Image Underst. 107 (July), 123–137.
    27. Pizer, S. M., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., Romeny, B. T. H., and Zimmerman, J. B. 1987. Adaptive histogram equalization and its variations. Comput. Vision Graph. Image Process. 39 (September), 355–368.
    28. Reinhard, E., Ashikhmin, M., Gooch, B., and Shirley, P. 2001. Color transfer between images. IEEE Comput. Graph. Appl. 21, 5 (September), 34–41.
    29. Rother, C., Kolmogorov, V., and Blake, A. 2004. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 23, 3 (August), 309–314.
    30. Rother, C., Minka, T. P., Blake, A., and Kolmogorov, V. 2006. Cosegmentation of image pairs by histogram matching — incorporating a global constraint into MRFs. In Proc. CVPR 2006, vol. 1, 993–1000.
    31. Snavely, N., Seitz, S. M., and Szeliski, R. 2006. Photo tourism: exploring photo collections in 3D. ACM Trans. Graph. 25 (July), 835–846.
    32. Yuan, L., Sun, J., Quan, L., and Shum, H.-Y. 2007. Image deblurring with blurred/noisy image pairs. ACM Trans. Graph. 26, 3 (July).
    33. Zelnik-Manor, L., and Irani, M. 2006. On single-sequence and multi-sequence factorizations. Int. J. Comput. Vision 67 (May), 313–326.

