“Sky is not the limit: semantic-aware sky replacement”

  • Yi-Hsuan Tsai, Xiaohui Shen, Zhe Lin, Kalyan Sunkavalli, and Ming-Hsuan Yang

Title:

    Sky is not the limit: semantic-aware sky replacement

Session/Category Title:   PHOTO ORGANIZATION & MANIPULATION


Abstract:


    Skies are common backgrounds in photos but are often less interesting because of the time of day at which the photo was taken. Professional photographers correct this with sophisticated editing tools and painstaking effort that are beyond the reach of ordinary users. In this work, we propose an automatic background replacement algorithm that can generate realistic, artifact-free images with diverse styles of skies. The key idea of our algorithm is to utilize visual semantics to guide the entire process, including sky segmentation, search, and replacement. First, we train a deep convolutional neural network for semantic scene parsing, which is used as a visual prior to segment sky regions in a coarse-to-fine manner. Second, to find suitable skies for replacement, we propose a data-driven sky search scheme based on the semantic layout of the input image. Finally, to re-compose the stylized sky with the original foreground naturally, we develop an appearance transfer method that matches statistics locally and semantically. We show that the proposed algorithm can automatically generate a set of visually pleasing results, and we demonstrate its effectiveness with extensive user studies.
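
    The sketch below is only a rough illustration of the re-composition idea described in the abstract, not the authors' implementation: it pastes a reference sky into the segmented sky region and applies a simple Reinhard-style mean/variance colour transfer on the foreground so its statistics roughly match the reference photo, whereas the paper's appearance transfer matches statistics locally and per semantic region. It assumes numpy and scikit-image are available, both images are float RGB in [0, 1] at the same resolution, the masks are boolean, and all function names are hypothetical.

        # Illustrative sketch only: global Lab statistics matching plus a hard
        # sky paste, standing in for the paper's local, semantics-aware transfer.
        import numpy as np
        from skimage.color import rgb2lab, lab2rgb

        def match_lab_statistics(src_rgb, ref_rgb, src_mask, ref_mask):
            """Shift and scale the Lab channels of src (inside src_mask) so their
            mean and standard deviation match ref (inside ref_mask)."""
            src_lab, ref_lab = rgb2lab(src_rgb), rgb2lab(ref_rgb)
            out = src_lab.copy()
            for c in range(3):
                s = src_lab[..., c][src_mask]
                r = ref_lab[..., c][ref_mask]
                scale = r.std() / (s.std() + 1e-6)
                out[..., c][src_mask] = (s - s.mean()) * scale + r.mean()
            return np.clip(lab2rgb(out), 0.0, 1.0)

        def replace_sky(input_rgb, input_sky_mask, ref_rgb, ref_sky_mask):
            """Harmonize the input foreground against the reference photo, then
            copy the reference sky into the input's sky region. A soft matte and
            edge-aware filtering would be needed for a seamless boundary."""
            harmonized = match_lab_statistics(input_rgb, ref_rgb,
                                              ~input_sky_mask, ~ref_sky_mask)
            output = harmonized.copy()
            output[input_sky_mask] = ref_rgb[input_sky_mask]
            return output

    In the full pipeline, such a step would run only after the coarse-to-fine sky segmentation and the semantic-layout-based sky search have produced the sky mask and the reference photo used above.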


