“Endless loops: detecting and animating periodic patterns in still images” by Halperin, Hakim, Vantzos, Hochman, Benaim, et al.

  • © Tavi Halperin, Hanit Hakim, Orestis Vantzos, Gershon Hochman, Netai Benaim, Lior Sassy, Michael Kupchik, Ofir Bibi, and Ohad Fried

Title:

    Endless loops: detecting and animating periodic patterns in still images

Presenter(s)/Author(s):

    Tavi Halperin, Hanit Hakim, Orestis Vantzos, Gershon Hochman, Netai Benaim, Lior Sassy, Michael Kupchik, Ofir Bibi, and Ohad Fried

Abstract:


    We present an algorithm for producing a seamless animated loop from a single image. The algorithm detects periodic structures, such as the windows of a building or the steps of a staircase, and generates a non-trivial displacement vector field that maps each segment of the structure onto a neighboring segment along a user- or auto-selected main direction of motion. This displacement field is used, together with suitable temporal and spatial smoothing, to warp the image and produce the frames of a continuous animation loop. Our cinemagraphs are created in under a second on a mobile device. Over 140,000 users downloaded our app and exported over 350,000 cinemagraphs. Moreover, two user studies show that users prefer our method for creating surreal and structured cinemagraphs over both more manual approaches and previous methods.
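
    The sketch below illustrates only the final looping step described in the abstract: given a dense displacement field that advances each periodic segment toward its neighbor, two copies of the image are advected one period apart and cross-faded so that the first and last frames coincide. The per-pixel flow array, the backward-warping approximation, and the crossfade weights are assumptions for illustration; they are not the paper's exact per-segment mapping or smoothing pipeline.

import numpy as np
import cv2  # opencv-python


def loop_frames(image, flow, n_frames=30):
    """Generate a seamless loop from `image` and a periodic displacement field.

    image : H x W x 3 array.
    flow  : H x W x 2 array of per-pixel displacements (x, y) in pixels,
            standing in for the paper's segment-to-neighbor displacement field.
    Returns a list of n_frames frames; frame 0 equals the input and the
    sequence wraps around seamlessly.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    frames = []
    for i in range(n_frames):
        t = i / n_frames  # loop phase in [0, 1)

        def warp(phase):
            # Backward-warping approximation: each output pixel samples the
            # source at p - phase * flow(p), so content appears shifted
            # forward by roughly phase * flow.
            map_x = (xs - phase * flow[..., 0]).astype(np.float32)
            map_y = (ys - phase * flow[..., 1]).astype(np.float32)
            return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR,
                             borderMode=cv2.BORDER_REFLECT)

        a = warp(t)        # copy advected forward by t of a period
        b = warp(t - 1.0)  # same copy, exactly one period behind
        # Cross-fade the two copies so phase 0 and phase 1 produce the same
        # frame, closing the loop without a visible seam in time.
        frame = (1.0 - t) * a.astype(np.float32) + t * b.astype(np.float32)
        frames.append(frame.astype(image.dtype))
    return frames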
