“Real-time hyperlapse creation via optimal frame selection” by Joshi, Kienzle, Toelle, Uyttendaele and Cohen

Conference:

    SIGGRAPH 2015

Type(s):

    Technical Paper

Title:

    Real-time hyperlapse creation via optimal frame selection

Session/Category Title:   Let’s Do the Time Warp


Presenter(s)/Author(s):

    Neel Joshi, Wolf Kienzle, Mike Toelle, Matt Uyttendaele, Michael F. Cohen

Abstract:


    Long videos can be played much faster than real-time by recording only one frame per second or by dropping all but one frame each second, i.e., by creating a timelapse. Unstable hand-held moving videos can be stabilized with a number of recently described methods. Unfortunately, creating a stabilized timelapse, or hyperlapse, cannot be achieved through a simple combination of these two methods. Two hyperlapse methods have been previously demonstrated: one with high computational complexity and one requiring special sensors. We present an algorithm for creating hyperlapse videos that can handle significant high-frequency camera motion and runs in real-time on HD video. Our approach does not require sensor data and can therefore be run on videos captured with any camera. We optimally select frames from the input video that best match a desired target speed-up while also resulting in the smoothest possible camera motion. We evaluate our approach using several input videos from a range of cameras and compare these results to existing methods.
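
    To make the frame-selection idea concrete, the following is a minimal sketch (in Python) of a dynamic program of the kind the abstract describes: every candidate jump between frames pays a motion cost plus penalties for deviating from the target speed-up and for changing the jump length abruptly. The function select_frames, the window size w, the weights lambda_s and lambda_a, the quadratic penalty shapes, and the assumed precomputed motion_cost(i, j) are illustrative assumptions, not the paper's exact formulation.

        # Illustrative sketch, not the paper's implementation: optimal frame
        # selection as a dynamic program over jump lengths.  All costs, weights,
        # and the motion_cost callback are assumptions made for illustration.

        def select_frames(n_frames, motion_cost, v=8, w=16,
                          lambda_s=200.0, lambda_a=80.0):
            """Pick frame indices approximating a v-times speed-up smoothly.

            motion_cost(i, j) -- assumed precomputed: how jerky a direct cut
                                 from frame i to frame j would look.
            v -- desired speed-up factor (target jump length).
            w -- largest jump allowed between consecutive output frames.
            """
            if n_frames < 2:
                return list(range(n_frames))

            INF = float("inf")
            # dp[j][s]: lowest cost of any path from frame 0 to frame j whose
            # final jump had length s; back[j][s] stores the predecessor state.
            dp = [[INF] * (w + 1) for _ in range(n_frames)]
            back = [[None] * (w + 1) for _ in range(n_frames)]

            # First jump out of frame 0 (no acceleration penalty yet).
            for s in range(1, min(w, n_frames - 1) + 1):
                dp[s][s] = motion_cost(0, s) + lambda_s * (s - v) ** 2

            # Extend every reachable state (i, s_prev) by one more jump s.
            for i in range(1, n_frames):
                for s_prev in range(1, w + 1):
                    if dp[i][s_prev] == INF:
                        continue
                    for s in range(1, w + 1):
                        j = i + s
                        if j >= n_frames:
                            break
                        cost = (dp[i][s_prev]
                                + motion_cost(i, j)
                                + lambda_s * (s - v) ** 2        # stay near target speed
                                + lambda_a * (s - s_prev) ** 2)  # avoid sudden speed changes
                        if cost < dp[j][s]:
                            dp[j][s] = cost
                            back[j][s] = (i, s_prev)

            # Take the cheapest state that lands near the end of the video and
            # trace the chosen frames back to the start.
            candidates = [(j, s) for j in range(max(1, n_frames - w), n_frames)
                          for s in range(1, w + 1) if dp[j][s] < INF]
            j, s = min(candidates, key=lambda js: dp[js[0]][js[1]])
            path = [j]
            while back[j][s] is not None:
                j, s = back[j][s]
                path.append(j)
            return [0] + path[::-1]

    The quadratic penalties keep both the deviation from the target speed-up and frame-to-frame changes in speed small, which is what yields the smooth apparent camera motion the abstract describes; the motion_cost term is where any frame-alignment measure (for example, a feature-based one) would plug in.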

