“Motion magnification” by Liu, Torralba, Freeman, Durand and Adelson

  • © Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, and Edward H. Adelson

Conference:

    SIGGRAPH 2005

Type:


Title:

    Motion magnification

Presenter(s)/Author(s):

    Ce Liu, Antonio Torralba, William T. Freeman, Frédo Durand, and Edward H. Adelson

Abstract:


    We present motion magnification, a technique that acts like a microscope for visual motion. It can amplify subtle motions in a video sequence, allowing for visualization of deformations that would otherwise be invisible. To achieve motion magnification, we need to accurately measure visual motions and group the pixels to be modified. After an initial image registration step, we measure motion by a robust analysis of feature point trajectories, and segment pixels based on similarity of position, color, and motion. A novel measure of motion similarity groups even very small motions according to correlation over time, which often relates to physical cause. An outlier mask marks observations not explained by our layered motion model, and those pixels are simply reproduced on the output from the original registered observations. The motion of any selected layer may be magnified by a user-specified amount; texture synthesis fills in unseen “holes” revealed by the amplified motions. The resulting motion-magnified images can reveal or emphasize small motions in the original sequence, as we demonstrate with deformations in load-bearing structures, subtle motions or balancing corrections of people, and “rigid” structures bending under hand pressure.
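    The abstract outlines a pipeline: register the frames, track feature points, group trajectories whose motions are correlated over time, and then amplify the displacements of a selected layer while texture synthesis fills any revealed holes. The Python/NumPy sketch below is a minimal illustration of two of those steps only, grouping trajectories by temporal correlation of their displacements and scaling a selected group's motion about its first-frame (registered) position by a user-chosen factor. It is not the authors' implementation; every function and variable name here is illustrative.

        import numpy as np

        def motion_correlation(tracks):
            # tracks: (N, T, 2) array of N feature-point trajectories over T frames.
            # Returns an (N, N) matrix of normalized correlations between the
            # frame-to-frame displacements of each pair of trajectories -- a crude
            # stand-in for the paper's motion-similarity measure.
            disp = np.diff(tracks, axis=1)                  # (N, T-1, 2) displacements
            flat = disp.reshape(len(tracks), -1)            # flatten x/y over time
            flat = flat - flat.mean(axis=1, keepdims=True)
            unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8)
            return unit @ unit.T

        def magnify_layer(tracks, layer_mask, alpha):
            # Amplify selected trajectories about their first-frame (registered)
            # positions: x'(t) = x(0) + alpha * (x(t) - x(0)). Trajectories with
            # layer_mask == False pass through unchanged.
            ref = tracks[:, :1, :]
            magnified = ref + alpha * (tracks - ref)
            return np.where(layer_mask[:, None, None], magnified, tracks)

        # Toy usage: two points sharing a sub-pixel oscillation plus one static point.
        t = np.linspace(0.0, 2.0 * np.pi, 30)
        wobble = 0.5 * np.sin(t)
        still = np.zeros_like(t)
        tracks = np.stack([
            np.stack([10 + wobble, 20 + still], axis=1),
            np.stack([40 + wobble, 25 + still], axis=1),
            np.stack([70 + still,  30 + still], axis=1),
        ])                                                  # shape (3, 30, 2)

        sim = motion_correlation(tracks)                    # points 0 and 1 correlate strongly
        layer = sim[0] > 0.9                                # crude grouping around point 0
        out = magnify_layer(tracks, layer, alpha=10.0)      # exaggerate the selected layer's motion
        print(np.round(sim, 2))

    In the paper itself the amplified motion drives per-pixel appearance in a layered model, with an outlier mask and texture synthesis handling unexplained and disoccluded pixels; the sketch only conveys the correlation-based grouping and the user-controlled amplification factor.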


