“SketchiMo: sketch-based motion editing for articulated characters”

Authors: Byungkuk Choi, Roger Blanco Ribera, John-Peter Lewis, Yeongho Seol, Seokpyo Hong, Haegwang Eom, Sunjin Jung, and Junyong Noh


Session/Category Title: EXPRESSIVE ANIMATION



Abstract:


    We present SketchiMo, a novel approach for the expressive editing of articulated character motion. SketchiMo solves for the motion given a set of projective constraints that relate the sketch inputs to the unknown 3D poses. We introduce the concept of sketch space, a contextual geometric representation of sketch targets (the motion properties that are editable via sketch input) that highlights different aspects of the motion directly in the viewport. Together, the proposed sketch targets and sketch space allow seamless editing of a wide range of properties, from simple joint trajectories to local parent-child spatiotemporal relationships and more abstract properties such as coordinated motions. A new sketch-based optimization engine makes this possible by interpreting all user input in a uniform way. In addition, the view-dependent sketch space disambiguates the user's input by visualizing its range of effect and transparently defining the constraints that set the temporal boundaries of the optimization.
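
    The core mechanism described above (2D strokes interpreted as screen-space, i.e. projective, constraints on unknown 3D poses, solved by a regularized optimization) can be illustrated with a minimal sketch. The toy two-link chain, pinhole camera, weights, and the use of scipy.optimize.least_squares below are assumptions for illustration only; this is not the paper's sketch-based engine, which handles full characters, multiple sketch targets, and view-dependent sketch spaces.

    # Minimal, hypothetical illustration of sketch-as-projective-constraint editing:
    # each 2D sample of a user stroke constrains where the projected end effector
    # of a toy 2-link chain should appear, and we solve for per-frame joint angles
    # with a prior toward the original motion and a temporal smoothness term.
    import numpy as np
    from scipy.optimize import least_squares

    T = 30            # number of animation frames being edited (assumption)
    FOCAL = 800.0     # toy pinhole focal length in pixels (assumption)

    def two_link_fk(angles):
        """Forward kinematics of a toy 2-link chain in the XY plane,
        placed at z = 5 in camera space; returns the end-effector 3D position."""
        a1, a2 = angles
        elbow = np.array([np.cos(a1), np.sin(a1), 0.0])
        wrist = elbow + np.array([np.cos(a1 + a2), np.sin(a1 + a2), 0.0])
        return wrist + np.array([0.0, 0.0, 5.0])

    def project(p):
        """Pinhole projection of a camera-space point onto the image plane."""
        return FOCAL * p[:2] / p[2]

    def residuals(x, sketch_2d, angles_orig, w_sketch=1.0, w_prior=0.1, w_smooth=1.0):
        """Stack (a) screen-space sketch constraints, (b) a prior keeping the
        solution near the original motion, and (c) a temporal smoothness term."""
        angles = x.reshape(T, 2)
        res = [w_sketch * (project(two_link_fk(angles[t])) - sketch_2d[t]) for t in range(T)]
        res.append(w_prior * (angles - angles_orig).ravel())
        res.append(w_smooth * (angles[2:] - 2 * angles[1:-1] + angles[:-2]).ravel())
        return np.concatenate([np.ravel(r) for r in res])

    # Original motion: a gentle wave of the toy arm.
    ts = np.linspace(0.0, 1.0, T)
    angles_orig = np.stack([0.5 * np.sin(2 * np.pi * ts), 0.3 + 0.0 * ts], axis=1)

    # A stand-in "user stroke": the original wrist trajectory, dragged upward on screen.
    sketch_2d = np.array([project(two_link_fk(a)) for a in angles_orig])
    sketch_2d[:, 1] += 40.0

    sol = least_squares(residuals, angles_orig.ravel(), args=(sketch_2d, angles_orig))
    angles_new = sol.x.reshape(T, 2)
    proj_new = np.array([project(two_link_fk(a)) for a in angles_new])
    print("mean screen-space error (px):", np.mean(np.linalg.norm(proj_new - sketch_2d, axis=1)))

    In this toy setup the prior and smoothness weights play the role of keeping the edit close to the source animation while the stroke is matched in the current view; the actual system additionally derives the temporal extent of the edit and the constraint set from the view-dependent sketch space.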


