“ChoreoMaster: choreography-oriented music-driven dance synthesis” by Chen, Tan, Lei, Zhang, Guo, et al.

  • Kang Chen, Zhipeng Tan, Jin Lei, Song-Hai Zhang, Yuan-Chen Guo, Weidong Zhang, and Shi-Min Hu

Conference:

    SIGGRAPH 2021
Type:

    Technical Paper
Title:

    ChoreoMaster: choreography-oriented music-driven dance synthesis

Presenter(s)/Author(s):

    Kang Chen, Zhipeng Tan, Jin Lei, Song-Hai Zhang, Yuan-Chen Guo, Weidong Zhang, Shi-Min Hu

Abstract:


    Despite strong demand in the game and film industry, automatically synthesizing high-quality dance motions remains a challenging task. In this paper, we present ChoreoMaster, a production-ready music-driven dance motion synthesis system. Given a piece of music, ChoreoMaster can automatically generate a high-quality dance motion sequence to accompany the input music in terms of style, rhythm and structure. To achieve this goal, we introduce a novel choreography-oriented choreomusical embedding framework, which successfully constructs a unified choreomusical embedding space for both style and rhythm relationships between music and dance phrases. The learned choreomusical embedding is then incorporated into a novel choreography-oriented graph-based motion synthesis framework, which can robustly and efficiently generate high-quality dance motions following various choreographic rules. Moreover, as a production-ready system, ChoreoMaster is sufficiently controllable and comprehensive for users to produce desired results. Experimental results demonstrate that dance motions generated by ChoreoMaster are accepted by professional artists.
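    To make the graph-based synthesis stage concrete, the sketch below shows how phrase-level matching in a shared choreomusical embedding space could be combined with transition costs and solved by Viterbi-style dynamic programming (cf. Forney [10]). This is a minimal illustration under assumed inputs, not the authors' implementation: the function name, the Euclidean matching cost, and the precomputed transition_cost matrix stand in for the paper's learned embeddings and choreographic rules.

    import numpy as np

    def synthesize_dance(music_emb, dance_emb, transition_cost, w_trans=1.0):
        """Pick one dance phrase per music phrase via Viterbi-style DP.

        music_emb:       (T, D) embeddings of the T music phrases
        dance_emb:       (N, D) embeddings of the N candidate dance phrases
        transition_cost: (N, N) cost of concatenating dance phrase j after i
        Returns a list of T chosen dance-phrase indices.
        """
        T, N = len(music_emb), len(dance_emb)
        # Matching cost: distance between music and dance phrases in the
        # shared choreomusical embedding space (style/rhythm proximity).
        match = np.linalg.norm(
            music_emb[:, None, :] - dance_emb[None, :, :], axis=-1)  # (T, N)

        cost = match[0].copy()            # best path cost ending in each phrase
        back = np.zeros((T, N), dtype=int)
        for t in range(1, T):
            # total[i, j]: best path ending at phrase i, then moving to j
            total = cost[:, None] + w_trans * transition_cost
            back[t] = np.argmin(total, axis=0)
            cost = total[back[t], np.arange(N)] + match[t]

        # Backtrack the minimum-cost path from the best final phrase.
        path = [int(np.argmin(cost))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    Calling synthesize_dance with a (T, D) music-embedding array, an (N, D) dance-embedding array, and an (N, N) transition matrix returns T phrase indices; in the actual system these costs would come from the learned embedding networks and the motion graph's connectivity and choreographic constraints.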

References:


    1. Omid Alemi, Jules Françoise, and Philippe Pasquier. 2017. GrooveNet: Real-time music-driven dance movement generation using artificial neural networks. Networks 8, 17 (2017), 26.
    2. Okan Arikan and David A Forsyth. 2002. Interactive motion generation from examples. ACM Trans. Graph. 21, 3 (2002), 483–490.
    3. Alexander Berman and Valencia James. 2015. Kinetic imaginations: exploring the possibilities of combining AI and dance. In Proc. of IJCAI.
    4. Sebastian Böck and Gerhard Widmer. 2013. Maximum filter vibrato suppression for onset detection. In Proc. of the Int. Conf. on DAFx, Vol. 7 (2013).
    5. Keunwoo Choi, György Fazekas, Mark Sandler, and Kyunghyun Cho. 2017. Convolutional recurrent neural networks for music classification. In Proc. of ICASSP. IEEE, 2392–2396.
    6. Abe Davis and Maneesh Agrawala. 2018. Visual Rhythm and Beat. ACM Trans. Graph. 37, 4, Article 122 (July 2018), 11 pages.
    7. Yinglin Duan, Tianyang Shi, Zhengxia Zou, Jia Qin, Yifei Zhao, Yi Yuan, Jie Hou, Xiang Wen, and Changjie Fan. 2020. Semi-Supervised Learning for In-Game Expert-Level Music-to-Dance Translation. arXiv preprint arXiv:2009.12763 (2020).
    8. Rukun Fan, Songhua Xu, and Weidong Geng. 2011. Example-based automatic music-driven conventional dance motion synthesis. IEEE TVCG 18, 3 (2011), 501–515.
    9. João P Ferreira, Thiago M Coutinho, Thiago L Gomes, José F Neto, Rafael Azevedo, Renato Martins, and Erickson R Nascimento. 2020. Learning to dance: A graph convolutional adversarial network to generate realistic dance motions from audio. Computers & Graphics 94 (2020), 11–21.
    10. G David Forney. 1973. The Viterbi algorithm. Proc. of the IEEE 61, 3 (1973), 268–278.
    11. Satoru Fukayama and Masataka Goto. 2015. Music content driven automated choreography with beat-wise motion connectivity constraints. In Proc. of SMC (2015), 177–183.
    12. Mikel Gainza. 2009. Automatic musical meter detection. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 329–332.
    13. Radek Grzeszczuk, Demetri Terzopoulos, and Geoffrey Hinton. 1998. NeuroAnimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques. 9–20.
    14. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in neural information processing systems 30 (2017), 6626–6637.
    15. Daniel Holden, Jun Saito, and Taku Komura. 2016. A Deep Learning Framework for Character Motion Synthesis and Editing. ACM Trans. Graph. 35, 4, Article 138 (July 2016), 11 pages.
    16. Jae Woo Kim, Hesham Fouad, and James K Hahn. 2006. Making Them Dance. In AAAI Fall Symposium: Aurally Informed Performance, Vol. 2.
    17. Tae-hoon Kim, Sang Il Park, and Sung Yong Shin. 2003. Rhythmic-Motion Synthesis Based on Motion-Beat Analysis. ACM Trans. Graph. 22, 3 (July 2003), 392–401.
    18. Lucas Kovar, Michael Gleicher, and Frédéric Pighin. 2002. Motion Graphs. ACM Trans. Graph. 21, 3 (July 2002), 473–482.
    19. Alexis Lamouret and Michiel van de Panne. 1996. Motion Synthesis By Example. In Computer Animation and Simulation ’96, Ronan Boulic and Gerard Hégron (Eds.). Springer Vienna, Vienna, 199–212.
    20. Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, and Jan Kautz. 2019. Dancing to Music. In Advances in NIPS. 3581–3591.
    21. Jehee Lee, Jinxiang Chai, Paul SA Reitsma, Jessica K Hodgins, and Nancy S Pollard. 2002. Interactive control of avatars animated with human motion data. In Proceedings of the 29th annual conference on Computer graphics and interactive techniques. 491–500.
    22. Minho Lee, Kyogu Lee, and Jaeheung Park. 2013. Music similarity-based approach to generating dance motion sequence. Multimedia tools and applications 62, 3 (2013), 895–912.
    23. Sung-Hee Lee and Demetri Terzopoulos. 2006. Heads Up! Biomechanical Modeling and Neuromuscular Control of the Neck. ACM Trans. Graph. 25, 3 (2006), 1188–1198.
    24. Ruilong Li, Shan Yang, David A. Ross, and Angjoo Kanazawa. 2021. Learn to Dance with AIST++: Music Conditioned 3D Dance Generation. arXiv:cs.CV/2101.08779.
    25. Adriano Manfrè, Ignazio Infantino, Filippo Vella, and Salvatore Gaglio. 2016. An automatic system for humanoid dance creation. Biologically Inspired Cognitive Architectures 15 (2016), 1–9.
    26. Paul H Mason. 2012. Music, dance and the total art work: choreomusicology in theory and practice. Research in dance education 13, 1 (2012), 5–24.
    27. Mohd Anis Md Nor and Kendra Stepputat. 2016. Sounding the Dance, Moving the Music: Choreomusicological Perspectives on Maritime Southeast Asian Performing Arts. Routledge.
    28. Ferda Ofli, Yasemin Demir, Yücel Yemez, Engin Erzin, A Murat Tekalp, Koray Balcı, İdil Kızoğlu, Lale Akarun, Cristian Canton-Ferrer, Joëlle Tilmanne, et al. 2008. An audio-driven dancing avatar. Journal on Multimodal User Interfaces 2, 2 (2008), 93–103.
    29. Ferda Ofli, Engin Erzin, Yücel Yemez, and A Murat Tekalp. 2011. Learn2dance: Learning statistical music-to-dance mappings for choreography synthesis. IEEE TMM 14, 3 (2011), 747–759.
    30. Xuanchi Ren, Haoran Li, Zijian Huang, and Qifeng Chen. 2020. Self-supervised Dance Video Synthesis Conditioned on Music. In Proc. of the 28th ACM MM. 46–54.
    31. Joan Serra, Meinard Müller, Peter Grosche, and Josep Lluis Arcos. 2012. Unsupervised detection of music boundaries by time series structure features. In AAAI.
    32. Joan Serra, Meinard Müller, Peter Grosche, and Josep Ll Arcos. 2014. Unsupervised music structure annotation by time series structure features and segment similarity. IEEE TMM 16, 5 (2014), 1229–1240.
    33. Takaaki Shiratori and Katsushi Ikeuchi. 2008. Synthesis of dance performance based on analyses of human motion and music. Information and Media Technologies 3, 4 (2008), 834–847.
    34. Takaaki Shiratori, Atsushi Nakazawa, and Katsushi Ikeuchi. 2006. Dancing-to-music character animation. In Computer Graphics Forum, Vol. 25. Wiley Online Library, 449–458.
    35. Guofei Sun, Yongkang Wong, Zhiyong Cheng, Mohan S Kankanhalli, Weidong Geng, and Xiangdong Li. 2020. DeepDance: Music-to-Dance Motion Choreography with Adversarial Learning. IEEE TMM (2020).
    36. Taoran Tang, Jia Jia, and Hanyang Mao. 2018. Dance with melody: An LSTM-autoencoder approach to music-oriented dance synthesis. In Proc. of the ACM MM. 1598–1606.
    37. Junyuan Xie, Ross Girshick, and Ali Farhadi. 2016. Unsupervised deep embedding for clustering analysis. In International conference on machine learning. 478–487.
    38. Sijie Yan, Zhizhong Li, Yuanjun Xiong, Huahan Yan, and Dahua Lin. 2019. Convolutional sequence generation for skeleton-based action synthesis. In Proc. of the IEEE ICCV. 4394–4402.
    39. Yanzhe Yang, Jimei Yang, and Jessica Hodgins. 2020. Statistics-based Motion Synthesis for Social Conversations. In Computer Graphics Forum, Vol. 39. Wiley Online Library, 201–212.
    40. Zijie Ye, Haozhe Wu, Jia Jia, Yaohua Bu, Wei Chen, Fanbo Meng, and Yanfeng Wang. 2020. ChoreoNet: Towards Music to Dance Synthesis with Choreographic Action Unit. Association for Computing Machinery, New York, NY, USA, 744–752.
    41. Wenlin Zhuang, Congyi Wang, Siyu Xia, Jinxiang Chai, and Yangang Wang. 2020. Music2Dance: DanceNet for Music-driven Dance Generation. arXiv:cs.CV/2002.03761.

