“Becoming: An Interactive Musical Journey in VR” by Yadegari, Burnett, Murakami, Pisha, Talenti, et al. …


  • ©Shahrokh Yadegari, John Burnett, Eito Murakami, Louis Pisha, Francesca Talenti, and Juliette Regimbal

Conference:


  • SIGGRAPH 2022

Entry Number: 02

Title:


    Becoming: An Interactive Musical Journey in VR

Program Title:


    Immersive Pavilion

Description:


    Becoming is an operatic VR piece based on a Persian poem by Mowlana Rumi, centered on the spiritual evolution of humans on Earth. The piece's artistic expression takes advantage of an advanced ray-tracing audio spatialization system (Space3D), which can create realistic spatial impressions within changing acoustic environments in real time. The user can interact with the environment and influence the progression of the music by touching various elements and by changing the spatialization paths and speeds of different layers of the music. Two audience members can be connected over the network and interact with each other through haptic effects.
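    The real-time spatialization the description refers to can be illustrated with a deliberately simple toy model (this is not Space3D's ray-tracing algorithm, and all names below are hypothetical): per-ear inverse-distance attenuation plus propagation delay for a point source, the kind of cue a full spatializer would compute per reflection path.

    ```python
    import math

    SPEED_OF_SOUND = 343.0  # metres per second, at roughly room temperature

    def spatialize(source, listener, ear_offset=0.09):
        """Return {"left": (gain, delay_s), "right": (gain, delay_s)}.

        A minimal sketch: inverse-distance gain (clamped so gain <= 1
        near the source) and straight-line propagation delay per ear.
        Positions are (x, y) in metres; ears sit ear_offset metres
        either side of the listener along the x axis.
        """
        sx, sy = source
        lx, ly = listener
        ears = {"left": (lx - ear_offset, ly), "right": (lx + ear_offset, ly)}
        cues = {}
        for name, (ex, ey) in ears.items():
            dist = max(math.hypot(sx - ex, sy - ey), 1e-6)  # avoid divide-by-zero
            gain = 1.0 / max(dist, 1.0)
            delay = dist / SPEED_OF_SOUND
            cues[name] = (gain, delay)
        return cues
    ```

    For a source 10 m to the listener's right, the right ear receives a slightly louder, earlier signal than the left; a ray-tracing system such as the one described would add many such contributions, one per reflected and diffracted path through the changing environment.
    
    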

