“Sound Scope Phone: Focusing Parts by Natural Movement” by Hamanaka
Conference:
Type(s):
Title:
- Sound Scope Phone: Focusing Parts by Natural Movement
Developer(s):
Description:
This paper describes Sound Scope Phone, an application that lets listeners emphasize the part they want to hear in a song consisting of multiple parts by turning their head or by hand gestures. The previously proposed interface required special headphones equipped with a digital compass and a distance sensor to detect the direction of the head and the distance between the head and a hand, respectively. Sound Scope Phone instead detects head direction by integrating face-tracking information from the front camera of a commercially available smartphone with data from the built-in acceleration/gyro sensor. The application is published on the Apple App Store under the name SoundScopePhone.
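
The head-direction estimate described above combines two independent cues: the face pose seen by the front camera and the phone's own attitude from the inertial sensors. The following Swift sketch illustrates one plausible shape of such a pipeline; it is not the paper's implementation, and the class name (PartMixer), the per-part azimuth table, the yaw fusion, and the linear gain falloff are all illustrative assumptions. It pairs ARKit face tracking with CoreMotion attitude data and scales the volume of per-part AVAudioPlayerNodes according to how closely the listener faces each part.

```swift
// A minimal sketch, assuming each instrumental part is a separate audio
// file and that head yaw can be approximated by combining ARKit face
// tracking with the device attitude. Not the authors' implementation.
import ARKit
import AVFoundation
import CoreMotion

final class PartMixer: NSObject, ARSessionDelegate {
    private let engine = AVAudioEngine()
    private var players: [AVAudioPlayerNode] = []
    // Hypothetical layout: azimuth (radians) assigned to each part,
    // e.g. vocals ahead, guitar left, bass right, drums behind.
    private let partAzimuths: [Float] = [0.0, -.pi / 2, .pi / 2, .pi]

    private let session = ARSession()
    private let motion = CMMotionManager()
    private var deviceYaw: Float = 0   // from the attitude (accel/gyro) stream
    private var faceYaw: Float = 0     // from front-camera face tracking

    func start(partURLs: [URL]) throws {
        // One player node per part, all mixed into the main mixer.
        for url in partURLs {
            let file = try AVAudioFile(forReading: url)
            let player = AVAudioPlayerNode()
            engine.attach(player)
            engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
            player.scheduleFile(file, at: nil)
            players.append(player)
        }
        try engine.start()
        players.forEach { $0.play() }

        // Front-camera face tracking reports the head pose.
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())

        // Device attitude compensates for the phone itself being rotated.
        motion.startDeviceMotionUpdates(to: .main) { [weak self] dm, _ in
            guard let self, let dm else { return }
            self.deviceYaw = Float(dm.attitude.yaw)
            self.updateMix()
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // Yaw of the face's outward axis (column 2 of the pose matrix).
        let z = face.transform.columns.2
        faceYaw = atan2(z.x, z.z)
        updateMix()
    }

    private func updateMix() {
        // Naive fusion: treat head direction as face yaw plus device yaw.
        let headYaw = faceYaw + deviceYaw
        for (player, azimuth) in zip(players, partAzimuths) {
            // Emphasize the part the listener faces; attenuate the others.
            var d = abs(headYaw - azimuth).truncatingRemainder(dividingBy: 2 * .pi)
            if d > .pi { d = 2 * .pi - d }          // wrap to [0, pi]
            player.volume = max(0.1, 1 - d / .pi)   // simple linear falloff
        }
    }
}
```

A real implementation would also have to smooth the two yaw estimates, handle the camera losing the face, and reconcile ARKit's coordinate conventions; the naive addition above is only meant to show where the two sensor streams meet.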
References:
- Camille Goudeseune and Hank Kaczmarski. 2001. Composing Outdoor Augmented-Reality Sound Environments. In Proceedings of the International Computer Music Conference. 83–86.
- Masatoshi Hamanaka. 2006. Music Scope Headphones: Natural User Interface for Selection of Music. In Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR 2006), Victoria, Canada, October 8–12, 2006. 302–307.
- Masatoshi Hamanaka and SeungHee Lee. 2009. Sound Scope Headphones. In ACM SIGGRAPH 2009 Emerging Technologies (New Orleans, Louisiana) (SIGGRAPH ’09). Association for Computing Machinery, New York, NY, USA, Article 21, 1 page. https://doi.org/10.1145/1597956.1597977
- Olivier Warusfel and Gerhard Eckel. 2004. LISTEN – Augmenting Everyday Environments Through Interactive Soundscapes. In Proceedings of the IEEE Workshop on VR for Public Consumption, IEEE Virtual Reality. 268–275.
- OpenAL. n.d. Cross Platform 3D Audio. https://www.openal.org/. (Accessed on May 25, 2022).
- François Pachet and Olivier Delerue. 1998. A Mixed 2D/3D Interface for Music Spatialization. In Virtual Worlds, Jean-Claude Heudin (Ed.). Springer Berlin Heidelberg, Berlin, Heidelberg, 298–307.
- François Pachet and Olivier Delerue. 2000. On-the-Fly Multi-Track Mixing. In Proceedings of the 109th AES Convention, Los Angeles. Audio Engineering Society.
- Jiann-Rong Wu, Cha-Dong Duh, Ming Ouhyoung, and Jei-Tun Wu. 1997. Head Motion and Latency Compensation on Localization of 3D Sound in Virtual Reality. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Lausanne, Switzerland) (VRST ’97). Association for Computing Machinery, New York, NY, USA, 15–20. https://doi.org/10.1145/261135.261140