“Enhancing Three-Dimensional Vision With Three-Dimensional Sound” by Stampfl and Dobler

©Philipp Stampfl and Daniel Dobler



Entry Number: 24


    Enhancing Three-Dimensional Vision With Three-Dimensional Sound

Course Organizer(s):



    Prerequisites
    Knowledge of basic acoustics (frequency, amplitude, etc.) and digital sound (sampling, audio formats, etc.). Familiarity with the basics of programming and experience with a programming language such as C or C++ are recommended but not required.

    Intended Audience
    Developers of VR/AR systems, and anyone interested in real-time acoustics.

    A thorough introduction to three-dimensional, multi-channel sound. Three-dimensional sound has been neglected in most VR and AR applications, even though it can significantly enhance their realism and immersion. This course explains the main concepts and the most important terms, and provides a detailed overview of the currently available hardware and software. It combines theoretical and practical knowledge of how to apply these technologies in VR and AR systems.

    The course begins with a presentation of the history and development of multi-channel and 3D-sound, and an explanation of the differences between the two terms. As a second theoretical component, the basics of spatial hearing and 3D-sound synthesis are explained. The next two sections deal with practical issues: an overview and comparison of current 3D-sound engines, and a description of currently available sound hardware. In the virtualization section, the course presents a detailed description of the features and background of the techniques implemented in 3D-sound engines, including their basic spatial-audio algorithms and the fundamentals of reverberation engines. The course concludes with tips and tricks for implementing different sound engines in different VR and AR development systems, and a brief summary of the 3deSoundBox, an external acoustic virtualization tool implemented by one of the presenters.
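    The spatial-hearing basics mentioned above rest on binaural cues such as the interaural time difference (ITD): a sound arrives slightly earlier at the nearer ear, and 3D-sound engines synthesize this delay to place a source in space. As a minimal illustrative sketch (not taken from the course notes), Woodworth's classic approximation estimates the ITD from the source azimuth; the head radius and speed of sound below are typical assumed values.

```python
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius (assumed value)
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def itd_seconds(azimuth_deg: float) -> float:
    """Interaural time difference for a distant source, per Woodworth's
    approximation: ITD = a * (theta + sin(theta)) / c, where theta is the
    azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS_M * (theta + math.sin(theta)) / SPEED_OF_SOUND

# A source directly to the side yields the maximal ITD, roughly 0.65 ms.
print(f"{itd_seconds(90.0) * 1e6:.0f} microseconds")
```

    A frontal source (azimuth 0) gives an ITD of zero, and the delay grows monotonically as the source moves toward the side; real engines combine this cue with level differences and HRTF filtering, which the course treats in the synthesis section.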
