While HRTFs help identify a sound’s direction, they do not by themselves convey its distance. Several factors affect how humans infer the distance to a sound source, and each can be simulated with different levels of accuracy and computational cost. These are loudness, initial time delay, the ratio of direct to reverberant sound, motion parallax, and high-frequency attenuation. <ref name="3"></ref>
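As a rough illustration of how some of these cues can be approximated, the sketch below (Python, with hypothetical function and parameter names and illustrative values, not taken from any particular SDK) models three of them for a mono source: inverse-distance loudness, the initial propagation delay, and high-frequency attenuation via a simple one-pole low-pass.

<syntaxhighlight lang="python">
import numpy as np

def apply_distance_cues(signal, distance_m, sample_rate=48000,
                        ref_distance_m=1.0, absorption_hz_per_m=500.0):
    """Shape a mono signal to suggest source distance.

    Models three of the cues listed above:
    - loudness: gain falls off with the inverse of distance
    - initial time delay: arrival is delayed by distance / speed of sound
    - high-frequency attenuation: a one-pole low-pass whose cutoff drops
      as the source moves away (a crude stand-in for air absorption)
    """
    sig = np.asarray(signal, dtype=float)

    # Inverse-distance loudness falloff, clamped so gain never exceeds 1.
    gain = ref_distance_m / max(distance_m, ref_distance_m)

    # Cutoff frequency shrinks with distance (numbers are illustrative).
    cutoff_hz = max(2000.0, 20000.0 - absorption_hz_per_m * distance_m)
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)

    # One-pole low-pass filter applied sample by sample.
    out = np.empty_like(sig)
    state = 0.0
    for i, x in enumerate(sig):
        state += alpha * (x - state)
        out[i] = gain * state

    # Propagation delay: sound travels at roughly 343 m/s in air.
    delay = int(round(distance_m / 343.0 * sample_rate))
    return np.concatenate([np.zeros(delay), out])
</syntaxhighlight>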
 
==Google and Valve’s VR audio==
Google uses a technology called ambisonics to simulate sounds coming from virtual objects. The system surrounds the user with a large number of virtual loudspeakers that reproduce sound waves arriving from all directions in the VR environment. The accuracy of the synthesized sound field increases with the number of virtual loudspeakers, and the loudspeakers themselves are rendered to headphones using HRTFs. <ref name="7"></ref>
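The sketch below is only a rough illustration of that virtual-loudspeaker idea, not Google’s actual implementation: it assumes a first-order ambisonic signal (W, X, Y, Z channels), a horizontal ring of virtual speakers, and per-speaker HRTF impulse responses supplied by the caller, and the decoder coefficients and function names are illustrative.

<syntaxhighlight lang="python">
import numpy as np

def decode_to_virtual_speakers(ambi_wxyz, speaker_azimuths_rad):
    """Decode a first-order ambisonic signal (rows W, X, Y, Z) into one
    mono feed per virtual loudspeaker placed on the horizontal plane."""
    w, x, y, z = ambi_wxyz  # each row is a 1-D array of samples
    feeds = []
    for az in speaker_azimuths_rad:
        # Basic "sampling" decoder: project the sound field onto the
        # direction of each virtual speaker (z is unused on a flat ring).
        feeds.append(0.5 * (w + np.cos(az) * x + np.sin(az) * y))
    return feeds

def render_binaural(feeds, hrtf_pairs):
    """Convolve each virtual-speaker feed with the HRTF pair measured for
    that speaker's direction, then mix everything down to two ears."""
    length = max(len(f) + max(len(ir_l), len(ir_r)) - 1
                 for f, (ir_l, ir_r) in zip(feeds, hrtf_pairs))
    left, right = np.zeros(length), np.zeros(length)
    for feed, (ir_l, ir_r) in zip(feeds, hrtf_pairs):
        l = np.convolve(feed, ir_l)
        r = np.convolve(feed, ir_r)
        left[:len(l)] += l
        right[:len(r)] += r
    return left, right
</syntaxhighlight>

Adding more virtual speakers (with HRTFs measured for their directions) raises the spatial accuracy described above, at the cost of additional convolutions per audio block.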
 
[[Valve]] has made available the [[Steam]] Audio SDK, a free option for developers who want to use VR audio in their VR apps. Steam Audio supports Unity and [[Unreal Engine]], and is available for Windows, Linux, macOS, and Android. Furthermore, it is not restricted to a specific VR device or to Steam. In a statement, Valve said that “Steam Audio is an advanced spatial audio solution that uses physics-based sound propagation in addition to HRTF-based binaural audio for increased immersion. Spatial audio significantly improves immersion in VR; adding physics-based sound propagation further improves the experience by consistently recreating how sound interacts with the virtual environment.” <ref name="4"></ref>
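Physics-based propagation covers effects such as occlusion, reflection, and reverberation. The fragment below is not Steam Audio’s API; it only sketches the simplest of these effects, assuming the engine exposes a hypothetical line-of-sight ray test, by attenuating and muffling a source whose direct path to the listener is blocked.

<syntaxhighlight lang="python">
def occlusion_gain_and_cutoff(source_pos, listener_pos, ray_blocked,
                              occluded_gain=0.3, occluded_cutoff_hz=1000.0):
    """Return (gain, low-pass cutoff in Hz) for one source.

    `ray_blocked(a, b)` is assumed to be supplied by the game engine and
    returns True when geometry blocks the straight line between a and b.
    The occluded gain and cutoff values are illustrative defaults.
    """
    if ray_blocked(source_pos, listener_pos):
        # Direct path is blocked: attenuate and muffle the source.
        return occluded_gain, occluded_cutoff_hz
    # Unobstructed path: full level, no extra filtering.
    return 1.0, 20000.0
</syntaxhighlight>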
 
==History==
Before the current emergence of VR, interest in 3D audio was relatively low. Although sound has consistently improved over the years in terms of fidelity and signal-to-noise ratio, the real-time modeling of sound in a 3D space has not seen the same level of consistent development. The true challenge for VR audio has been “reproducing the dynamic behavior of sound in a 3D space in real time.” The positions of the sound source and the listener have to be computed in a 3D space (spatialization), so that as those positions change, the prerecorded audio samples are altered to match the new spatial relationship. Besides spatialization, the system also has to account for the modifications a sound undergoes as it travels through an environment: it can be reflected, absorbed, blocked, or made to echo. These effects are collectively called audio ambiance, and accounting for all of them is computationally intensive. <ref name="1"></ref>
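A minimal sketch of the spatialization step, using generic vector math not tied to any particular engine or SDK, is the function below: it re-derives a source’s azimuth, elevation, and distance in the listener’s local frame, which is what must be recomputed every time either one moves so that the appropriate HRTF and distance cues can be applied.

<syntaxhighlight lang="python">
import numpy as np

def source_in_listener_frame(listener_pos, listener_forward, listener_up,
                             source_pos):
    """Return (azimuth, elevation, distance) of a source relative to the
    listener (right-handed convention assumed), so HRTF filters and distance
    cues can be re-selected whenever the source or the listener moves."""
    to_src = np.asarray(source_pos, float) - np.asarray(listener_pos, float)
    distance = np.linalg.norm(to_src)
    if distance == 0.0:
        return 0.0, 0.0, 0.0
    # Build the listener's local axes: forward, up, and right.
    fwd = np.asarray(listener_forward, float)
    fwd = fwd / np.linalg.norm(fwd)
    up = np.asarray(listener_up, float)
    up = up / np.linalg.norm(up)
    right = np.cross(fwd, up)
    # Express the unit direction to the source in that frame.
    d = to_src / distance
    azimuth = np.arctan2(np.dot(d, right), np.dot(d, fwd))
    elevation = np.arcsin(np.clip(np.dot(d, up), -1.0, 1.0))
    return azimuth, elevation, distance
</syntaxhighlight>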
 
The capacity to create immersive, realistic VR audio already existed in the 1990s, with a technology called A3D 2.0, developed by a company called Aureal. Mark Chase, in an article written for PC Gamer, said that “much of this technology relied on head-related transfer functions (or HRTFs), mathematical algorithms that take into account how sound from a 3D source enters the head based on ear and upper-body shape. This essentially helps replicate the auditory cues that allow us to pinpoint, or localize, where a sound is coming from.” <ref name="1"></ref>
 
The development of 3D audio was then set back by a patent-infringement lawsuit brought by Creative against Aureal. The cost of the legal action damaged Aureal financially, leaving the company too crippled to continue. Creative would go on to continue research on 3D audio, built on the backbone of DirectSound and DirectSound3D. <ref name="1"></ref>
 
DirectSound and DirectSound3D created a standardized, unified environment for 3D audio, helping it grow as a technology and be easily used by developers. It also allowed for the hardware acceleration of 3D sound. When Microsoft released Windows Vista, it stopped supporting DirectSound3D, affecting years of development by Creative. <ref name="1"></ref>
 
With the advent of VR, however, audio that can truly simulate natural sound has become a research priority. In 2014, Oculus licensed VisiSonics’ RealSpace 3D audio technology, incorporating it into the Oculus Audio SDK. This technology follows the same principle that Aureal’s system used decades before, relying on custom HRTFs to recreate accurate spatialization over headphones. <ref name="1"></ref>
==Microphones==