{"id":609111,"date":"2019-08-30T00:00:25","date_gmt":"2019-08-30T07:00:25","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=609111"},"modified":"2019-09-17T08:58:03","modified_gmt":"2019-09-17T15:58:03","slug":"efficient-and-perceptually-plausible-3-d-sound-for-virtual-reality","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/efficient-and-perceptually-plausible-3-d-sound-for-virtual-reality\/","title":{"rendered":"Efficient and Perceptually Plausible 3-D Sound For Virtual Reality"},"content":{"rendered":"
Due to the high computational cost of rendering 3-D graphics for virtual reality, 3-D sound rendering is typically limited to a small fraction of the total compute power. At the same time, it should be perceptually plausible and enhance the listeners’ sense of presence and immersion. One approach to meeting these goals is a parametric representation of spatial sound fields that estimates perceptually relevant aspects in an offline encoding step and efficiently decodes the 3-D sound in real time. A common parametric model includes the time, level, and direction of arrival of the first sound and early reflections, as well as a description of the late reverberation in terms of its level and decay rate. However, rendering all early reflections would be costly. In this talk, we present an end-to-end pipeline for detecting and rendering perceptually relevant early reflections and evaluate its quality as a function of the number of included early reflections.
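To make the parametric representation concrete, here is a minimal Python sketch of the encode/decode split under the model described above. It is illustrative only, not the pipeline from the talk: the ParametricSoundField structure, the level-based select_reflections proxy for perceptual relevance, and the mono decoder are all assumptions, and a real renderer would spatialize each arrival by its direction (for example with HRTFs) rather than mixing to mono.

```python
# A minimal sketch of the parametric model described above, not the speakers'
# actual pipeline: each early reflection is reduced to an arrival time, a
# level, and a direction; the late reverberation to a level and decay rate.
# Selecting reflections by level is a crude stand-in for a real
# perceptual-relevance metric; all names here are illustrative.
from dataclasses import dataclass

import numpy as np


@dataclass
class Reflection:
    delay_s: float         # time of arrival relative to source emission
    level: float           # linear gain
    direction: np.ndarray  # unit vector toward the arrival direction


@dataclass
class ParametricSoundField:
    direct: Reflection        # the first (direct) sound
    early: list[Reflection]   # early reflections found by offline encoding
    reverb_level: float       # initial level of the late reverberation
    reverb_decay_s: float     # time for the tail to decay by 60 dB (T60)


def select_reflections(field: ParametricSoundField, n: int) -> list[Reflection]:
    """Keep only the n strongest early reflections (relevance proxy)."""
    return sorted(field.early, key=lambda r: r.level, reverse=True)[:n]


def render_mono(field: ParametricSoundField, dry: np.ndarray,
                sr: int, n_reflections: int) -> np.ndarray:
    """Decode the parametric field to mono, ignoring arrival directions."""
    tail_len = int(field.reverb_decay_s * sr)
    max_delay = max(r.delay_s for r in [field.direct] + field.early)
    out = np.zeros(int(max_delay * sr) + len(dry) + tail_len)
    # Direct sound plus the selected early reflections, delayed and scaled.
    for r in [field.direct] + select_reflections(field, n_reflections):
        start = int(r.delay_s * sr)
        out[start:start + len(dry)] += r.level * dry
    # Late reverberation as exponentially decaying noise (-60 dB at T60),
    # starting when the direct sound arrives.
    start = int(field.direct.delay_s * sr)
    t = np.arange(tail_len) / sr
    envelope = field.reverb_level * 10.0 ** (-3.0 * t / field.reverb_decay_s)
    out[start:start + tail_len] += envelope * np.random.randn(tail_len)
    return out
```

Varying n_reflections in this sketch mirrors the evaluation described in the talk: each additional reflection adds one delay-and-scale pass to the real-time decode, so quality can be traded against compute by truncating the reflection list.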