Fast acoustic scattering using convolutional neural networks

Microsoft Research publication. Published 2020-04-19; last modified 2021-09-27.
https://www.microsoft.com/en-us/research/publication/fast-acoustic-scattering-using-convolutional-neural-networks/
Diffracted scattering and occlusion are important acoustic effects in interactive auralization and noise control applications, typically requiring expensive numerical simulation. We propose training a convolutional neural network to map from a convex scatterer's cross-section to a 2D slice of the resulting spatial loudness distribution. We show that employing a full-resolution residual network for the resulting image-to-image regression problem yields spatially detailed loudness fields with a root-mean-squared error of less than 1 dB, at over 100x speedup compared to full wave simulation.
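The abstract's accuracy criterion is root-mean-squared error of the predicted loudness field, measured in dB against a full wave simulation. A minimal sketch of that metric is below; this is an illustration, not the paper's code, and the function name and sample field values are hypothetical:

```python
import math

def loudness_rmse_db(predicted_db, reference_db):
    """RMSE, in dB, between a predicted loudness field and a reference
    field from full wave simulation (both given as flat lists of dB values)."""
    squared_errors = [(p - r) ** 2 for p, r in zip(predicted_db, reference_db)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical 4-sample fields: every prediction is off by 0.5 dB,
# so the RMSE is exactly 0.5 dB -- under the paper's 1 dB threshold.
reference = [60.0, 58.0, 55.0, 52.0]
prediction = [x + 0.5 for x in reference]
print(loudness_rmse_db(prediction, reference))  # 0.5
```

In the paper the comparison is over a 2D loudness slice; flattening that slice to a list, as here, leaves the RMSE unchanged.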