{"id":364265,"date":"2017-02-15T17:08:27","date_gmt":"2017-02-16T01:08:27","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=364265"},"modified":"2022-01-21T13:15:43","modified_gmt":"2022-01-21T21:15:43","slug":"nn-speech-enhancement","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/nn-speech-enhancement\/","title":{"rendered":"Neural Networks-based Speech Enhancement"},"content":{"rendered":"

<h2>Summary<\/h2>\n

<p>Decades of research in audio signal processing have led to performance saturation. However, recent advances in artificial intelligence (AI) and machine learning (ML) provide a new opportunity to advance the state of the art. In this project, one of the first problems we focus on is enhancing speech signals as they are captured by microphones. Speech enhancement is a precursor to several applications such as VoIP, teleconferencing systems, speech recognition, and hearing aids. Its importance has grown further with the emergence of mobile, wearable and smart home devices, which present challenging capture and processing conditions due to their limited processing capabilities, voice-first IO interfaces and increased speaker-microphone distances. The goal of speech enhancement is to take the audio signal from a microphone, <em>clean<\/em> it and forward the clean audio to multiple clients such as speech-recognition software, archival databases and loudspeakers. The process of <em>cleaning<\/em> is what we focus on in this project. It has traditionally been done with statistical signal processing, but these techniques rely on several imprecise assumptions. We explore data-driven ways of completing this task in the most efficient, dynamic and accurate manner.<\/p>\n
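<p>As a concrete reference point for the traditional statistical approach, the sketch below implements basic spectral subtraction. Its core assumption, that the noise is quasi-stationary and can be estimated from speech-free frames, is exactly the kind of imprecise assumption that motivates the data-driven methods in this project. The frame sizes and the noise-only lead-in are illustrative assumptions, not part of the project itself.<\/p>\n

<pre><code># A minimal spectral-subtraction sketch: the classic statistical baseline.
# Assumes (illustratively) that the first 0.5 s of the recording are
# noise-only, i.e. the quasi-stationarity assumption mentioned above.
import numpy as np

def spectral_subtraction(noisy, fs, frame_len=512, hop=256, noise_sec=0.5):
    window = np.hanning(frame_len)
    # Short-time spectra of overlapping, windowed frames
    frames = np.stack([noisy[i:i + frame_len] * window
                       for i in range(0, len(noisy) - frame_len, hop)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Noise magnitude estimate from the leading noise-only frames
    n_noise = max(1, int(noise_sec * fs / hop))
    noise_mag = mag[:n_noise].mean(axis=0)
    # Subtract the estimate; floor at a small fraction of the input
    clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), axis=1)
    # Overlap-add resynthesis (window normalization omitted for brevity)
    out = np.zeros(len(noisy))
    for k, frame in enumerate(clean):
        out[k * hop:k * hop + frame_len] += frame * window
    return out
<\/code><\/pre>\n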

<h2>Speech Enhancement Challenges<\/h2>\n

<p>Recent advances in machine learning (ML) and artificial intelligence (AI) have shown impressive results for the speech enhancement task, demonstrating that it is possible to remove almost any kind of background noise, such as barking dogs, kitchen noise, music, babble, traffic and outdoor sounds. This is an exciting novelty compared to traditional methods based on statistical signal processing, which usually attenuate only quasi-stationary noise effectively. However, ML-based speech enhancement is still far from mature enough to be productized and faces the following challenges:<\/p>\n

<p>1. <strong>Speech quality<\/strong>: While the suppression capability of AI-powered speech enhancement is impressive, speech quality is often degraded. Ongoing research aims to improve speech quality through better data generation and augmentation, new optimization targets, and improved network models. In one of our early works, for example, we used convolutional-recurrent network structures for speech enhancement.<\/p>\n

<p>2. <strong>Inference efficiency<\/strong>: High audio quality is often obtained with very large neural network models, which have prohibitively high inference complexity and sometimes also long processing delays. Reducing model size, complexity, memory footprint, and processing delay is therefore an important research direction, enabling these models to run on resource-constrained edge devices. In the past, we explored increasing model efficiency through bit-precision scaling for speech enhancement and voice activity detection. We also investigated small recurrent networks for enhancement to meet real-time inference constraints.<\/p>\n

\"\"<\/p>\n

<p>3. <strong>Unsupervised learning<\/strong>: The best results in ML are typically achieved with supervised learning. In the context of training a speech enhancement model, this means that we need to prepare a dataset with noisy and clean target speech. This has the disadvantage of requiring a large effort to create a robust dataset covering all conditions encountered in reality, and there will always be conditions that have not been trained for. Unsupervised learning can potentially overcome this problem, as no ground truth is required and, in theory, a model can be built that adapts to unseen noise on the fly. In a first attempt, we used reinforcement learning to adapt a recurrent-network speech enhancement algorithm to the input signal.<\/p>\n

\"\"<\/p>\n

<h2>Audio Quality Measurement<\/h2>\n

<p>The quality and intelligibility of enhanced speech have traditionally been evaluated with distance metrics between the enhanced and target speech signals, such as PESQ, frequency-weighted SNR, and STOI. However, in contrast to enhancement for automatic speech recognition (ASR), where the word error rate is a well-defined single optimization criterion, these metrics are only partially correlated with actual subjective speech quality. As conducting listening tests with humans is costly and time-intensive, much recent research is directed toward developing ML models that predict speech quality. We started this work in summer 2018, developing a first speech quality predictor model for audio calls, and are continuing this work to improve its accuracy. Major challenges include building a sufficiently robust dataset and incorporating all kinds of degrading distortions, whether acoustic in nature or caused by processing or transmission artifacts. These speech quality predictors provide powerful tools to advance speech enhancement models and can ultimately also serve as an optimization function.<\/p>\n
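<p>For reference, the sketch below computes two of the traditional metrics named above using the widely used third-party pesq and pystoi packages; the file names are hypothetical, and these are not necessarily the implementations used in our evaluations.<\/p>\n

<pre><code># Hedged sketch: traditional objective metrics via third-party packages
# (pip install soundfile pesq pystoi).
import soundfile as sf
from pesq import pesq
from pystoi import stoi

ref, fs = sf.read('clean.wav')      # hypothetical file names
deg, _ = sf.read('enhanced.wav')    # same length and sample rate assumed

print('PESQ (wideband):', pesq(fs, ref, deg, 'wb'))  # 'wb' mode needs fs 16000
print('STOI:', stoi(ref, deg, fs, extended=False))
<\/code><\/pre>\n","protected":false},"excerpt":{"rendered":"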

The goal of speech enhancement is to take the audio signal from a microphone,\u00a0clean\u00a0it and forward clean audio to multiple clients such as speech-recognition software, archival databases and speakers.<\/p>\n","protected":false},"featured_media":668844,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13561,13556,243062,13547],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-364265","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-algorithms","msr-research-area-artificial-intelligence","msr-research-area-audio-acoustics","msr-research-area-systems-and-networking","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2017-01-08","related-publications":[466413,466377,466398,466422,437400,437388,347000,371972,164327,754333,916755,764146,768004,377081,768106,658848,768115,658857,810181,697996,863991,703480,864003,754294,864012,754306,864021,754324,889188],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Sebastian Braun","user_id":37688,"people_section":"Contributing researchers","alias":"sebraun"},{"type":"user_nicename","display_name":"Hannes Gamper","user_id":31943,"people_section":"Contributing researchers","alias":"hagamper"},{"type":"user_nicename","display_name":"Matthai Philipose","user_id":32834,"people_section":"Contributing researchers","alias":"matthaip"},{"type":"user_nicename","display_name":"Ivan Tashev","user_id":32127,"people_section":"Contributing researchers","alias":"ivantash"},{"type":"guest","display_name":"Viet Anh Trinh","user_id":814663,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Abu-Zaher Faridee","user_id":814657,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Ali Aroudi","user_id":814648,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Yangyang (Raymond) Xia","user_id":661758,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Anderson Avila","user_id":661752,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Han Zao","user_id":661731,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Jong Hwan Ko","user_id":661725,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Rasool Fakoor","user_id":661716,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Yan-hui Tu","user_id":661707,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Seyedmahdad Mirsamadi","user_id":661692,"people_section":"Past interns","alias":""},{"type":"guest","display_name":"Chin-Hui Lee","user_id":664389,"people_section":"Consulting 
researchers","alias":""}],"msr_research_lab":[199565],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/364265"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":51,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/364265\/revisions"}],"predecessor-version":[{"id":665994,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/364265\/revisions\/665994"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/668844"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=364265"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=364265"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=364265"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=364265"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=364265"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}