FastCorrect: the fast error correction model for speech recognition

Microsoft Research Blog, April 18, 2022

Error correction is an important post-processing step in speech recognition: it detects and corrects errors in recognition results, further improving recognition accuracy. Most error correction models are autoregressive and therefore have high inference latency, while speech recognition services impose strict latency requirements on their models. As a result, autoregressive error correction models cannot be deployed online in real-time speech recognition scenarios.

To speed up error correction for speech recognition, researchers at Microsoft Research Asia and Microsoft Azure Speech proposed FastCorrect, a non-autoregressive error correction model based on edit alignment that speeds up the autoregressive model by six to nine times while maintaining comparable correction ability. Because speech recognition models often deliver multiple alternative recognition results, the researchers further proposed FastCorrect 2, in which the multiple candidates verify each other to improve performance. The papers on FastCorrect and FastCorrect 2 were accepted at NeurIPS 2021 and EMNLP 2021, respectively.

FastCorrect

Edit alignment

FastCorrect leverages non-autoregressive generation with edit alignment to speed up the inference of the autoregressive correction model. The researchers first compute the edit distance between the recognized text (source sentence) and the ground-truth text (target sentence). Since source and target tokens are aligned monotonically in automatic speech recognition (ASR), unlike the reordering errors found in neural machine translation, analyzing the insertion, deletion, and substitution operations in the edit-distance path yields the number of target tokens corresponding to each source token after editing (0 means deletion, 1 means unchanged or substitution, and ≥2 means insertion), as shown in Figure 1. In some cases there are several possible alignments of a source-target sentence pair; the final alignment is chosen based on a path match score (the number of matched tokens in the alignment) and a frequency score (reflecting the confidence of the alignment under a language model).

Figure 1: Illustration of the edit alignment between a source sentence "B B D E F" and a target sentence "A B C D F".
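The edit-alignment idea can be sketched in a few lines: build the standard edit-distance table between source and target tokens, then backtrace one optimal path, counting how many target tokens each source token maps to. This is an illustrative sketch, not the released implementation: preferring exact matches in the backtrace is a simplification of the paper's match and frequency scores, and attaching inserted tokens to the preceding source token is one of several possible choices.

```python
def edit_alignment_durations(src, tgt):
    """For each source token, count the target tokens it maps to
    (0 = deletion, 1 = unchanged/substitution, >=2 = insertion)."""
    n, m = len(src), len(tgt)
    # Standard edit-distance dynamic-programming table.
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete source token
                           dp[i][j - 1] + 1,          # insert target token
                           dp[i - 1][j - 1] + cost)   # match / substitute
    # Backtrace one optimal path, preferring exact matches.
    durations = [0] * n
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and src[i - 1] == tgt[j - 1] and dp[i][j] == dp[i - 1][j - 1]:
            durations[i - 1] += 1; i -= 1; j -= 1      # exact match
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            i -= 1                                     # deletion: maps to 0 tokens
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            durations[max(i - 1, 0)] += 1; j -= 1      # insertion: attach to previous source token
        else:
            durations[i - 1] += 1; i -= 1; j -= 1      # substitution

    return durations

edit_alignment_durations(["B", "B", "D", "E", "F"], ["A", "B", "C", "D", "F"])
# → [1, 2, 1, 0, 1], matching the alignment in Figure 1
```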

Model architecture

FastCorrect adopts a non-autoregressive encoder-decoder structure with a length predictor that bridges the length mismatch between the encoder input (source sentence) and the decoder output (target sentence). As shown in Figure 2, the encoder takes the source sentence as input and outputs a hidden sequence that is 1) fed into a length predictor to predict the number of target tokens corresponding to each source token (i.e., the edit alignment obtained in the previous subsection), and 2) used by the decoder through encoder-decoder attention. The training label of the length predictor comes from the edit alignment; the detailed architecture of the length predictor is shown in the right sub-figure of Figure 2.

Figure 2: Model architecture of FastCorrect.
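A minimal sketch of how the length predictor's output bridges the length mismatch: each source token is repeated according to its predicted duration (and dropped if the duration is 0), producing a sequence of the target length at whose positions the decoder then generates target tokens in parallel. The function name is illustrative, not from the released code.

```python
def adjust_source(tokens, durations):
    """Expand/shrink the source by predicted durations to form the decoder
    input: duration 0 drops the token, duration k >= 1 repeats it k times."""
    out = []
    for tok, d in zip(tokens, durations):
        out.extend([tok] * d)
    return out

adjust_source(["B", "B", "D", "E", "F"], [1, 2, 1, 0, 1])
# → ['B', 'B', 'B', 'D', 'F'], the same length as the target "A B C D F"
```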

Experimental results

The researchers reported the accuracy and latency of different error correction models on AISHELL-1 and on an internal dataset, as shown in Table 1, and made several observations: 1) The autoregressive (AR) correction model reduces the word error rate (WER) of the ASR model, measured by word error rate reduction (WERR), by 15.53% on the AISHELL-1 test set and 8.50% on the internal dataset. 2) LevT, a typical non-autoregressive model from neural machine translation, achieves only a minor WERR on AISHELL-1 and even increases the WER on the internal dataset, while speeding up the inference of the AR model by only two to three times on GPU/CPU. 3) FELIX achieves only 4.14% WERR on AISHELL-1 and 0.27% WERR on the internal dataset, significantly worse than FastCorrect despite a similar inference speedup. 4) FastCorrect speeds up the inference of the AR model by six to nine times on GPU/CPU across the two datasets and achieves 8-14% WERR, nearly comparable with the AR correction model in accuracy.

Table 1: The correction accuracy and inference latency of different correction models. The researchers report the WER, WERR, and latency of the autoregressive (AR) and non-autoregressive (NAR) models (FastCorrect, LevT, and FELIX).
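For reference, WER and WERR follow their standard definitions (this is not code from the paper): WER is the word-level edit distance between hypothesis and reference, normalized by the reference length, and WERR is the relative WER reduction achieved by correction.

```python
def wer(ref, hyp):
    """Word error rate: word-level edit distance between hypothesis and
    reference, normalized by reference length."""
    n, m = len(ref), len(hyp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + cost)
    return dp[n][m] / n

def werr(wer_before, wer_after):
    """Relative word error rate reduction achieved by a correction model."""
    return (wer_before - wer_after) / wer_before

# One substitution out of three reference words gives WER = 1/3;
# correcting a 10% WER down to 8.5% gives a 15% WERR.
```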

FastCorrect 2

The key challenge in ASR error correction is detecting and correcting the error tokens. Because beam search is commonly used in ASR, multiple candidates are usually generated and available for error correction. The researchers argue that multiple candidates enable a voting effect: tokens from the candidate sentences can verify each other.

For example, if the beam search candidates are the three sentences "I have cat," "I have hat," and "I have bat," then the first two tokens are likely to be correct, since they are the same across all beam candidates. The inconsistency on the last token shows that 1) this token may need correction, and 2) the pronunciation of the ground-truth token probably ends with "æt". This voting effect can boost ASR correction by helping the model detect error tokens and by giving clues about the pronunciation of the ground-truth token.
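The voting effect can be illustrated with a small sketch that flags positions where the candidates disagree (the function name and representation are hypothetical, and it assumes the candidates have already been aligned to equal length):

```python
def find_suspicious_positions(aligned_candidates):
    """Flag each position where beam candidates disagree: agreement suggests
    a correct token, disagreement suggests a likely ASR error to correct."""
    length = len(aligned_candidates[0])
    flags = []
    for pos in range(length):
        tokens = {cand[pos] for cand in aligned_candidates}
        flags.append(len(tokens) > 1)  # True = candidates disagree here
    return flags

cands = [["I", "have", "cat"], ["I", "have", "hat"], ["I", "have", "bat"]]
find_suspicious_positions(cands)
# → [False, False, True]: only the last token is suspicious
```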

To make better use of the voting effect, the researchers proposed dedicated designs in both the alignment algorithm and the model architecture.

Pronunciation-based alignment

Since the lengths of the candidates usually differ and tokens from different sentences are not aligned by position, aligning the candidates token by token in order to exploit the voting effect is non-trivial. Simply left- or right-padding the candidates to the same length does not align the information at each position, so the voting effect is lost. The researchers therefore proposed a novel alignment algorithm based on a token matching score and a pronunciation similarity score: tokens at the same position are matched whenever possible, and when they cannot be matched, the pronunciations of the tokens at the same position are kept as similar as possible.

As shown in Figure 3, compared with the naïve alignment method (padding to the right), the new alignment method can 1) align identical tokens ("B", "D", and "F") at the same positions, 2) isolate the additional token that occurs in only one candidate ("C"), and 3) keep the pronunciation similarity of the tokens at each position as high as possible.

Figure 3: The proposed alignment method versus the naïve padding method for aligning multiple candidates.
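A hedged sketch of such a pronunciation-aware alignment for two candidates, using standard Needleman-Wunsch dynamic programming: exact token matches score highest, mismatches score by a similarity function, and gaps insert a padding token. Here `pron_sim` is a toy stand-in (character overlap via `difflib`) for a real pronunciation similarity over phoneme sequences, and the scores and gap penalty are illustrative, not the paper's.

```python
from difflib import SequenceMatcher

def pron_sim(a, b):
    # Toy stand-in for a real pronunciation similarity over phonemes.
    return SequenceMatcher(None, a, b).ratio()

def align_pair(c1, c2, gap=-0.5):
    """Needleman-Wunsch alignment of two candidates: exact matches score 2,
    mismatches score by pronunciation similarity, gaps insert '<pad>'."""
    n, m = len(c1), len(c2)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 2.0 if c1[i-1] == c2[j-1] else pron_sim(c1[i-1], c2[j-1])
            score[i][j] = max(score[i-1][j-1] + match,   # pair tokens
                              score[i-1][j] + gap,       # gap in c2
                              score[i][j-1] + gap)       # gap in c1
    # Backtrace: emit aligned token pairs, '<pad>' for gaps.
    out1, out2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0:
            match = 2.0 if c1[i-1] == c2[j-1] else pron_sim(c1[i-1], c2[j-1])
            if score[i][j] == score[i-1][j-1] + match:
                out1.append(c1[i-1]); out2.append(c2[j-1]); i -= 1; j -= 1
                continue
        if i > 0 and score[i][j] == score[i-1][j] + gap:
            out1.append(c1[i-1]); out2.append("<pad>"); i -= 1
        else:
            out1.append("<pad>"); out2.append(c2[j-1]); j -= 1
    return out1[::-1], out2[::-1]

align_pair(["B", "D", "F"], ["B", "C", "D", "F"])
# → (['B', '<pad>', 'D', 'F'], ['B', 'C', 'D', 'F']): the extra "C" is isolated
```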

Candidate predictor to select a candidate for the decoder

Beam search yields multiple candidates, but the decoder can take only one adjusted source sentence as input. (Since the predicted durations may differ across candidates at inference time, it is not feasible to feed all adjusted candidates, which have different lengths, into the decoder.) It is therefore necessary to choose the appropriate source sentence to adjust and feed to the decoder. The researchers designed a candidate predictor for this purpose; specifically, it chooses the candidate expected to yield the smallest loss in the correction model (i.e., the easiest candidate to correct).

As shown in Figure 4, the aligned beam search results are concatenated at each position, projected by a linear layer, and fed into the encoder. The encoder output is concatenated with the original token embeddings and fed into two predictors: a duration predictor that predicts the duration of each source token, and a candidate predictor that predicts the correction loss of each candidate. The source tokens are adjusted according to the duration predictor and then fed into the decoder. Finally, the loss of the decoder serves as the training label of the candidate predictor.

Figure 4: Model architecture of FastCorrect 2.
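The inference-time selection step can be sketched as follows (all names are illustrative): the candidate with the smallest predicted loss is chosen, and its tokens are expanded by the predicted durations to form the single decoder input.

```python
def select_and_adjust(candidates, predicted_losses, predicted_durations):
    """Pick the candidate the model predicts is easiest to correct (smallest
    predicted loss) and expand its tokens by the predicted durations to form
    the decoder input."""
    best = min(range(len(candidates)), key=lambda k: predicted_losses[k])
    out = []
    for tok, d in zip(candidates[best], predicted_durations[best]):
        out.extend([tok] * d)  # duration 0 drops, k >= 1 repeats k times
    return best, out

# Hypothetical predictor outputs: the second candidate looks easier to correct.
cands = [["I", "have", "cat"], ["I", "have", "hat"]]
losses = [0.9, 0.4]
durs = [[1, 1, 1], [1, 1, 1]]
select_and_adjust(cands, losses, durs)
# → (1, ['I', 'have', 'hat'])
```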

Experimental results

Table 2 shows the correction accuracy and inference latency of different correction models, from which the following observations can be made:

Compared with the FastCorrect baseline, FastCorrect 2 improves correction accuracy by 2.55% and 3.22% in terms of WER reduction on AISHELL-1 and the internal dataset, respectively, which shows the effectiveness of utilizing information from multiple candidates. Moreover, FastCorrect 2 is five times faster than the autoregressive model, demonstrating its inference efficiency.

Table 2: The correction accuracy and inference latency of different correction models. The researchers report the WER, WERR, and latency of the autoregressive (AR) and non-autoregressive (NAR) models (FastCorrect and FastCorrect 2).

Both FastCorrect and FastCorrect 2 are open sourced at https://github.com/microsoft/NeuralSpeech. The researchers are developing FastCorrect 3 for better correction accuracy at fast inference speed.

Paper links:

FastCorrect: Fast Error Correction with Edit Alignment for Automatic Speech Recognition
https://arxiv.org/abs/2105.03842

FastCorrect 2: Fast Error Correction on Multiple Candidates for Automatic Speech Recognition
https://arxiv.org/abs/2109.14420
