Supervised Deep Hashing for Efficient Audio Event Retrieval
Published: 2021-07-07
Link: https://www.microsoft.com/en-us/research/publication/supervised-deep-hashing-for-efficient-audio-event-retrieval/
Efficient retrieval of audio events can facilitate real-time implementation of numerous query- and search-based systems. This work investigates the effectiveness of different hashing techniques for efficient audio event retrieval. Multiple state-of-the-art weak audio embeddings are employed for this purpose. The performance of four classical unsupervised hashing algorithms is explored as part of an off-the-shelf analysis. Then, we propose a partially supervised deep hashing framework that transforms the weak embeddings into a low-dimensional space while optimizing for efficient hash codes. The model uses only a fraction of the available labels and is shown here to significantly improve the retrieval accuracy on two widely employed audio event datasets. The extensive analysis and comparison between supervised and unsupervised hashing methods presented here gives insights into the quantizability of audio embeddings. This work provides a first look at efficient audio event retrieval systems and aims to set baselines for future research.
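To make the retrieval setting concrete, the sketch below illustrates one classical unsupervised baseline of the kind the abstract mentions: random-projection hashing of embeddings into short binary codes, followed by Hamming-distance lookup. This is a generic illustration under assumed names and dimensions, not the paper's supervised deep hashing model; the embeddings here are random stand-ins for real audio embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_projection(dim, n_bits, rng):
    """Sample a random projection matrix mapping dim-d embeddings to n_bits."""
    return rng.standard_normal((dim, n_bits))

def hash_codes(embeddings, projection):
    """Binarize projected embeddings into {0,1} hash codes."""
    return (embeddings @ projection > 0).astype(np.uint8)

def hamming_retrieve(query_code, db_codes, k=5):
    """Return indices of the k database items closest in Hamming distance."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:k]

# Toy data: 100 stand-in "audio embeddings" of dimension 128, hashed to 32 bits.
X = rng.standard_normal((100, 128))
P = fit_projection(128, 32, rng)
codes = hash_codes(X, P)
top = hamming_retrieve(codes[0], codes, k=5)
```

A supervised deep hashing approach replaces the random projection with a learned network trained (here, on a fraction of the labels) so that semantically similar audio events receive nearby codes, which is what drives the accuracy gains reported above.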