{"id":850858,"date":"2022-06-08T16:02:55","date_gmt":"2022-06-08T23:02:55","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-video&p=850858"},"modified":"2024-04-03T10:54:59","modified_gmt":"2024-04-03T17:54:59","slug":"detecting-and-mitigating-bias-in-voice-activated-technologies","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/detecting-and-mitigating-bias-in-voice-activated-technologies\/","title":{"rendered":"Detecting and Mitigating Bias in Voice Activated Technologies"},"content":{"rendered":"\n
In this talk, Wiebke (Toussaint) Hutiri shares her work on FairEVA, an open-source project that builds insights and tools for mitigating bias in voice biometrics. The project is supported by a Mozilla Technology Award for advancing trustworthy AI and is motivated by research that Wiebke will present at ACM FAccT 2022 in Korea. Speaker verification is a voice-based biometric technology that verifies a speaker's claimed identity from their voice. It is used in voice assistants, call centers, and forensics. There are many parallels between the histories and technologies of face recognition and speaker verification. However, while bias and discrimination are recognised as significant challenges in face recognition, they have received little attention in speaker verification. This is a problem, as speaker verification systems are increasingly used in sensitive domains (e.g., financial and e-wallet services such as Safaricom, where 'my voice is my password'; pensioner proof-of-life verification; and health and elderly care). In addition to sharing insights on identifying bias in speaker verification, Wiebke will also discuss the project's progress on design guidelines for inclusive speaker verification evaluation datasets, its open-source library for evaluating bias, and a technology audit that the team is designing.
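To give a rough sense of what a subgroup bias evaluation for speaker verification can look like, the sketch below computes an equal error rate (EER) per demographic group from verification trial scores and compares each group against the overall rate. This is a generic, minimal illustration, not the FairEVA library's API: the group names, score distributions, and the `equal_error_rate` helper are all hypothetical.

```python
# Minimal sketch (not the FairEVA API): compare speaker verification
# error rates across demographic subgroups to surface potential bias.
import numpy as np

def equal_error_rate(scores, labels):
    """Approximate EER: sweep thresholds and return the error rate at the
    point where false acceptance and false rejection are closest."""
    best_gap, eer = np.inf, 1.0
    for t in np.unique(scores):
        accept = scores >= t
        far = np.mean(accept[labels == 0])    # impostor trials accepted
        frr = np.mean(~accept[labels == 1])   # genuine trials rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical trials: similarity scores with ground-truth labels
# (1 = same speaker, 0 = different speaker), tagged by subgroup.
rng = np.random.default_rng(0)
trials = {
    "group_a": (np.concatenate([rng.normal(0.70, 0.10, 500),   # genuine
                                rng.normal(0.30, 0.10, 500)]),  # impostor
                np.concatenate([np.ones(500), np.zeros(500)])),
    "group_b": (np.concatenate([rng.normal(0.60, 0.15, 500),
                                rng.normal(0.35, 0.15, 500)]),
                np.concatenate([np.ones(500), np.zeros(500)])),
}

all_scores = np.concatenate([s for s, _ in trials.values()])
all_labels = np.concatenate([l for _, l in trials.values()])
overall = equal_error_rate(all_scores, all_labels)

for group, (scores, labels) in trials.items():
    group_eer = equal_error_rate(scores, labels)
    # A ratio well above 1 means the system performs worse for this group.
    print(f"{group}: EER={group_eer:.3f}, ratio to overall={group_eer / overall:.2f}")
```

Comparing per-group error rates to the overall rate is one common way of expressing performance disparities; a full audit of the kind described in the talk would also consider how representative the evaluation dataset is of the groups being compared.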