Audio-based Toxic Language Detection
Online gaming has grown increasingly popular in recent years. These highly competitive online social platforms can sometimes foster undesired behavior and create an unfriendly community for many players. The detection of profanity and bullying has been explored previously on text-based platforms (e.g., social media sites such as Twitter and Facebook), but for speech-based applications, including online gaming, the field is relatively unexplored. In this project we focus on audio-based toxic language detection, which can be a great asset in scenarios where text transcriptions are not readily available. Additionally, audio-based cues such as speech tone or pitch variation could provide information that is supplementary or orthogonal to transcribed content and word-based features. We have developed a Self-Attentive Convolutional Neural Network architecture to detect toxic segments in naturalistic audio recordings, which are challenging due to diverse noise types such as background noise and music, overlapping speech, different microphone types, speech accents, and even multiple languages. To tackle these challenges, the self-attention mechanism attends to toxic frames while processing each utterance. We have evaluated the proposed system on a large internal dataset, as well as on publicly available data from a related domain. Our findings and results suggest promising directions toward automated toxic language detection for online gaming scenarios.
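For illustration, the sketch below shows one plausible way such a self-attentive CNN could be put together in PyTorch: convolutional layers extract frame-level features from a log-mel spectrogram, and a learned attention layer assigns a weight to each frame before pooling, so frames that look toxic dominate the utterance-level decision. All layer names, sizes, and the input featurization here are assumptions for demonstration, not the architecture presented in the talk.

```python
# Illustrative sketch only: a minimal self-attentive CNN for utterance-level
# toxicity detection over acoustic frames. Layer sizes and the log-mel input
# are assumptions, not the talk's actual system.
import torch
import torch.nn as nn


class SelfAttentiveCNN(nn.Module):
    def __init__(self, n_mels=40, channels=64, attn_dim=128):
        super().__init__()
        # 1-D convolutions over time extract local acoustic patterns
        # from a (batch, n_mels, frames) log-mel spectrogram.
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Self-attention scores one weight per frame, letting the pooled
        # utterance embedding emphasize (attend to) toxic frames.
        self.attn = nn.Sequential(
            nn.Linear(channels, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(channels, 1)  # toxic vs. non-toxic logit

    def forward(self, x):
        # x: (batch, n_mels, frames)
        h = self.conv(x).transpose(1, 2)        # (batch, frames, channels)
        w = torch.softmax(self.attn(h), dim=1)  # (batch, frames, 1)
        utt = (w * h).sum(dim=1)                # attention-weighted pooling
        return self.classifier(utt).squeeze(-1), w.squeeze(-1)


# Example: score a batch of two 300-frame utterances.
model = SelfAttentiveCNN()
logits, frame_weights = model(torch.randn(2, 40, 300))
print(logits.shape, frame_weights.shape)  # torch.Size([2]) torch.Size([2, 300])
```

The returned frame weights also make the model inspectable: in a setup like this, the attention distribution indicates which frames of the utterance drove a toxic prediction.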
Speaker Bios
Midia Yousefi is currently a final-year Ph.D. candidate under the supervision of Dr. John Hansen at the University of Texas at Dallas. Her Ph.D. thesis focuses on "Overlapping speech detection, separation, and recognition using DNN-based methods". She received her B.Eng. and M.Sc. degrees in Electrical Engineering in 2014 and 2016 from Shahid Beheshti University, Tehran, Iran. Her research interests include machine learning, speech enhancement, speech separation, and automatic speech recognition.
- Series: Microsoft Research Talks
- Date:
- Speakers: Midia Yousefi
- Affiliation: University of Texas at Dallas
Series: Microsoft Research Talks
Decoding the Human Brain – A Neurosurgeon’s Experience
Speakers: Pascal Zinn, Ivan Tashev

Galea: The Bridge Between Mixed Reality and Neurotechnology
Speakers: Eva Esteban, Conor Russomanno

Current and Future Application of BCIs
Speakers: Christoph Guger

Challenges in Evolving a Successful Database Product (SQL Server) to a Cloud Service (SQL Azure)
Speakers: Hanuma Kodavalla, Phil Bernstein

Improving text prediction accuracy using neurophysiology
Speakers: Sophia Mehdizadeh

DIABLo: a Deep Individual-Agnostic Binaural Localizer
Speakers: Shoken Kaneko

Recent Efforts Towards Efficient And Scalable Neural Waveform Coding
Speakers: Kai Zhen

Audio-based Toxic Language Detection
Speakers: Midia Yousefi

From SqueezeNet to SqueezeBERT: Developing Efficient Deep Neural Networks
Speakers: Sujeeth Bharadwaj

Hope Speech and Help Speech: Surfacing Positivity Amidst Hate
Speakers: Monojit Choudhury

'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project
Speakers: Peter Clark

Checkpointing the Un-checkpointable: the Split-Process Approach for MPI and Formal Verification
Speakers: Gene Cooperman

Learning Structured Models for Safe Robot Control
Speakers: Ashish Kapoor