-
Ali Vosoughi, University of Rochester, New York, USA. Audio-Video Learning from Unlabeled Data by Leveraging Multimodal LLMs.
Benjamin Stahl, University of Music and Performing Arts Graz, Austria. Distilling Self-Supervised-Learning-Based Speech Quality Assessment into Compact Models.
Elisabeth Heremans, KU Leuven, Belgium. Shining light on the learning brain: Estimating mental workload in a simulated flight task using optical f-NIRS signals.
Gene-Ping Yang, University of Edinburgh, UK. Distributed asynchronous device speech enhancement using microphone permutation and number invariant windowed cross attention.
Haibin Wu, National Taiwan University, Taiwan. Towards ultra-low latency speech enhancement – A comprehensive study.
Jinhua Liang, Queen Mary University of London, UK. Audio-Visual Representation Learning and Generation in the Latent Space.
Shivam Mehta, KTH Royal Institute of Technology, Stockholm, Sweden. Make some noise: Teaching the language of audio to an LLM using sound tokens.
-
Ard Kastrati, ETH Zurich, Switzerland. Decoding Neurophysiological Responses for Improving Predictive Text Systems using Brain-Computer Interfaces.
Azalea Gui, University of Toronto, Canada. Improving Fréchet Audio Distance for Generative Music Evaluation.
Eloi Moliner Juanpere, Aalto University, Espoo, Finland. Unsupervised Speech Reverberation Control with Diffusion Implicit Bridges.
Michele Mancusi, Sapienza – University of Rome, Italy. Unsupervised Speech Separation Using Adversarial Loss and Additional Separation Losses.
Ruihan Yang, University of California – Irvine, USA. Synchronized Audio-Visual Generation with a Joint Generative Diffusion Model and Contrastive Loss.
Tanmay Srivastava, Stony Brook University, USA. Private and Accessible Speech Commands in Head-Worn Devices.
Yuanchao Li, University of Edinburgh, UK. A Comparative Study of Audio Encoders for Emotion in Real and Synthesized Music: Advancing Realistic Emotion Generation.
-
Haleh Akrami, University of Southern California, CA, USA. Semi-supervised multi-task learning for acoustic parameter estimation.
Jeremy Hyrkas, University of California, CA, USA. Binaural spatial audio positioning in video calls.
Julian Neri, McGill University, Montreal, Canada. Real-Time Single-Channel Speech Separation in Noisy and Reverberant Environments.
Justin Kilmarx, The University of Texas at Austin, USA. Mapping the neural representational similarity of multiple object categories during visual imagery.
Khandokar Md. Nayem, Indiana University, Bloomington, IN, USA. Unified Speech Enhancement Approach for Speech Degradations and Noise Suppression.
Sandeep Reddy Kothinti, The Johns Hopkins University, USA. Automated Audio Captioning: Methods and Metrics for Natural Language Description of Sounds.
Sophia Mehdizadeh, Georgia Tech, USA. Improving text prediction accuracy using neurophysiology.
Tan Gemicioglu, Georgia Tech, USA. Tongue Gesture Recognition in Head-Mounted Displays.
-
Wei-Cheng Lin, University of Texas at Dallas, USA. Toxic Speech and Speech Emotions: Investigations of Audio-based Modeling Methodology and Intercorrelations.
Shoken Kaneko, University of Maryland, USA. DIABLo: a Deep Individual-Agnostic Binaural Localizer.
Justin Kilmarx, University of Texas at Austin, USA. Developing a Brain-Computer Interface Based on Visual Imagery.
Viet Anh Trinh, City University of New York (CUNY), USA. Unsupervised Speech Enhancement.
Abu-Zaher Faridee, University of Maryland, USA. Non-Intrusive Multi-Task Speech Quality Assessment.
-
Ali Aroudi, University of Oldenburg, Germany. Geometry-constrained Beamforming Network for end-to-end Farfield Sound Source Separation.
Kuan-Jung Chiang, University of California – San Diego, USA. A Closed-loop Adaptive Brain-computer Interface framework.
Midia Yousefi, University of Texas, Dallas, USA. Audio-based Toxic Language Detection.
Shoken Kaneko, University of Maryland, College Park, USA. Forest Sound Scene Simulation and Bird Localization with Distributed Microphone Arrays.
Wenkang An, Carnegie Mellon University, USA. Decoding Music Attention from “EEG headphones”: a User-friendly Auditory Brain-computer Interface.
-
Arindam Jati, University of Southern California (USC), Los Angeles, USA. Supervised Deep Hashing for Efficient Audio Retrieval.
Benjamin Martinez Elizalde, Carnegie Mellon University, USA. Sound event recognition for video-content analysis.
Fabian Brinkmann, Technical University of Berlin, Germany. Efficient and Perceptually Plausible 3-D Sound for Virtual Reality.
Hakim Si Mohammed, INRIA Rennes, France. Improving the Ergonomics and User-Friendliness of SSVEP-based BCIs in Virtual Reality.
Md Tamzeed Islam, University of North Carolina at Chapel Hill, USA. Anthropometric Feature Estimation using Sensors on Headphone for HRTF Personalization.
Morayo Ogunsina, Penn State Erie, USA. Hearing AI App for Sound-Based User Surrounding Awareness.
Nicholas Huang, Johns Hopkins University, USA. Decoding Auditory Attention Via the Auditory Steady-State Response for Use in A Brain-Computer Interface.
Sahar Hashemgeloogerdi, University of Rochester, USA. Integrating Beamforming and Multichannel Linear Prediction for Dereverberation and Denoising.
Wenkang An, Carnegie Mellon University, USA. Decoding Multisensory Attention from Electroencephalography for Use in a Brain-Computer Interface.
Yangyang (Raymond) Xia, Carnegie Mellon University, USA. Real-time Single-channel Speech Enhancement with Recurrent Neural Networks.
-
Anderson Avila, Institut National de la Recherche Scientifique (INRS-EMT), Canada. Deep Neural Network Models for Audio Quality Assessment.
Andrea Genovese, New York University Steinhardt, USA. Blind Room Parameter Estimation in Real Time from Single-Channel Audio Signals in Noisy Conditions.
Benjamin Martinez Elizalde, Carnegie Mellon University, USA. A Cross-modal Audio Search Engine based on Joint Audio-Text Embeddings.
Chen Song, University at Buffalo, the State University of New York, USA. Sensor Fusion for Learning-based Motion Estimation in VR.
Christoph F. Hold, Technische Universität Berlin, Germany. Improvements on Higher Order Ambisonics Reproduction in the Spherical Harmonics Domain Under Real-time Constraints.
Harishchandra Dubey, University of Texas at Dallas, USA. MSR-Freesound: Advancing Audio Event Detection & Classification through Efficient Deep Learning Approaches.
Sebastian Braun, Friedrich-Alexander University of Erlangen Nuremberg (FAU), Germany. Speech Enhancement Using Linear and Non-linear Spatial Filtering for Head-mounted Displays.
-
Etienne Thuillier, Aalto University, Finland. Spatial Audio Feature Discovery Using a Neural Network Classifier.
Xuesu Xiao, Texas A&M University, USA. Articulated Human Pose Tracking with Inertial Sensors.
Srinivas Parthasarathy, University of Texas at Dallas, USA. Speech Emotion Recognition with Convolutional Neural Networks.
Han Zhao, Carnegie Mellon University, USA. High-Accuracy Neural-Network Models for Speech Enhancement.
Jong Hwan Ko, Georgia Institute of Technology, USA. Efficient Neural-Network Design for Real-Time Speech Enhancement.
Rasool Fakoor, University of Texas at Arlington, USA. Speech Enhancement With and Without Gradient Descent.
Yan-hui Tu, University of Science and Technology of China, P. R. China. Regression Based Speech Enhancement with Neural Networks.
-
Amit Das, University of Illinois at Urbana-Champaign, USA. Ultrasound Based Gesture Recognition.
Vani Rajendran, University of Oxford, UK. Simple Effects that Enhance the Elevation Perception in Spatial Sound.
Zhong-Qiu Wang, Ohio State University, USA. Emotion, gender, and age recognition from speech utterances using neural networks.
-
Archontis Politis, Aalto University, Finland. Applications of 3-Dimensional Spherical Transforms to Acoustics and Personalization of Head-related Transfer Functions (HRTFs).
Supreeth Krishna Rao, Worcester Polytechnic Institute, USA. Ultrasound Doppler Radar.
Seyedmahdad Mirsamadi, University of Texas at Dallas, USA. DNN-based Online Speech Enhancement Using Multitask Learning and Suppression Rule Estimation.
Long Le, University of Illinois at Urbana-Champaign, USA. Spatial Probability for Sound Source Localization.
-
Jinkyu Lee, Yonsei University, Korea. Emotion Detection from Speech Signals.
Felicia Lim, Imperial College London, UK. Blind Estimation of Reverberation Parameters.
-
Ivan Dokmanic, EPFL, Switzerland. Ultrasound Depth Imaging.
Piotr Bilinski, INRIA, France. HRTF Personalization Using Anthropometric Features.
Kun Han, Ohio State University, USA. Emotion Detection from Speech Signals.
-
Keith Godin, University of Texas at Dallas, USA. Open-set Speaker Identification on Noisy, Short Utterances.
Jason Wung, Georgia Tech, USA. Next Steps in Multi-Channel Acoustic Echo Reduction for Xbox Kinect.
Xing Li, University of Washington, USA. Dynamic Loudness Control for In-Car Audio.
-
Keith Godin, University of Texas at Dallas, USA. Binaural Sound Source Localization.
-
Hoang Do, Brown University, USA. A Step Towards NUI: Speaker Verification for Gaming Scenarios.