Mission Statement
Exploit multi-sensory information to improve the user experience in:
• Speech-centric human-computer interaction
• Computer-mediated human-to-human communication
Goals
- Understand end-users’ requirements
- Identify sensor requirements
- Prototype new hardware
- Develop robust technologies
Publications
- Air- and Bone-Conductive Integrated Microphones for Robust Speech Detection and Enhancement, in the IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2003), November 30 – December 4, 2003.
- Multi-Sensory Microphones for Robust Speech Detection, Enhancement and Recognition, in the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2004), May 17–21, 2004.
- Direct Filtering for Air- and Bone-Conductive Microphones, in the IEEE International Workshop on Multimedia Signal Processing (MMSP 2004), September 29 – October 1, 2004.
- Nonlinear Information Fusion in Multi-Sensor Processing: Extracting and Exploiting Hidden Dynamics of Speech Captured by a Bone-Conductive Microphone, in the IEEE International Workshop on Multimedia Signal Processing (MMSP 2004), September 29 – October 1, 2004.
Sample Waveforms (pending migration)
Original waveforms:
- Air channel
- Bone channel
Processed waveforms:
- Spectral subtraction (a generic sketch follows this list)
- Direct filtering
- Minimum mean square error (MMSE) estimation with a single Gaussian
- Minimum mean square error (MMSE) estimation with a mixture of four Gaussians
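As a rough illustration of the spectral-subtraction baseline listed above, the following is a minimal, generic sketch of magnitude spectral subtraction with overlap-add resynthesis, written in Python/NumPy. It is not the project's implementation: the frame length, hop size, noise-only lead-in assumption, and spectral floor are all illustrative choices.

# Minimal, generic sketch of magnitude spectral subtraction.
# This is an illustration of the conventional single-channel baseline named above,
# not the project's specific air/bone multi-sensory processing. Frame length, hop,
# noise-estimation scheme, and floor value are assumptions made for the demo.
import numpy as np


def spectral_subtraction(noisy, frame_len=512, hop=256, noise_frames=10, floor=0.01):
    """Subtract an average noise magnitude spectrum frame by frame and
    resynthesize with weighted overlap-add. Assumes the first `noise_frames`
    frames contain noise only (a common but crude noise estimate)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop

    # Estimate the noise magnitude spectrum from the leading frames.
    noise_mag = np.zeros(frame_len // 2 + 1)
    used = min(noise_frames, n_frames)
    for i in range(used):
        seg = noisy[i * hop:i * hop + frame_len] * window
        noise_mag += np.abs(np.fft.rfft(seg))
    noise_mag /= max(1, used)

    out = np.zeros(n_frames * hop + frame_len)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        seg = noisy[i * hop:i * hop + frame_len] * window
        spec = np.fft.rfft(seg)
        mag = np.abs(spec)
        phase = np.angle(spec)
        # Subtract the noise estimate; a spectral floor limits musical noise.
        clean_mag = np.maximum(mag - noise_mag, floor * mag)
        frame = np.fft.irfft(clean_mag * np.exp(1j * phase))
        out[i * hop:i * hop + frame_len] += frame * window
        norm[i * hop:i * hop + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-8)


if __name__ == "__main__":
    # Synthetic demo: a tone buried in white noise, with a noise-only lead-in.
    fs = 16000
    t = np.arange(fs) / fs
    clean = 0.5 * np.sin(2 * np.pi * 440 * t)
    clean[: fs // 4] = 0.0                      # leading noise-only region
    noisy = clean + 0.1 * np.random.randn(len(t))
    enhanced = spectral_subtraction(noisy)
    print("noisy RMS: %.3f, enhanced RMS: %.3f" % (np.std(noisy), np.std(enhanced)))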