{"id":169702,"date":"2004-01-29T16:46:30","date_gmt":"2004-01-29T16:46:30","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/project\/multimodal-conversational-user-interface\/"},"modified":"2019-08-19T09:18:56","modified_gmt":"2019-08-19T16:18:56","slug":"multimodal-conversational-user-interface","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/multimodal-conversational-user-interface\/","title":{"rendered":"Multimodal Conversational User Interface"},"content":{"rendered":"


Most of us use computers to create text, understand numbers, view images, and send messages. There’s only one problem with this marvelous machine. Our computer lives on a desktop, and though we command it with a keyboard and a mouse, it commands us with its immovable size. The office is its domain, and it’s ill at ease where people are most comfortable: snacking in the kitchen, walking around a mall, hanging out at the local pub, and driving in our cars. Researchers in the Speech Technology group at Microsoft are working to allow the computer to travel through our living spaces as a handy electronic HAL pal that answers questions, arranges our calendars, and sends messages to our friends and family.<\/p>\n

These systems will use continuous speech recognition and spoken language understanding, so people can communicate with the natural tool they carry with them all the time: their own voices. Eventually, this software will power the ultimate communication device: a Pocket PC that doubles as a web browser, e-mail terminal, and cellular telephone. Perfected, it could control an array of other machines in an Easy Living environment. MiPad<\/a> was the first prototype we built, back in 2000, as a baby step in that direction.<\/p>\n

Because many of the components are distributed, services can follow users wherever they are. Kuansan Wang<\/a>, a researcher on the spoken language understanding (SLU) component of the engine, says, “If you have a docking station in your car, you could dock your device while you are driving. You could make yourself available to all your friends. That’s basically the concept of a global service that follows you around, regardless of whether you have a PC in front of you or not. That’s the grand vision of the whole thing.”<\/p>\n

Wang says that such systems will enable Web services to be invited into a conversation between two users. He describes a scenario: “Let’s say we are talking on Instant Messenger, and we’re talking about where I should take you for lunch. You realize there is a web service out there that is a restaurant guide, and we can probably ask it to join our conversation. Now this web service is not human; still, with our understanding engine we can type text or use speech to talk to this service. So we can ask it to find a restaurant that’s a French restaurant, and not too far away from Microsoft. With a spoken or typed text dialog system we extend the horizon of the project.”<\/p>\n
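
As a rough, entirely hypothetical sketch of that idea (the class and method names below are invented and do not reflect the actual Instant Messenger or web-service plumbing), the restaurant guide can be treated as just another participant in the conversation: a typed or transcribed utterance that mentions the service is routed to it, and its reply is posted back into the chat.<\/p>\n

<pre><code># Toy sketch: a web service invited into an IM conversation as a participant.
# Every name here is invented for illustration; this is not the real system.
class RestaurantGuideService:
    '''Stand-in for a restaurant-guide web service that accepts typed or
    transcribed-speech queries and answers in plain text.'''
    name = 'RestaurantGuide'

    def reply(self, utterance):
        return 'How about a small French bistro near the Microsoft campus?'


class Conversation:
    def __init__(self, people, services):
        self.people = people          # human participants
        self.services = services      # invited web services

    def post(self, sender, text):
        print(sender + ': ' + text)
        # Any invited service whose name is mentioned is asked to weigh in.
        for service in self.services:
            if service.name.lower() in text.lower():
                print(service.name + ': ' + service.reply(text))


chat = Conversation(['Kuansan', 'Friend'], [RestaurantGuideService()])
chat.post('Friend', 'RestaurantGuide, find a French restaurant near Microsoft.')
<\/code><\/pre>\n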

This understanding engine won’t just enable speech input; it will understand what you’re asking. Wang’s group is exploring techniques that don’t necessarily rely on conventional computational linguistics. One of his approaches uses limited semantic domains. In the example of the restaurant guide, that particular Web service would be limited to understanding questions about restaurants. Ask it a sports trivia question and it’s likely to give you the address for a sports bar that serves burgers and beer.<\/p>\n
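
As an illustration only (the slot names and keyword lists below are invented, not the project’s actual SLU engine), a domain-limited understanding component can be sketched as a parser that fills only the slots its domain defines, such as cuisine and location for a restaurant guide, and simply ignores anything outside that domain.<\/p>\n

<pre><code># Illustrative sketch of a domain-limited semantic parser for a restaurant guide.
# The slots and keyword lists are hypothetical, not the project's actual engine.
CUISINES = {'french', 'italian', 'thai', 'mexican'}
NEAR_MARKERS = ['not too far from', 'not far from', 'close to', 'near']

def parse_restaurant_query(utterance):
    '''Fill the restaurant-domain slots we know about; ignore everything else.'''
    text = utterance.lower()
    frame = {'intent': 'find_restaurant', 'cuisine': None, 'near': None}
    for cuisine in CUISINES:
        if cuisine in text:
            frame['cuisine'] = cuisine
    for marker in NEAR_MARKERS:
        if marker in text:
            # Treat whatever follows the marker as the location slot.
            frame['near'] = text.split(marker, 1)[1].strip(' .?')
            break
    return frame

print(parse_restaurant_query('Find a French restaurant not too far from Microsoft'))
# {'intent': 'find_restaurant', 'cuisine': 'french', 'near': 'microsoft'}
<\/code><\/pre>\n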

The text-to-speech capability in such conversational systems can enable a PC to read any typed text in a staccato early-episode Star Trek voice. For instance, it will allow you to receive a voice message on your cell phone that your friend has typed on his laptop and sent to you by e-mail.<\/p>\n
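
A present-day stand-in for that capability, using the off-the-shelf pyttsx3 library rather than the research engine described here, shows how a few lines can have a PC speak whatever text a friend has typed.<\/p>\n

<pre><code># Minimal text-to-speech sketch using the off-the-shelf pyttsx3 library,
# a stand-in for the research TTS engine described above.
import pyttsx3

def speak(text):
    engine = pyttsx3.init()    # pick the platform default speech engine
    engine.say(text)           # queue the typed text for synthesis
    engine.runAndWait()        # block until the utterance has been spoken

speak('Lunch at noon? Meet me in building nine.')
<\/code><\/pre>\n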

Xuedong “X.D.” Huang<\/a>, General Manager of the Speech.NET Group, and the other speech researchers want to do more than create a useful product. They want to revolutionize computing. “We cannot accept the fact that we cannot solve this hard problem, that we can’t make a machine as good as people’s brains,” Huang says. He believes that the hard problems of speech technology can be solved, and speech recognition can be perfected to become the center of an interface that will allow people to interact more naturally with their computers.<\/p>\n

Speech Technology Home<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"

Researchers in the Speech Technology group at Microsoft are working to allow the computer to travel through our living spaces as a handy electronic HAL pal that answers questions, arrange our calendars, and send messages to our friends and family. Most of us use computers to create text, understand numbers, view images, and send messages. […]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556,13554,13559],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-169702","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-human-computer-interaction","msr-research-area-social-sciences","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2004-01-29","related-publications":[165315,164932,156707,156699],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/169702"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":2,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/169702\/revisions"}],"predecessor-version":[{"id":388112,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/169702\/revisions\/388112"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=169702"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=169702"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=169702"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=169702"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=169702"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}