{"id":306383,"date":"2009-11-04T19:00:13","date_gmt":"2009-11-05T03:00:13","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=306383"},"modified":"2016-10-16T18:42:53","modified_gmt":"2016-10-17T01:42:53","slug":"making-car-infotainment-simple-natural","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/making-car-infotainment-simple-natural\/","title":{"rendered":"Making Car Infotainment Simple, Natural"},"content":{"rendered":"

By Rob Knies, Managing Editor, Microsoft Research

You’re steering with your left hand while your right is punching car-stereo buttons in eager search of that amazing new Lady Gaga song. Your mobile phone rings, and as you adjust your headset (hands-free, naturally), the driver in front of you slams on his brakes …

Sound familiar? For drivers, such a scenario is almost commonplace. These days, the automobile is tricked out with all sorts of conveniences, designed to make driving a comfortable, media-rich experience. But there is a cognitive price to pay in operating these devices while keeping sufficient concentration on the road.

Does it have to be that way, though? Researchers from Microsoft Research Redmond aim to find out.

\"Commute

Ivan Tashev (left), Yun-Cheng Ju, and Mike Seltzer (at wheel) demonstrate their Commute UX driving simulator.<\/p><\/div>\n

Ivan Tashev, Mike Seltzer, and Yun-Cheng Ju, members of the Speech Technology group, are leading a research project called Commute UX, an interactive dialog system for in-car infotainment that makes finding a person to call or a song to play easy and efficient, using natural language input and a multimodal user interface.
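The article does not describe how the system interprets spoken commands, but the following is a minimal, hypothetical sketch of the kind of intent-and-slot handling such a dialog system performs. The intents, patterns, and function names are illustrative assumptions, not Commute UX's actual grammar or code.

```python
import re

# Hypothetical command patterns for a simple in-car dialog front end.
# Real systems like Commute UX use far richer language understanding;
# this only illustrates the call-a-person / play-a-song idea above.
INTENT_PATTERNS = [
    ("call_contact", re.compile(r"^(?:call|phone|dial)\s+(?P<contact>.+)$", re.IGNORECASE)),
    ("play_song",    re.compile(r"^play\s+(?P<song>.+)$", re.IGNORECASE)),
]

def parse_utterance(utterance: str):
    """Return (intent, slots) for a recognized command, or (None, {}) otherwise."""
    text = utterance.strip()
    for intent, pattern in INTENT_PATTERNS:
        match = pattern.match(text)
        if match:
            return intent, match.groupdict()
    return None, {}

if __name__ == "__main__":
    print(parse_utterance("Call Mike Seltzer"))
    # ('call_contact', {'contact': 'Mike Seltzer'})
    print(parse_utterance("Play that new Lady Gaga song"))
    # ('play_song', {'song': 'that new Lady Gaga song'})
```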

“People are in their cars more and more,” Seltzer says, “and they’re trying to do more and more while they’re driving. We’re trying to figure out how we can enable people to do at least some of the things they would like to do in a way that is safer and more natural.

“Those things are correlated. If you could just speak to the system as you would to a passenger, you wouldn’t need to remember hundreds of commands and all the rules of how to use the system. You could keep your brainpower focused on the driving, keep your eyes on the road and your hands on the wheel, and, hopefully, you’d be safer on the road.”

Alex Acero, research-area manager of the Speech Technology group, has a unique vantage point from which to assess the value of the Commute UX project.

“I’ve been working on speech recognition for 25 years,” he says, “and have seen researchers get excited about speech in one application, only to see that the technology doesn’t take off because users find other alternatives to accomplish their task. But for the car, we have not found an obvious safe alternative to speech, so I’m very excited about the role of speech technology in automobiles, and Commute UX in particular.”

The hardware involved in the Commute UX project is simple and would not be unfamiliar to drivers of late-model cars already on the road: microphones, a touch screen, a cluster of buttons on the steering wheel. Simplicity is the key to minimizing driver distraction, the Commute UX researchers say, and that maxim is reflected in the project’s guiding principles, which are focused on improving user satisfaction with such in-car systems: