Host: …AI. And the AI part is super important. Without it, the research would look a lot different, I think. So, tell us how AI is changing the game for innovation in your field, in ability, enabling technology and human-computer interaction.

Meredith Ringel Morris: Absolutely. I think there are a lot of AI advances that are going to enable new experiences for people with disabilities. But it's also vitally important that the needs of that user population be considered by the people involved in the AI research directly. Thanks to advances in deep learning, there's been an explosion recently in the ability of computer vision systems to label photographs. And that's wonderful. And, of course, to me, an obvious application for a system that can automatically label photographs is to caption images for people who are visually impaired. Because when browsing the web right now, a screen reader can only describe an image if the author of a webpage has supplied alt text for it.
Host: Description.

Meredith Ringel Morris: That's right. And about 50 percent of images on major websites right now lack alt text completely, so a blind user wouldn't receive any description. One might think, well, we'll just use these new AI technologies to caption the images. But there are some challenges with that. Most of these technologies are not developed with the scenario of use by people who are visually impaired in mind. There's an assumption that if a mistake is made in labeling an image, the cost of that mistake is relatively low, because a sighted user can see that a particular image doesn't match the retrieval terms and simply ignore the mistake. But for someone who is visually impaired, the cost can actually be quite high. And that, I think, is a more fundamental problem in AI research, one that our perspective from HCI and from working with end users can really inform through collaboration.
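That cost asymmetry suggests captioning systems should surface their uncertainty rather than state low-confidence guesses as fact. Here is a minimal sketch of that idea in Python; the function name, thresholds, and phrasings are hypothetical illustrations, not the behavior of any system discussed in the episode.

```python
# A minimal sketch (not from the interview) of how an image-captioning
# pipeline might hedge its output for screen-reader users instead of
# presenting low-confidence guesses as fact. Names and thresholds are
# hypothetical.

def caption_for_screen_reader(caption: str, confidence: float) -> str:
    """Wrap a machine-generated caption in language that conveys
    the system's uncertainty to a blind or low-vision user."""
    if confidence >= 0.9:
        return f"Image of {caption}."
    if confidence >= 0.5:
        return f"Image that may show {caption}."
    # Below the floor, admitting ignorance beats a confident mistake.
    return "Image; no reliable description available."

print(caption_for_screen_reader("a dog catching a frisbee", 0.93))
print(caption_for_screen_reader("a group of people at a table", 0.62))
print(caption_for_screen_reader("a cat on a skateboard", 0.31))
```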
Host: Right. Let's talk about collaboration and circle back to the inception of the Enable group. It began as a partnership with Team Gleason. I'm not sure everyone knows who Steve Gleason is, what the collaboration started as, and how it's advanced and grown.

Meredith Ringel Morris: Sure. The Enable team, which is led by Rico Malvar, the Chief Scientist of Microsoft Research, grew out of a partnership with Team Gleason as part of Microsoft's one-week Hackathon. Steve Gleason is a very well-known football player who was diagnosed at a very young age with ALS, also known as Lou Gehrig's disease. As people with ALS progress and lose muscle control, they need to rely on other tools, such as wheelchairs for mobility and AAC (augmentative and alternative communication) devices for communication and speech generation. And Steve was not satisfied with the state of current technologies in that area. Unfortunately, there's no medical cure right now for ALS, and Steve is very well-known for a quote that I'll try to reproduce accurately here. His approximate quote is that, "While there is no medical cure for ALS, technology can be the cure." By that he means that, in this social model of disability, technology plays an important role in removing barriers to access and making people less disabled in their daily interactions.

His team has worked closely with Microsoft on two particular projects, which were the initial projects of the Enable team. One is allowing someone to use eye gaze to drive their own wheelchair, to have more autonomy. The other is improved interfaces for typing with eye gaze. Typically, the way people with ALS and other serious motor conditions communicate is by using the eyes to type. You stare at a particular letter on the screen, and the eye tracker waits a set amount of time until the system is confident that you're looking at that letter. Then that letter is typed, and you do that one letter at a time. When you're done typing something, you hit a button that speaks it out in a computer-generated voice. Typically, people achieve typing rates that are very slow, about 5 to 10 words a minute, whereas conversational English speech is closer to 190 words a minute. That is a huge impediment to participation in daily life when you're communicating that slowly. So, just letting people type, letter by letter, is not going to get you up to regular rates of speech. You need prediction, and that's where AI comes in again. The Enable team has been working on better user interfaces, better algorithms behind the scenes, and word prediction in order to improve that kind of speech.

And of course, it's not just the words. Another thing we've been thinking about is, again, how you let people express themselves more quickly and more richly. Like they say, a picture is worth 1,000 words. We've looked at adding a row of keys to the communication device, each bearing a different emoji, representing the most common human emotions: anger, surprise, sadness, happiness. You just add one of these emoji to your sentence as punctuation, so it's only one more key press, keeping input from the user low, since each input takes such great effort.
We use those not only to modify the prosody of the output behind the scenes, but also to insert clips of non-speech audio. So, for example, if you add a surprised emoji to a sentence, you might get "Hhhhhhaa!" inserted, and if you add the angry emoji, you might get "Ahhhrrr!" These communicate a great amount of emotion and nuance for such a low effort, a single key press. And that's been really well received by users in our research testing.
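As a rough illustration of that design, here is a sketch that maps an emoji "punctuation" key press to both a prosody adjustment and an optional non-speech clip, rendered as SSML (a real W3C speech-synthesis markup standard). The emoji-to-effect mapping and clip URLs are made up; this is not the Enable team's actual implementation.

```python
# A rough sketch of the idea described above: a single emoji key press
# both adjusts prosody and splices in a non-speech audio clip. The
# mapping and clip URLs are hypothetical; SSML itself is a real W3C
# standard, but this is not the Enable team's actual code.

EMOJI_EFFECTS = {
    "angry":     {"pitch": "-15%", "rate": "110%", "clip": "https://example.com/clips/growl.wav"},
    "surprised": {"pitch": "+20%", "rate": "105%", "clip": "https://example.com/clips/gasp.wav"},
    "sad":       {"pitch": "-10%", "rate": "85%",  "clip": None},
    "happy":     {"pitch": "+10%", "rate": "100%", "clip": None},
}

def to_ssml(sentence: str, emoji: str) -> str:
    """Render a typed sentence as SSML, using the emoji 'punctuation'
    to modify prosody and optionally append a non-speech sound."""
    fx = EMOJI_EFFECTS[emoji]
    audio = f'<audio src="{fx["clip"]}"/>' if fx["clip"] else ""
    return (
        f'<speak><prosody pitch="{fx["pitch"]}" rate="{fx["rate"]}">'
        f"{sentence}</prosody>{audio}</speak>"
    )

print(to_ssml("You are kidding me", "surprised"))
```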
(music plays)

Host: You talked about what you do with universities and so on, and I know you're affiliated with the University of Washington, and you're involved in a program called DUB. Can you tell us what that is?

Meredith Ringel Morris: Sure.
Host: And what do you do?

Meredith Ringel Morris: DUB is a little bit of a play on words because, of course, the University of Washington is abbreviated as UDub. But the DUB research group, spelled D-U-B, stands for Design, Use, Build. It's an interdisciplinary consortium of faculty from several different departments, so Computer Science, the Information School, Human Centered Design and Engineering, Arts and Design, who are all interested in creating technology that is more human-centric for end users. Myself and several other researchers from Microsoft Research are actively involved in collaborations with DUB.
Host: So how did you end up here? Give us a little bit about your background and what brought you to Microsoft Research.

Meredith Ringel Morris: Sure. My background is that I studied computer science as an undergraduate at Brown University, which at the time had a very traditional computer science department, so there was no HCI. I learned about HCI by browsing the web. I found the website of the Stanford Interactive Workspaces Project, an early project in ubiquitous computing, thinking about how we can design spaces where computing is embedded in the environment. I remember that webpage had a picture of a room that looked very futuristic to me at the time: large, wall-sized displays and a table with a display embedded in it. So I contacted the professor in charge of that project at Stanford, Terry Winograd, to ask how I could get involved in this kind of work, and I ended up doing some volunteer research projects with him. I eventually went to graduate school in the computer science department there, and that was my route into learning more about the field of HCI.

While I was at Stanford, I was fortunate to get an internship with Microsoft Research, where I worked with Eric Horvitz and Susan Dumais on desktop search, which was very new at the time, thinking about how you could allow people to use context they might remember to assist in search. So, instead of only searching for something by the file name, you could specify other things you might remember, like, "Oh, I remember it was that PowerPoint document that I wrote the day after the presidential election." Or, "Oh, it was that email someone sent me right after my son's birthday party." You could use these anchors, which are more meaningful and memorable to people than file names, as a way into searching for information. That was a really exciting project that expanded my knowledge again and got me thinking about information retrieval. But I also learned about the culture of Microsoft Research, and I was excited to come and work here when I graduated.
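To make the idea concrete, here is a toy sketch of context-anchored search: instead of matching file names, it filters files by metadata relative to a remembered event. The paths, event date, and helper name are illustrative, not the project's actual code.

```python
# A toy sketch of "search by remembered context": filter by file
# metadata anchored to a memorable event rather than by file name.
# Paths, the suffix, and the event date are made up for illustration.

from datetime import datetime, timedelta
from pathlib import Path

def files_near_event(root: Path, event: datetime, days: int = 2, suffix: str = ".pptx"):
    """Yield files of a given type modified within `days` of a
    remembered event, e.g. 'the deck I wrote right after the election'."""
    window = timedelta(days=days)
    for path in root.rglob(f"*{suffix}"):
        modified = datetime.fromtimestamp(path.stat().st_mtime)
        if abs(modified - event) <= window:
            yield path

election = datetime(2004, 11, 2)  # hypothetical remembered anchor
for hit in files_near_event(Path.home() / "Documents", election):
    print(hit)
```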
Host: Given the insights that you get from the work you're doing, which is both powerfully beneficial for good but could also have some downsides, is there anything that sort of keeps you up at night?

Meredith Ringel Morris: I think one is thinking about AI systems that are more inspectable and understandable. Not only inspectable by AI researchers, which is, in and of itself, still a challenge, but inspectable by end users, so that they can really understand when to place their trust in a system and what a system is doing. I also think there are challenges in developing AI systems that balance augmenting users' capabilities with the privacy needs of the users themselves and of people in the surrounding environment. One might imagine, hypothetically, that someone who is visually impaired might benefit from an outward-facing smart camera, a "wearable," that could sense things in the environment using computer vision and describe them. That could have a great benefit to the end user but might carry a privacy cost for people in the surrounding environment who may not be actively consenting to being captured. Thinking carefully about those kinds of ethical challenges is actually a really important part of AI research. And I know Microsoft has now formed a group called AETHER, A-E-T-H-E-R, which is thinking specifically about AI and ethics. The A is for AI, the E is for ethics, and I think that's one of the areas that will be really interesting to tackle.
Host: Microsoft just announced a new initiative at the annual Build conference that's pretty exciting. Tell us what it is, who it will impact, and what it tells us about Microsoft's broader mission for technology in the 21st century.

Meredith Ringel Morris: Yes, you're referring to the AI for Accessibility initiative. This is an exciting new program designed to support grassroots innovation in the accessibility space. Microsoft is really interested in encouraging students, entrepreneurs, et cetera, to think about how they can use Microsoft technology to enable important scenarios for people with disabilities. For example, how can we use these technologies to allow people to be more productive in their work life, or to participate more fully in social life outside of work? The AI for Accessibility initiative offers funding opportunities that people can apply for: you describe how your project or app fits into this vision and how Microsoft can help support your success in this space, perhaps by donating compute time on our Azure servers, or by allowing free API calls to our Cognitive Services APIs, which offer shortcuts to some of our advances in AI technology, like libraries for computer vision and natural language processing. We're really excited about this initiative and about seeing what kinds of great project proposals and applications come in. I think it accentuates Microsoft's commitment to its mission of empowering all users in their lives, and empowering all users really means all users, including the 1 billion people worldwide with disabilities. So, it's great to see that being emphasized in this new initiative.
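For a sense of what "free API calls to Cognitive Services" can look like in practice, here is a hedged sketch of calling the Computer Vision "describe" endpoint to caption an image. The region, key, and image URL are placeholders, and the API version shown (v2.0, roughly current when this episode aired) may differ from what Azure offers today.

```python
# A hedged sketch of calling the Cognitive Services Computer Vision
# "describe" endpoint mentioned above. Region, key, and image URL are
# placeholders; the endpoint version and response shape may have
# changed since this episode aired.

import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/describe"
SUBSCRIPTION_KEY = "your-key-here"  # placeholder

def describe_image(image_url: str) -> tuple[str, float]:
    """Ask the Computer Vision API for a one-line caption and its confidence."""
    resp = requests.post(
        ENDPOINT,
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    resp.raise_for_status()
    caption = resp.json()["description"]["captions"][0]
    return caption["text"], caption["confidence"]

text, confidence = describe_image("https://example.com/photo.jpg")
print(f"{text} (confidence {confidence:.2f})")
```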
(music plays)

Host: Merrie Ringel Morris, I so enjoyed our conversation today. My eyes were opened, shall we say? Thank you for coming in and sharing the work that you're doing and the passion behind it.

Meredith Ringel Morris: Great, thank you.
Host: To learn more about Dr. Merrie Ringel Morris and how AI is helping people with disabilities all over the world, visit Microsoft.com/research.