<p><b>Workshop Location:</b> Microsoft Building 99, 14820 NE 36th St, Redmond, WA 98052</p>
<p><b>Workshop Hotel:</b> Courtyard by Marriott (Bellevue/Redmond), 14615 NE 29th Place, Bellevue, WA 98007</p>
<p><b>Closest Airport:</b> Seattle-Tacoma International Airport (abbreviated Sea-Tac or SEA)</p>
<p><b>Dinner Location on Feb. 25:</b> Bai Tong Restaurant, 14804 NE 24th St, Redmond, WA 98052</p>
<p><b>Note that Microsoft has chartered a bus to bring guests staying at the hotel to and from the workshop. The bus will be outside the hotel at 8 a.m. each morning. Bus transport from the workshop to dinner and from dinner back to the hotel is also provided. The hotel is also a short (10-minute) walk from the workshop location, for guests who would prefer to walk.</b></p>
<p><strong>Organizers:</strong><br />
Danielle Bragg, Postdoctoral Researcher<br />
Meredith Ringel Morris, Principal Researcher<br />
Mary Bellard, Senior Accessibility Architect</p>

<p><strong>Summary:</strong><br />
Developing successful sign language recognition and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, linguistics, and Deaf culture. In an effort to support people who are Deaf and Hard-of-Hearing in communicating in the language of their choice in more scenarios, we want to better understand advancements in sign language recognition and translation, and explore what is possible in this area of technology.</p>
<p>Microsoft is bringing together a diverse group of experts with relevant skills for a 2-day workshop to discuss the state of the art, imminent challenges, and possible solutions in the area of sign language recognition and translation. A main goal for the workshop is to produce a map of the current landscape and a set of challenges for the community to tackle next.</p>

<p><strong>Date:</strong> Monday, February 25, 2019 – Tuesday, February 26, 2019</p>

<p><strong>Time:</strong> 9:00 AM – 8:00 PM (Feb. 25); 9:00 AM – 3:00 PM (Feb. 26)</p>

<p><strong>Location:</strong> Microsoft Research Redmond, Building 99, room 1919 (1927 and 1915 available for breakouts if needed)</p>

<p><strong>Monday, February 25, 2019</strong></p>

<table>
<tbody>
<tr>
<td>08:00 AM</td>
<td>Bus leaves from hotel for workshop location</td>
<td></td>
</tr>
<tr>
<td>08:30 AM</td>
<td>Breakfast Available, Check-in</td>
<td></td>
</tr>
<tr>
<td>09:00 AM</td>
<td>Welcome</td>
<td>Mary Bellard</td>
</tr>
<tr>
<td>09:30 AM</td>
<td>Ice Breaker</td>
<td></td>
</tr>
<tr>
<td>10:30 AM</td>
<td><strong>An Introduction to Deaf Culture</strong><br />
Abstract: This introduction will briefly cover elements of Deaf culture, language, accessibility, and cultural competence. With so many products and technologies developed by non-Deaf people, the Deaf community is struggling with a "disability designation" when facing barriers that could have been avoided if developers had taken Deaf people into consideration, using universal design to keep our world accessible to all people and empowering to Deaf people themselves.</td>
<td>Lance Forshay</td>
</tr>
<tr>
<td>11:15 AM</td>
<td><strong>Designing Technology for Sign Languages and their Communities</strong><br />
Abstract: Language is a technology of the human body. Languages emerge and evolve to communicate most efficiently what their users need, can use, and will use. Originating in deaf communities around the world, sign languages exploit the capacities of the visual-manual modality and the grammatical use of body-space configurations for communicative function. Users of sign language, however, are a diverse group of individuals, especially with respect to hearing status and linguistic competence and experience. This diversity impacts how sign languages are used in the practice of everyday life, which one needs to account for in designing sign language tools. This talk covers the key elements of sign language structure that are relevant for designing language technologies involving sign languages and their users.</td>
<td>Deniz Ilkbasaran</td>
</tr>
<tr>
<td>12:00 PM</td>
<td>Lunch</td>
<td></td>
</tr>
<tr>
<td>01:00 PM</td>
<td><strong>Putting Words into Computers</strong><br />
Abstract: The field of natural language processing (NLP) is now delivering powerful technologies: computers can translate from one language to another, answer questions, and hold conversations. In this relatively non-technical talk, I will trace the evolution of one aspect of NLP programs: how we put English (or other-language) words into computers. While this talk won't teach you everything about NLP, it will illuminate some of its toughest challenges and most exciting recent advances.</td>
<td>Noah Smith</td>
</tr>
<tr>
<td>01:45 PM</td>
<td><strong>Computer Vision Meets Speech Recognition: Challenges and Recent Development of Sign Language Recognition</strong><br />
Abstract: This talk will present recent advances in the field of sign language recognition, observed from an interdisciplinary viewpoint at the intersection of speech recognition and computer vision. We will show several examples, analyze available data sets and trending methods, and discuss what challenges remain to be tackled.</td>
<td>Oscar Koller</td>
</tr>
<tr>
<td>02:30 PM</td>
<td>Break</td>
<td></td>
</tr>
<tr>
<td>03:00 PM</td>
<td><strong>Learning from Human Movements to Create Accurate Sign Language Animations</strong></td>
<td></td>
</tr>
</tbody>
</table>