{"id":559521,"date":"2019-02-05T18:10:09","date_gmt":"2019-02-06T02:10:09","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&p=559521"},"modified":"2019-02-19T18:18:28","modified_gmt":"2019-02-20T02:18:28","slug":"microsoft-ai-for-accessibility-sign-language-recognition-translation-workshop","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-ai-for-accessibility-sign-language-recognition-translation-workshop\/","title":{"rendered":"Microsoft AI for Accessibility Sign Language Recognition & Translation Workshop"},"content":{"rendered":"

Organizers:<\/strong>
\nDanielle Bragg, Postdoctoral Researcher
\nMeredith Ringel Morris, Principal Researcher
\nMary Bellard, Senior Accessibility Architect<\/p>\n

Summary:
\n<\/b>Developing successful sign language recognition and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, linguistics, and Deaf culture. To support people who are Deaf and Hard-of-Hearing in communicating in the language of their choice in more scenarios, we want to better understand advances in sign language recognition and translation, and to explore what is possible in this area of technology.<\/p>\n

Microsoft is bringing together a diverse group of experts with relevant skills for a 2-day workshop to discuss the state of the art, imminent challenges, and possible solutions in the area of sign language recognition and translation. A main goal of the workshop is to produce a map of the current landscape and a set of challenges for the community to tackle next.<\/p>\n","protected":false},"excerpt":{"rendered":"

A 2-day academic workshop to discuss the state of the art, imminent challenges, and possible solutions in the area of sign language recognition and translation.<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2019-02-25","msr_enddate":"2019-02-26","msr_location":"Microsoft Research Redmond, Building 99, room 1919 (1927 and 1915 available for breakouts if needed)","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"9:00 AM \u2013 8:00 PM, 9:00 AM \u2013 3:00 PM","msr_hide_region":false,"msr_private_event":true,"footnotes":""},"research-area":[13556,13562,13551,13545,13554],"msr-region":[197900],"msr-event-type":[197944],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-559521","msr-event","type-msr-event","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-graphics-and-multimedia","msr-research-area-human-language-technologies","msr-research-area-human-computer-interaction","msr-region-north-america","msr-event-type-hosted-by-microsoft","msr-locale-en_us"],"msr_about":"Organizers:<\/strong>\r\nDanielle Bragg, Postdoctoral Researcher\r\nMeredith Ringel Morris, Principal Researcher\r\nMary Bellard, Senior Accessibility Architect\r\n\r\nSummary:\r\n<\/b>Developing successful sign language recognition and translation systems requires expertise in a wide range of fields, including computer vision, computer graphics, natural language processing, linguistics, and Deaf culture. To support people who are Deaf and Hard-of-Hearing in communicating in the language of their choice in more scenarios, we want to better understand advances in sign language recognition and translation, and to explore what is possible in this area of technology.\r\n\r\nMicrosoft is bringing together a diverse group of experts with relevant skills for a 2-day workshop to discuss the state of the art, imminent challenges, and possible solutions in the area of sign language recognition and translation. A main goal of the workshop is to produce a map of the current landscape and a set of challenges for the community to tackle next.","tab-content":[{"id":0,"name":"Schedule","content":"Date:<\/strong> Monday, February 25, 2019 \u2013 Tuesday, February 26, 2019\r\n\r\nTime:<\/strong> 9:00 AM \u2013 8:00 PM, 9:00 AM \u2013 3:00 PM\r\n\r\nLocation:<\/strong> Microsoft Research Redmond, Building 99, room 1919 (1927 and 1915 available for breakouts if needed)\r\n\r\nMonday, February 25, 2019<\/strong>\r\n\r\n<tbody>\r\n
08:00 AM <\/span><\/td>\r\nBus leaves from hotel for workshop location<\/td>\r\n <\/span><\/td>\r\n<\/tr>\r\n
08:30 AM\u00a0<\/span><\/td>\r\nBreakfast Available, Check-in<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
09:00 AM\u00a0<\/span><\/td>\r\nWelcome<\/td>\r\nMary Bellard\u00a0<\/span><\/td>\r\n<\/tr>\r\n
09:30 AM\u00a0<\/span><\/td>\r\nIce Breaker\r\n\r\n <\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
10:30 AM\u00a0<\/span><\/td>\r\nAn Introduction to Deaf Culture<\/strong>\u00a0<\/span>\r\n\r\nAbstract: This introduction will briefly cover elements of Deaf culture, language, accessibility, and cultural competence.\u00a0 With so many products and technologies developed by non-Deaf people, the Deaf community struggles with a \"disability designation\" when its members face barriers that could have been avoided if developers had taken Deaf people into consideration and applied universal design, keeping our world accessible to all people and empowering to Deaf people themselves.<\/td>\r\nLance Forshay<\/td>\r\n<\/tr>\r\n
11:15 AM\u00a0<\/span><\/td>\r\nDesigning Technology for Sign Languages and their Communities\u00a0<\/strong>\r\n\r\nAbstract: Language is a technology of the human body. Languages emerge and evolve to most efficiently communicate what their users need, can use and will use. Originating in deaf communities around the world, sign languages exploit the capacities of the visual-manual modality and the grammatical use of body-space configurations for communicative function. Users of sign language, however, are a diverse group of individuals, especially with respect to hearing status and linguistic competence and experience. This diversity impacts how sign languages are used in the practice of everyday life, which one needs to account for in designing sign language tools. This talk covers the key elements of sign language structure that are relevant for designing language technologies involving sign language and their users.\u00a0\u00a0<\/span><\/td>\r\nDeniz Ilkbasaran\u00a0<\/span><\/td>\r\n<\/tr>\r\n
12:00 PM\u00a0<\/span><\/td>\r\nLunch\u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
01:00 PM\u00a0<\/span><\/td>\r\nPutting Words into Computers<\/strong>\u00a0<\/span>\r\n\r\nAbstract: The field of natural language processing (NLP) is now delivering powerful technologies: computers can translate from one language to another, answer questions, and hold conversations.\u00a0 In this relatively non-technical talk, I will trace the evolution of one aspect of NLP programs: how we put English (or other-language) words into computers.\u00a0 While this talk won't teach you everything about NLP, it will illuminate some of its toughest challenges and most exciting recent advances.\u00a0<\/span><\/td>\r\nNoah Smith\u00a0<\/span><\/td>\r\n<\/tr>\r\n
01:45 PM<\/td>\r\nComputer Vision Meets Speech Recognition: Challenges and Recent Developments in Sign Language Recognition<\/strong>\r\n\r\nAbstract: This talk will present recent advances in the field of sign language recognition, observed from an interdisciplinary viewpoint at the intersection of speech recognition and computer vision. We will show several examples, analyze available data sets and trending methods, and discuss what challenges remain to be tackled.<\/td>\r\nOscar Koller<\/td>\r\n<\/tr>\r\n
02:30 PM\u00a0<\/span><\/td>\r\nBreak\u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
03:00 PM\u00a0<\/span><\/td>\r\nLearning from Human Movements to Create Accurate Sign Language Animations<\/strong>\u00a0<\/span>\r\n\r\nAbstract: There is great diversity in the levels of English reading skill and in the language preferences among members of the U.S. Deaf Community, and many individuals prefer to receive information in the form of American Sign Language (ASL). Therefore, providing ASL on websites can make information and services more accessible. Unfortunately, video recordings of human signers are difficult to update when information changes, and there is no way to support just-in-time generation of website content from a user request. Software is needed that can automatically synthesize understandable animations of a virtual human performing ASL, based on an easy-to-update script as input.\u00a0The challenge is for this software to select the details of such animations so that they are linguistically accurate, understandable, and acceptable to users.\u00a0 This talk will provide an overview of Huenerfauth's research in using machine-learning techniques to model human movements. His methodology includes: video and motion-capture data collection from signers to collect a corpus of ASL, linguistic annotation of this corpus, statistical modeling techniques, animation synthesis, and experimental evaluation studies with native ASL signers.\u00a0In this way, his laboratory has found models that underlie the accurate and natural movements of virtual human characters performing ASL. In recent work, his laboratory has created models to predict essential speed and timing parameters for such animations.\u00a0<\/span><\/td>\r\nMatt Huenerfauth\u00a0<\/span><\/td>\r\n<\/tr>\r\n
03:45 PM\u00a0<\/span><\/td>\r\nCrowdsourcing Sign Language Data through Educational Resources and Games<\/strong>\r\n\r\nAbstract: Sign language users lack many fundamental resources. At the same time, computer scientists working on sign language modeling and translation often lack appropriate training data. In this talk, I present the opportunity to design sign language resources that simultaneously meet community needs and collect large corpora of sign language data to support computational efforts. I will demonstrate this potential through three main systems: 1) ASL-Search, a feature-based ASL dictionary trained on crowdsourced data from volunteer ASL students, 2) ASL-Flash, a site that both helps people learn ASL and collects feature evaluations of signs, and 3) ASL-Video, a platform for collecting sign language videos from diverse signers. This is joint work by researchers at Microsoft Research, the University of Washington, and Boston University.<\/td>\r\nDanielle Bragg\u00a0<\/span><\/td>\r\n<\/tr>\r\n
04:30 PM\u00a0<\/span><\/td>\r\nBreak\u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
04:45 PM\u00a0<\/span><\/td>\r\nPanel: Technology and Sign Language Users<\/strong>\u00a0<\/span><\/td>\r\nModerator:<\/b> Larwan Berke\u00a0<\/span>\r\n\r\nPanelists:<\/b> Michael Anthony, Lance Forshay, Leah Katz-Hernandez, Christian Vogler\u00a0<\/span><\/td>\r\n<\/tr>\r\n
05:30 PM\u00a0<\/span><\/td>\r\nReflection and Breakout Planning\u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
05:45 PM\u00a0<\/span><\/td>\r\nBoard bus from workshop to restaurant\u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
06:00-08:00 PM\u00a0<\/span><\/td>\r\nBanquet\u00a0(by invitation only)<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n\r\nTuesday, February 26, 2019<\/b>\r\n\r\n<tbody>\r\n
08:00 AM <\/span><\/td>\r\nHotel guests board bus to workshop location<\/td>\r\n <\/span><\/td>\r\n<\/tr>\r\n
08:30 AM\u00a0<\/span><\/td>\r\nBreakfast Available\u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
09:00 AM\u00a0<\/span><\/td>\r\nTask for the Day<\/td>\r\nDanielle Bragg\u00a0<\/span><\/td>\r\n<\/tr>\r\n
09:30 AM\u00a0<\/span><\/td>\r\nBreakout Sessions<\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
12:00 PM\u00a0<\/span><\/td>\r\nLunch<\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
01:00 PM\u00a0<\/span><\/td>\r\nBreakout Results Presentations<\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n
02:30 PM\u00a0<\/span><\/td>\r\nDiscussion and Next Steps<\/td>\r\n\u00a0Meredith Ringel Morris<\/span><\/td>\r\n<\/tr>\r\n
03:00 PM\u00a0<\/span><\/td>\r\nEnd \u00a0<\/span><\/td>\r\n\u00a0<\/span><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>"},{"id":1,"name":"Speakers","content":"

Michael Anthony<\/h3>\r\nMicrosoft\r\n\r\nMichael is a deaf software engineer at Microsoft developing the core commerce platform services that drive the company's e-commerce. A graduate of RIT with a dual BS in Computer Science and Game Development & Design, he is always looking at incorporating technology into our daily lives as well as into unexpected places. Michael also serves on the steering committee for Gaming and Disability (a Gaming4Everyone community).\r\n

Mary Bellard<\/h3>\r\nMicrosoft\r\n\r\nMary Bellard is the Senior Accessibility Architect at Microsoft. There, she leads the accessibility innovation program to bring more inclusive and revolutionary ideas to market, and helped launch the Microsoft AI for Accessibility program<\/a>. Previously, she developed the strategy for the company's overall accessibility training curriculum for employees and external partners, driving progress in usable experiences for everyone. Mary played a key role in developing the Microsoft Disability Answer Desk<\/a> when she joined the company in 2014 and has worked as an advocate for the disability community for more than 11 years.\r\n

Larwan Berke<\/h3>\r\nRochester Institute of Technology\r\n\r\nLarwan Berke is a fourth-year PhD candidate in Computing and Information Sciences at RIT, studying Human-Computer Interaction, specifically Automatic Speech Recognition as an accessibility tool for DHH individuals.\r\n

Danielle Bragg<\/h3>\r\nMicrosoft Research\r\n\r\nDanielle Bragg is a postdoctoral researcher at Microsoft Research New England<\/a>. Her research focuses on developing systems that expand access to information for people with disabilities, in particular sign language users and low-vision readers. Her work is highly interdisciplinary, combining Accessibility, Human-Computer Interaction, and Applied Machine Learning. She takes data-driven approaches to address accessibility problems, striving to help make the world a more equitable place. Her diverse past research projects have spanned computational biology, computer music, applied mathematics, data visualization, and network protocols. She recently completed her PhD in Computer Science & Engineering at the University of Washington<\/a>, advised by Richard Ladner<\/a>. Before starting her PhD, she received her AB in Applied Mathematics from Harvard University.\r\n

Lance Forshay<\/h3>\r\nDepartment of Linguistics, University of Washington\u00a0<\/span>\r\n\r\nLance was born Deaf and has at least five generations of Deaf family members (30-35 Deaf relatives around the country).\u00a0 He graduated from Kansas School for the Deaf and Gallaudet University.\u00a0 He has a BA in Mathematics and Secondary Education, as well as an MS in Ministry from Southern Christian University (now Amridge University) in Montgomery, AL. Lance came to the University of Washington when it started its ASL program in 2007.\u00a0 Since that time, the program has expanded into a three-year ASL and Deaf Studies Minor Program.\u00a0 He played a big role in the UW ASL Club\u2019s affiliation with the national ASL Honors Society and in the establishment of the D-Center, a student center run in collaboration between students in the Disability Studies program and the Deaf Culture\/ASL Studies program, which is the first of its kind among American universities and colleges.\u00a0 He was promoted to the rank of Senior Lecturer in 2013 and was a recipient of the 2014 UW Distinguished Teaching Award.\u00a0 He has 30 years of ASL teaching experience.\u00a0<\/span>\r\n\r\nLance is a past president of the Washington ASL Teacher Association and a member of the national ASL Teacher Association and the Washington State Association of the Deaf; he is also Deaf Ministry Director at Lighthouse Christian Center in Puyallup and a member of the Deaf Political Action Coalition, which fights for Deaf rights in the state of Washington.\u00a0 He has given many workshops and presentations on ASL grammar, Deaf culture topics, and issues related to cultural and language oppression in the Deaf world for interpreters, ASL teachers, and the Deaf community.\u00a0 He has been on the board for several agencies and organizations and is currently on the board for Deaf Missions in Council Bluffs, Iowa.\u00a0 He lives in Puyallup with his wife, Joan.\u00a0 Their kids, Matthew and Samantha, are college students at Seattle Pacific University and Gallaudet University respectively.\u00a0 His hobbies include YogaFaith, hiking, bike riding, and gardening.\u00a0\u00a0\u00a0<\/span>\r\n

Matt Huenerfauth<\/h3>\r\nRochester Institute of Technology (RIT)\u00a0<\/span>\r\n\r\nProfessor Matt Huenerfauth directs the Center for Accessibility and Inclusion Research (CAIR) at the Rochester Institute of Technology (RIT), where he leads a research group of 29 students, who operate bilingually in English and American Sign Language (ASL).\u00a0 His research focuses on the design of computing technology to benefit people who are Deaf or Hard of Hearing (DHH) or who have low levels of literacy.\u00a0 He is editor-in-chief of the leading journal in computer accessibility (TACCESS) and has served as general and program chair for the ASSETS conference, the premier research venue in computing accessibility.\u00a0 He has secured over $4.5 million in research funding, including a National Science Foundation CAREER Award in 2008, and he has authored over 75 peer-reviewed journal articles, chapters, and conference papers.\u00a0 Huenerfauth is a four-time winner of the Best Paper Award at ASSETS (more than any other individual in the conference's history) and is author of a CHI'18 honorable mention paper.\u00a0 In 2018, RIT awarded Huenerfauth the Trustees Scholarship Award, which is the university's highest research award for a faculty member.\u00a0 In 2017, the Association for Computing Machinery recognized him as a Distinguished Member for his contributions to the computing field, and he was twice elected Vice Chair of the ACM SIGACCESS special interest group on accessible computing (2015-2021).\u00a0 He received a Ph.D. in Computer and Information Science from the University of Pennsylvania in 2006.\u00a0<\/span>\r\n

Deniz Ilkbasaran<\/h3>\r\nCenter for Research in Language, UCSD\u00a0<\/span>\r\n\r\nDeniz Ilkbasaran<\/a> is a postdoctoral researcher at the University of California, San Diego, where she is jointly appointed at the Mayberry Laboratory for Multimodal Language Development<\/a> and the Padden Lab<\/a> at the Center for Research in Language (CRL<\/a>). She works with Rachel Mayberry on her NIH-funded project investigating the consequences of late sign language acquisition in deaf people, particularly regarding their language abilities and neurolinguistic organization as adults. With Carol Padden, she is involved with comparative research on emerging and established sign languages around the world.\r\n\r\nIlkbasaran received her Ph.D. in Communication<\/a> from UC San Diego in 2015, and her M.A. in Educational Technology<\/a> from Concordia University in 2007. She has been involved with sign language research since 2002, working with sign languages and their communities across Turkey, Canada, Israel, and the United States. Her main research interests include deaf people\u2019s social and communicative practices and their intersection with technologies.\u00a0\u00a0<\/span>\r\n\r\n

Leah Katz-Hernandez<\/h3>\r\nMicrosoft\r\n\r\nCurrently serving as Manager, CEO Communications, for Satya Nadella, CEO of Microsoft, Leah is a dedicated communications and community engagement professional.\r\n\r\nPrior to Microsoft, Leah was known as the celebrated ROTUS, Receptionist of the United States, for President Obama. The first-ever deaf person to hold the position, she was appointed to the West Wing after serving in First Lady Michelle Obama\u2019s communications office and for the Obama campaign during the 2012 election cycle. During the 2008 presidential campaign, Leah\u2019s groundbreaking grassroots digital communications resulted in an award-winning blog and international attention. She has also served in the offices of the United States Congress for both Republican and Democratic Members. She holds a BA in Government from Gallaudet University and an MA in Strategic Communication from American University.\r\n <\/span>\r\n\r\n\r\n

Oscar Koller<\/h3>\r\nMicrosoft\r\n\r\nOscar Koller is an applied scientist at Microsoft. His research covers statistical modeling of sign and spoken languages, encompassing audio and video signals. Before joining Microsoft, Oscar was a PhD candidate under the joint supervision of Prof. Ney (RWTH Aachen, Germany) and Prof. Bowden (University of Surrey, UK), with whom he worked on non-intrusive, vision-based continuous sign language recognition. Oscar received his MSc from TU Berlin, Germany, and also volunteered for 12 months at the Ashanti School for the Deaf in Ghana.\r\n\r\n

Meredith Ringel Morris<\/h3>\r\nMicrosoft Research\r\n\r\nMeredith Ringel Morris<\/a> is a Principal Researcher at Microsoft Research, where she is the Research Manager of the Ability<\/a> team. She is also an affiliate professor at the University of Washington. Her research focuses on human-computer interaction and accessibility. Merrie earned her Ph.D. in computer science from Stanford University.\r\n\r\n

Noah Smith<\/h3>\r\nUniversity of Washington \/ Allen Institute for Artificial Intelligence\u00a0<\/span>\r\n\r\nNoah Smith<\/a> is a Professor in the Paul G. Allen School of Computer Science & Engineering<\/a> at the University of Washington<\/a>, as well as a Senior Research Manager at the Allen Institute for Artificial Intelligence<\/a>. Previously, he was an Associate Professor of Language Technologies and Machine Learning in the School of Computer Science<\/a> at Carnegie Mellon University<\/a>. He received his Ph.D. in Computer Science from Johns Hopkins University<\/a> in 2006 and his B.S. in Computer Science and B.A. in Linguistics from the University of Maryland<\/a> in 2001. His research interests include statistical natural language processing, machine learning, and applications of natural language processing, especially to the social sciences. His book, Linguistic Structure Prediction<\/i><\/a>, covers many of these topics. He has served on the editorial boards of the journals Computational Linguistics<\/i><\/a> (2009\u20132011), Journal of Artificial Intelligence Research<\/i><\/a> (2011\u2013present), and Transactions of the Association for Computational Linguistics<\/i><\/a> (2012\u2013present), as the secretary-treasurer of SIGDAT<\/a> (2012\u20132015 and 2018\u2013present), and as program co-chair of ACL 2016<\/a>. Alumni of his research group, Noah's ARK<\/a>, are international leaders in NLP in academia and industry; in 2017 UW's Sounding Board<\/a> team won the inaugural Amazon Alexa Prize. Smith's work has been recognized with a UW Innovation award (2016\u20132018), a Finmeccanica career development chair at CMU (2011\u20132014), an NSF CAREER award (2011\u20132016), a Hertz Foundation<\/a> graduate fellowship (2001\u20132006), numerous best paper nominations and awards, and coverage by NPR, BBC, CBC, New York Times<\/i>, Washington Post<\/i>, and Time<\/i>.\u00a0\u00a0<\/span>\r\n

Christian Vogler, PhD<\/h3>\r\nDirector, Technology Access Program, Gallaudet University\r\n\r\nDr. Christian Vogler is the director of the Technology Access Program (TAP) at Gallaudet University, a research group focused on accessible technology for the deaf and hard of hearing. He is a principal investigator within the Rehabilitation Engineering Research Center (RERC) on Technology for the Deaf and Hard of Hearing, as well as the Disability and Rehabilitation Research Project on Twenty-First Century Captioning. He also leads research into Telecommunications Relay Services access and usability, and has co-led Gallaudet University's collaboration with SignAll, a company focused on sign language recognition technology. Prior to joining TAP in 2011, Dr. Vogler worked on various research projects related to sign language recognition and facial expression recognition from motion capture and video at the University of Pennsylvania; the Gallaudet Research Institute; UNICAMP in Campinas, Brazil; and the Institute for Language and Speech Processing in Athens, Greece."},{"id":2,"name":"Transportation","content":"Workshop Location:<\/b> Microsoft Building 99, 14820 NE 36th St, Redmond, WA 98052\r\n\r\nWorkshop Hotel:<\/b> Courtyard by Marriott (Bellevue\/Redmond), 14615 NE 29th Place, Bellevue, WA 98007\r\n\r\nClosest Airport:<\/b> Seattle-Tacoma International Airport (abbreviated Sea-Tac or SEA)\r\n\r\nDinner Location on Feb. 25:<\/b> Bai Tong Restaurant, 14804 NE 24th St, Redmond, WA 98052\r\n\r\nNote that Microsoft has chartered a bus to bring guests staying at the hotel to and from the workshop. The bus will be outside the hotel at 8 a.m. each morning. Bus transport from the workshop to dinner and from dinner back to the hotel is also provided. The hotel is also a short (10-minute) walk from the workshop location, for guests who would prefer to walk.<\/b>"}],"msr_startdate":"2019-02-25","msr_enddate":"2019-02-26","msr_event_time":"9:00 AM \u2013 8:00 PM, 9:00 AM \u2013 3:00 PM","msr_location":"Microsoft Research Redmond, Building 99, room 1919 (1927 and 1915 available for breakouts if needed)","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"February 25, 2019","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":"","featured_image_thumbnail":null,"event_excerpt":"A 2-day academic workshop to discuss the state of the art, imminent challenges, and possible solutions in the area of sign language recognition and
translation.","msr_research_lab":[199563,199565],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[283244],"related-projects":[],"related-opportunities":[],"related-publications":[599343],"related-videos":[],"related-posts":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/559521"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":6,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/559521\/revisions"}],"predecessor-version":[{"id":566331,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/559521\/revisions\/566331"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=559521"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=559521"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=559521"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=559521"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=559521"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=559521"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=559521"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=559521"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=559521"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}