Empowering every person and organization on the planet to achieve more requires giving ownership of the computing experience to the individual and leveraging technological advancements to deliver products, tools, and services that people can make work for them, whatever their circumstances or abilities. Over the years, Microsoft Research has collaborated closely with people with disabilities and those who support them to thoughtfully and creatively innovate around this commitment to inclusive design and accessible technology. Below is a sampling of those efforts. To learn more, explore researchers’ experience developing the teachable AI tool Find My Things or, for research in assistive technologies and beyond, visit Microsoft Research.
-
September 2024
Find My Things recognized for innovative design
Find My Things, an object recognition tool that can be personalized with just a few videos of an item, is named a finalist in the accessible design and artificial intelligence categories of the Innovation by Design Awards from the US-based business media brand Fast Company. Developed by members of the Microsoft Research Teachable AI Experiences (Tai X) team together with a group of citizen designers, Find My Things was integrated into the Seeing AI mobile app earlier in the year.
-
2023
Foundation models used to support blind community
As part of Microsoft Research’s Accelerate Foundation Models Research (AFMR) initiative, a team from Waseda University is developing a system that will leverage vision and language foundation models to help people who are blind or have low vision with outdoor navigation.
-
July 2023
ASL Citizen dataset released
Microsoft releases ASL Citizen, the first crowdsourced isolated sign language dataset. Built in collaboration with members of the Deaf community, ASL Citizen aims to tackle the data shortage holding back AI systems that support sign language users. Microsoft researchers have been targeting this challenge for years, including with a sign language game designed to address inaccurate labels, a lack of real-world settings, and other issues present in existing sign language datasets, and with an exploration of the fairness, accountability, transparency, and ethics considerations involved in collecting sign language data.
-
September 2022
PeopleLens inspires Find My Things
Students in the United Kingdom between the ages of 5 and 11 test the advanced research prototype PeopleLens. The head-worn device leverages AI and spatialized audio to identify people in a room, helping users situate themselves in social scenarios and more confidently interact with those around them. During development, the PeopleLens research team identified the value personalization could add to such an experience. Given the complexity of social encounters, the team opted to examine personalization in a more straightforward application—object recognition—planting the seed for the personalizable object recognizer Find My Things.
-
October 2021
Ludic Design for Accessibility prioritizes play
Researchers introduce Ludic Design for Accessibility (LDA), a multistep approach that prioritizes play and exploration in the design of assistive technologies. LDA was inspired by earlier work that leveraged spatial audio technology to help make mainstream video games enjoyable for people with low vision. The approach has informed Microsoft Research projects such as an audio-based app designed to encourage physical activity among people who are blind or have low vision and a study of how teachers in low-resource schools believe digital games can best serve students who are blind. Recently, Microsoft Research India collaborated with the Srishti Manipal Institute of Art, Design and Technology to incorporate LDA into its curriculum, tasking students at the institute with designing play-based learning and skill-building experiences for children with disabilities.
-
October 2021
ORBIT dataset and benchmark released
Microsoft releases the ORBIT dataset and benchmark. The project invited members of the blind and low-vision community to contribute videos of personal items they interact with regularly. These videos were used to build a training dataset that is more inclusive of the objects people who are blind or have low vision might use, such as guide canes, and more representative of the variation in quality of images and videos captured by people who are blind or have low vision—common challenges in existing datasets.
-
October 2021
Alt-text experience studied
Researchers develop and test different interfaces for writing alt text from scratch and for providing feedback on AI-generated alt text within Microsoft PowerPoint, work that has influenced the alt-text authoring experience available in the presentation software. Prior work in alt text, the copy used by screen readers to describe visual content to people who are blind or have low vision, includes studies of people’s experiences with alt text across different types of websites, and with images on social media in particular, as well as efforts to improve the alt-text experience in social media settings.
-
April 2021
Teleworkers share perspectives during pandemic
The rapid, wide-scale adoption of remote work caused by the pandemic gives teleworkers with disabilities who were already participating in a work-from-home study the opportunity to share how having their colleagues join them in working remotely affected their experience. This work, along with publications that examine the accessibility of remote work through the experiences of neurodivergent professionals and that identify the accessibility barriers, opportunities, and conflicts specific to hybrid meetings, helps expand Microsoft’s understanding of how remote and hybrid work affects people with disabilities.
-
September 2020
Microsoft Expressive Pixels released
Microsoft Expressive Pixels is made available for free in the Microsoft Store. The development of Expressive Pixels, a platform for creating animations and images and displaying them on LEDs, was grounded in the Microsoft Research Enable team’s extensive work with people living with amyotrophic lateral sclerosis (ALS) and others in the ALS community, as well as in the goal of extending communication abilities for people with varying levels of speech and mobility. The work drew on expertise from across Microsoft, including the Microsoft Research Ability team.
-
April 2020
Researchers work to make VR and AR accessible
Researchers with the Microsoft Research Ability Group receive an honorable mention at the 2020 ACM CHI Conference on Human Factors in Computing Systems for their haptic and auditory white cane for navigating large, complex virtual environments. The team created a scavenger hunt to test the cane’s ability to convey the shapes and surface textures of different virtual objects. The work follows an earlier haptic cane controller and is part of a line of research dedicated to making virtual and augmented reality more accessible, including to people with varying levels of mobility and those who are deaf or have a hearing disability.
-
February 2020
SeeingVR toolkit made open source
SeeingVR, a set of tools for improving the virtual reality experience for people with low vision, is made open source. The tools include visual and audio augmentations, such as magnification and brightness lenses and text-to-speech functionality.
-
February 2019
Event explores sign language–based systems
Microsoft Research hosts a two-day interdisciplinary workshop to identify the opportunities and obstacles in the space of sign language recognition, generation, and translation and publishes key insights and findings from the workshop later in the year. The publication wins best paper at the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS).
-
January 2019
Code Jumper tech transferred
Microsoft transfers the research and technology behind Code Jumper, a physical programming language developed as part of Project Torino, to the nonprofit American Printing House for the Blind for broad distribution.
-
March 2018
Hands-Free Music project receives SXSW award
The Microsoft Hands-Free Music project wins the SXSW Interactive Innovation Award for music and audio innovation; the project is also a finalist in the “innovation in connecting people” category. Developed in collaboration with members of the ALS community, Hands-Free Music is a suite of music-making and expressive tools that can be controlled via eye gaze and other hands-free interaction techniques, such as adaptive controllers. In the years since the SXSW honor, the project team has developed an eye-gaze-controlled synthesizer and sequencer for live play and improvisation and the adaptive musical instrument Galactic Bell Star, among other work. A group of wounded veterans also used the project’s technologies to write an anthem for the Invictus Games.
-
October 2017
Eye Control introduced in Windows 10
Eye Control is introduced in Windows 10, enhancing the way people, including those with varying levels of mobility, use their PCs for work and play. Eye Control leverages gaze-tracking technology; with a compatible eye-tracking device, users can type, direct their mouse, and activate text-to-speech via eye movement. The feature is the result of a collaboration between the Microsoft Research Enable team and the Windows Text Input development team, building on the Enable team’s earlier work outfitting a motorized wheelchair with eye-tracking technology. Microsoft later leads a consortium of technology companies to create the first USB HID (Human Interface Device) standard for eye tracking and releases a set of APIs behind Eye Control for use by the developer community.
-
July 2017
Seeing AI released
More than a year after Microsoft CEO Satya Nadella introduces a prototype called Seeing AI at Build 2016, the mobile app is broadly released on iOS. Leveraging computer vision, natural language processing, and other technologies, Seeing AI assists people who are blind or have low vision by identifying people and things in their surroundings, including by scanning and reading aloud short text and documents. Since its release, Seeing AI’s features have expanded to include color and currency recognition and, more recently, richer descriptions and question answering powered by generative AI. Seeing AI, now also available on Android devices, has been recognized for its innovative design and impact, including by the US-based business media brand Fast Company, the mobile industry group GSMA, and the Federal Communications Commission.
-
November 2014
Audio-based prototype for exploration launched
Microsoft, Guide Dogs, and Future Cities Catapult launch the Cities Unlocked prototype. Cities Unlocked, which would come to be known as Microsoft Soundscape, uses 3D audio cues to “unlock” users’ surroundings, identifying points of interest, street names and intersections, and the direction of travel in real time to empower members of the blind and low-vision community to explore new places. In 2018, Soundscape became available on iOS and was later made open source for broader development, including by the tech-for-good organization Scottish Tech Army.
-
August 2014
Eye Gaze Wheelchair wins company hackathon
A motorized wheelchair controllable by a user’s eye movements takes top prize at the first Microsoft companywide hackathon. The work leads to the creation of the Enable team, formed to further develop the wheelchair and other assistive technologies for people with neuromotor disabilities. The motivation behind the “Eye Gaze Wheelchair” was a call to action from former NFL player Steve Gleason, who had been diagnosed with ALS a few years earlier and uses a wheelchair to get around. Gleason was seeking tech help to navigate more independently and interact with his family more easily, and he was an integral part of the chair’s development; his foundation later leveraged the technology behind it to help bring eye-drive capability to market in 2019.
Timeline contributors: Mary Bellard, Neeltje Berger, Danielle Bragg, David Celis Garcia, Matt Corwine, Ed Cutrell, Kristina Dodge, Martin Grayson, Alyssa Hughes, Daniela Massiceti, Amanda Melfi, Ann Paradiso, Brenda Potts, Carly Quill, Katie Recken, John Tang, Patti Thibodeau, Amber Tingle, Saqib Shaikh, Manohar Swaminathan, Sarah Wang, Larry West, and Katie Zoller. Code Jumper photo by Jonathan Banks for Microsoft.