AI and Accessibility: A Discussion of Ethical Considerations

  • Meredith Ringel Morris

Communications of the ACM | Author's Version

According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many such barriers: for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered.

Our Responsibility: Disability, Bias, and AI

In this talk, presented by Natasha Crampton and Meredith Ringel Morris at Microsoft’s 2020 Ability Summit, learn how AI offers tremendous potential for empowering people with disabilities and is already delivering on that promise. Yet AI also raises new challenges related to fairness and inclusion, which need to be identified and mitigated in a principled and intentional way. Learn about Microsoft’s approach to responsible AI, as well as some key research directions for AI and accessibility.

Designing Computer Vision Algorithms to Describe the Visual World to People Who Are Blind or Low Vision

A common goal in computer vision research is to build machines that can replicate the human vision system (for example, detect an object or scene category, describe an object or scene, or locate an object). A natural grand challenge for the artificial intelligence community is to design such technology to assist people who are blind in overcoming the visual challenges they face daily.

In this webinar with Dr. Danna Gurari, Assistant Professor in the School of Information at the University of Texas at Austin, and Dr. Ed Cutrell, Senior Principal Researcher in the Microsoft Research Ability Group, learn how computer vision researchers are working to create vision systems adapted to the needs of those who use them. By creating new dataset challenges, the researchers aim to empower the artificial intelligence community to work on real use cases.

To encourage the larger artificial intelligence community to collaborate on developing methods for assistive technology, we introduce the first dataset challenges built from data that originates from people who are blind. Our data comes from over 11,000 people in real-world scenarios who were seeking to learn about the physical world around them. More broadly, this dataset serves as a catalyst for uncovering hard artificial intelligence challenges that must be addressed to create more robust systems across many contexts and scenarios.

Together, we’ll explore:

  • Creating tools for people who are blind or have low vision that match their needs and complement their capabilities
  • Key challenges of teaching computers how to automatically describe pictures taken by people who are blind or low vision
  • Several potential solutions for helping computers more accurately address the needs of people who are blind or low vision

*This on-demand webinar features a previously recorded Q&A session and open captioning.

This webinar originally aired on March 26, 2020

Explore more Microsoft Research webinars: https://aka.ms/msrwebinars