Building multimodal, integrative AI systems with Platform for Situated Intelligence

In the last decade, we’ve seen fast-paced progress in many individual AI areas, such as computer vision, speech, and machine translation. However, anyone who’s tried bringing multiple AI technologies together in end-to-end systems designed for real-time, real-world interactions knows that constructing such systems remains a demanding task. Apart from research challenges that may arise in the context of a given application, the construction of multimodal, integrative AI systems is often daunting from an engineering perspective.

In this webinar, Dan Bohus, a Senior Principal Researcher in the Perception and Interaction Group at Microsoft Research, will introduce Platform for Situated Intelligence, an open-source framework that aims to address these challenges by accelerating and simplifying the development, study, debugging, and maintenance of multimodal, integrative AI systems. The framework provides infrastructure for working with temporal streams of data; an efficient model for parallel, coordinated computation; rich tools for multimodal data visualization, annotation, and processing; and an open ecosystem of components that encapsulate various AI technologies. Bohus will break down the capabilities of Platform for Situated Intelligence, demonstrate how to write a very simple application with the framework, and show how to use the available visualization tools.
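
To make this concrete, here is a minimal sketch of a Platform for Situated Intelligence application in C#, the framework's native language, along the lines of the simple application shown in the webinar. It is illustrative rather than authoritative: it assumes the Microsoft.Psi NuGet packages, and exact operator signatures may vary across releases.

    using System;
    using Microsoft.Psi;

    class Program
    {
        static void Main()
        {
            // A pipeline is the container that schedules and runs all components.
            using (var pipeline = Pipeline.Create())
            {
                // Generate a stream of doubles (0.0, 0.1, 0.2, ...),
                // emitting one message every 100 ms.
                var sequence = Generators.Sequence(pipeline, 0d, x => x + 0.1, 100, TimeSpan.FromMilliseconds(100));

                // Derive a new stream with Select, then consume it with Do,
                // printing each value along with its originating time.
                sequence
                    .Select(x => Math.Sin(x))
                    .Do((value, envelope) => Console.WriteLine($"Sin: {value:F3} @ {envelope.OriginatingTime}"));

                // Run the pipeline to completion (here, until the generator finishes).
                pipeline.Run();
            }
        }
    }

Each message carries an envelope with timing information, which is what enables the framework's time-aware stream operators and coordinated, parallel computation over streams.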

Together, you’ll explore:

  • Challenges with building multimodal, integrative AI systems
  • A model for parallel, coordinated computation over temporal streams of data
  • Tools for data visualization and debugging (see the persistence sketch after this list)
  • The available open ecosystem of components
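
As a taste of the visualization workflow, the sketch below persists a stream to a store on disk; such stores can then be opened and inspected on a timeline in Platform for Situated Intelligence Studio (PsiStudio), the framework's visualization tool. The store API shown (PsiStore) is the one in recent releases (older versions named it Store), and the store name and folder path here are illustrative placeholders.

    using System;
    using Microsoft.Psi;

    class PersistenceExample
    {
        static void Main()
        {
            using (var pipeline = Pipeline.Create())
            {
                // Create a store on disk; the name and folder are placeholders.
                var store = PsiStore.Create(pipeline, "Demo", @"C:\recordings");

                // Generate a stream and persist it under the name "Sin";
                // PsiStudio will list it under that name when the store is opened.
                var sequence = Generators.Sequence(pipeline, 0d, x => x + 0.1, 100, TimeSpan.FromMilliseconds(100));
                sequence.Select(x => Math.Sin(x)).Write("Sin", store);

                pipeline.Run();
            }
        }
    }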

*This on-demand webinar features a previously recorded Q&A session and open captioning.

Explore more Microsoft Research webinars: https://aka.ms/msrwebinars

Speakers: Dan Bohus
Affiliation: Microsoft Research