Teaching a robot to see and navigate with simulation

The ability to see and navigate is a critical operational requirement for robots and autonomous systems, yet building a real-world autonomous system that can operate safely at scale remains a very difficult task. The partnership between Microsoft Research and Carnegie Mellon University continues to advance the state of the art in autonomous systems through research focused on solving real-world challenges such as autonomous mapping, navigation, and inspection of underground urban and industrial environments. Simultaneous Localization and Mapping (SLAM) is one of the most fundamental capabilities a robot needs. We explore why SLAM is fundamentally different from, and more complicated than, static image recognition, object recognition, or activity recognition: the robot must recognize landmarks (such as buildings and trees) sequentially, in a dynamic physical environment, while driving or flying through it.
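To make the sequential nature of SLAM concrete, the toy sketch below (not the system discussed in the blog post; the landmark names, noise levels, and the simple averaging "correction" are all illustrative assumptions) shows the basic loop: the robot's pose estimate drifts as it moves on noisy odometry, new landmarks are added to the map relative to that imperfect estimate, and re-observing a mapped landmark pulls the pose estimate back toward the truth.

```python
# Minimal illustrative sketch of the SLAM loop: predict pose from noisy motion,
# then correct it when a previously mapped landmark is re-observed.
# This is NOT a production SLAM algorithm; all values are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Ground-truth landmark positions the robot can recognize (e.g., a tree, a building).
landmarks = {"tree": np.array([4.0, 1.0]), "building": np.array([8.0, 5.0])}

true_pose = np.array([0.0, 0.0])     # actual robot position
est_pose = np.array([0.0, 0.0])      # robot's belief about its position
mapped = {}                          # landmark positions the robot has mapped so far

motions = [np.array([1.0, 0.5])] * 8  # commanded motion at each time step

for step, u in enumerate(motions):
    # 1. Motion: the robot moves; its odometry estimate accumulates drift.
    true_pose = true_pose + u
    est_pose = est_pose + u + rng.normal(0.0, 0.1, size=2)

    # 2. Observation: noisy, range-limited measurement of each visible landmark,
    #    expressed relative to the robot.
    for name, pos in landmarks.items():
        if np.linalg.norm(pos - true_pose) < 5.0:
            z = (pos - true_pose) + rng.normal(0.0, 0.05, size=2)
            if name not in mapped:
                # First sighting: add the landmark to the map using the
                # (imperfect) current pose estimate.
                mapped[name] = est_pose + z
            else:
                # Re-sighting: the mapped position and the new observation
                # jointly constrain the pose; here we simply split the difference.
                est_pose = 0.5 * est_pose + 0.5 * (mapped[name] - z)

    err = np.linalg.norm(est_pose - true_pose)
    print(f"step {step}: pose error = {err:.2f} m, mapped landmarks = {list(mapped)}")
```

Running the sketch shows the key point made above: unlike classifying a static image, the quality of the map depends on the pose estimate at the moment each landmark was first seen, and the pose estimate in turn depends on the map, so errors propagate sequentially through time.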

Learn more in this blog post: https://www.microsoft.com/en-us/research/blog/teaching-a-robot-to-see-and-navigate-with-simulation/
