{"id":730651,"date":"2021-03-08T09:43:27","date_gmt":"2021-03-08T17:43:27","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=730651"},"modified":"2021-03-08T09:43:29","modified_gmt":"2021-03-08T17:43:29","slug":"learning-visuomotor-policies-for-autonomous-systems-from-event-based-cameras","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/learning-visuomotor-policies-for-autonomous-systems-from-event-based-cameras\/","title":{"rendered":"Learning visuomotor policies for autonomous systems from event-based cameras"},"content":{"rendered":"\n
\"\"\/<\/figure>\n\n\n\n

Editor's note: This research was conducted by Sai Vemprala, Senior Researcher, and Ashish Kapoor, Partner Researcher, of Microsoft Research, along with Sami Mian, who was a PhD researcher at the University of Pittsburgh and an intern at Microsoft at the time of the work.

Autonomous systems are built around complex perception-action loops, in which observations of the world must be processed in real time to produce safe and effective actions. A significant amount of research has focused on creating perception and navigation algorithms for such systems, often using visual data from cameras to reason about which action to take depending on the platform and the task at hand.
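To make the loop structure concrete, here is a minimal sketch in Python of a fixed-rate perception-action loop. The names `sense`, `policy`, and `act` are hypothetical stand-ins for a camera driver, a learned visuomotor policy, and a vehicle interface; none of them come from the work described in this post.

```python
import time

def perception_action_loop(sense, policy, act, rate_hz=30.0, max_steps=100):
    """Run a fixed-rate perception-action loop for max_steps iterations.

    `sense`, `policy`, and `act` are hypothetical callables standing in
    for a camera driver, a learned visuomotor policy, and a platform
    interface -- placeholders for illustration only.
    """
    period = 1.0 / rate_hz
    for _ in range(max_steps):
        start = time.monotonic()
        observation = sense()         # e.g., grab the latest camera frame
        action = policy(observation)  # map the observation to an action
        act(action)                   # send the action to the platform
        # Sleep off the rest of the control period. If sensing or
        # inference overruns the budget, the loop falls behind real
        # time -- the latency problem discussed in this post.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

# Minimal dummy usage: a constant observation and a no-op actuator.
perception_action_loop(
    sense=lambda: 0.0,
    policy=lambda obs: obs,
    act=lambda action: None,
    rate_hz=10.0,
    max_steps=5,
)
```

The fixed control period makes explicit why sensor and inference speed matter: every stage of perception and reasoning has to fit within the time budget of a single loop iteration.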

While there have been many improvements in how this reasoning is performed and in how information can be extracted efficiently from camera imagery, a number of challenges remain in building autonomous systems that receive and process information accurately and quickly enough for real-world applications. These challenges include the speed limitations of commercial off-the-shelf cameras, data that is unseen during the training of vision models, and the inherent limitations of RGB camera sensors.
