{"id":616557,"date":"2019-09-27T10:00:32","date_gmt":"2019-09-27T17:00:32","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=616557"},"modified":"2019-10-21T10:15:09","modified_gmt":"2019-10-21T17:15:09","slug":"learning-structured-models-for-safe-robot-control","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/learning-structured-models-for-safe-robot-control\/","title":{"rendered":"Learning Structured Models for Safe Robot Control"},"content":{"rendered":"

We are motivated by the problem of building autonomous robots that can perform complex tasks ranging from inspection and diagnosis to manipulating soft tissue. Learning from demonstration is a useful mechanism for a human expert to teach the robot, but effective robot learning also requires rich representations that can fully capture these tasks.

I will describe results from a few recent projects motivated in this way.

Firstly, we will consider the problem of hybrid system identification, in which the task is best modelled as a hybrid system combining low-level continuous control with higher-level switching logic. We will see how neural networks can be structured to learn such models effectively. While the learnt models can be executed directly in an autonomous system, the symbolic structure is also useful as a way to bias subsequent learning in new contexts. We will see an example of this in the form of the perceptor gradients algorithm for learning to act.
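
To make the idea of structuring a network as a hybrid system concrete, the following is a minimal PyTorch sketch in which a discrete switching head selects among a small set of continuous controllers, trained by regression on demonstrated state-action pairs. The mixture-of-experts layout, class names, and dimensions are illustrative assumptions on my part, not the architecture presented in the talk.

# Minimal sketch (PyTorch): a network structured as a hybrid system.
# A switching head picks one of K discrete modes; each mode owns a simple
# linear controller. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridController(nn.Module):
    def __init__(self, state_dim=4, action_dim=2, num_modes=3):
        super().__init__()
        # Higher-level switching logic: logits over discrete modes.
        self.switch = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(),
            nn.Linear(32, num_modes),
        )
        # Low-level continuous control: one linear controller per mode.
        self.controllers = nn.ModuleList(
            [nn.Linear(state_dim, action_dim) for _ in range(num_modes)]
        )

    def forward(self, state, hard=False):
        logits = self.switch(state)
        # Soft (differentiable) mode weights during training; a hard,
        # argmax-style switch can be used at execution time.
        weights = F.gumbel_softmax(logits, tau=1.0, hard=hard)
        actions = torch.stack([c(state) for c in self.controllers], dim=1)
        return (weights.unsqueeze(-1) * actions).sum(dim=1), weights

# Fitting to demonstrations reduces to regression on (state, action) pairs.
model = HybridController()
states = torch.randn(64, 4)
expert_actions = torch.randn(64, 2)
pred, modes = model(states)
loss = F.mse_loss(pred, expert_actions)
loss.backward()

Because the switching weights are an explicit, discrete-valued part of the model, the inferred mode sequence can be read off after training, which is one way such a structure exposes symbolic information for reuse.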

Next, building on this notion of grounding, I will describe recent work on structuring the latent spaces of variational auto-encoder models. By grouping user-defined symbols with their corresponding sensory observations, the learnt compressed latent representation can be aligned with the semantic notions contained in the abstract labels.
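
One simple way to realise this kind of alignment is sketched below in PyTorch, under assumptions of my own rather than the formulation from the talk: a designated slice of the latent code feeds a classifier over the user-defined labels, and a grounding term is added to the usual VAE objective so that slice comes to carry the label semantics.

# Minimal sketch (PyTorch): aligning part of a VAE latent code with
# user-defined symbolic labels. Architecture, sizes, and loss weights
# are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelledVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, sym_dim=4, num_labels=10):
        super().__init__()
        self.sym_dim = sym_dim  # latent slice reserved for label semantics
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim))
        self.label_head = nn.Linear(sym_dim, num_labels)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick for a differentiable sample of z.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar, self.label_head(z[:, :self.sym_dim])

def loss_fn(x, y, x_hat, mu, logvar, label_logits, beta=1.0, gamma=1.0):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    grounding = F.cross_entropy(label_logits, y, reduction="sum")
    # Standard ELBO terms plus a label-grounding term on the reserved slice.
    return recon + beta * kl + gamma * grounding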

I will conclude by briefly describing how these ideas are coming together in an ambitious system being developed in our group, aimed at using autonomous robots in surgical tasks.

[Slides]

Video: https://youtu.be/NIsy3MQJ1mE