Learning Structured Models for Safe Robot Control

We are motivated by the problem of building autonomous robots that are able to perform complex tasks ranging from inspection and diagnosis to manipulating soft tissue. Learning from demonstration is a useful mechanism for a human expert to teach the robot, but effective robot learning also requires representations rich enough to capture these tasks fully.

I will describe results from a few recent projects motivated in this way.

Firstly, we will consider the problem of hybrid system identification, wherein the task is best modelled as a hybrid system combining low-level continuous control with higher-level switching logic. We will see how neural networks can be structured to learn such hybrid models effectively. While the learnt models can be directly executed in an autonomous system, the symbolic structure is also useful as a way to bias subsequent learning in new contexts. We will see an example of this in the form of the perceptor gradients algorithm for learning to act.
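To make the structure concrete, below is a minimal PyTorch sketch of one common way to build such a model: a gating network plays the role of the higher-level switching logic, while a bank of simple linear controllers provides the low-level continuous control, trained end-to-end on demonstration data. The class name, dimensions, and training snippet are illustrative assumptions, not the specific architecture from the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridController(nn.Module):
    """Sketch of a hybrid controller: a discrete gating network
    selects among a set of simple continuous (linear) control laws.
    All names and dimensions here are illustrative assumptions."""

    def __init__(self, state_dim, action_dim, n_modes):
        super().__init__()
        # Higher-level switching logic: maps state -> mode probabilities.
        self.gate = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_modes),
        )
        # Low-level continuous controllers: one linear law per mode.
        self.controllers = nn.ModuleList(
            [nn.Linear(state_dim, action_dim) for _ in range(n_modes)]
        )

    def forward(self, state):
        # Soft mode assignment; at execution time one could take the
        # argmax to recover an explicit discrete switching sequence.
        mode_probs = torch.softmax(self.gate(state), dim=-1)       # (B, K)
        actions = torch.stack(
            [c(state) for c in self.controllers], dim=1            # (B, K, A)
        )
        # Blend controller outputs by mode probability.
        return (mode_probs.unsqueeze(-1) * actions).sum(dim=1)     # (B, A)

# Fitting to demonstrations (dummy states and expert actions):
model = HybridController(state_dim=4, action_dim=2, n_modes=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
states, expert_actions = torch.randn(32, 4), torch.randn(32, 2)
loss = F.mse_loss(model(states), expert_actions)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, the gating network's mode assignments give a symbolic segmentation of the demonstration that can be read off and reused, which is the sense in which the symbolic structure can bias learning in new contexts.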

Next, building on this notion of grounding, I will describe recent work on structuring the latent spaces of variational auto-encoder models, using groupings of user-defined symbols and their corresponding sensory observations to align the learnt compressed latent representation with the semantic notions captured by the abstract labels.
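One plausible way to realise this, sketched below under my own assumptions rather than as the method from the talk, is to augment the standard VAE objective with an auxiliary term that uses the (symbol, observation) pairs: a small classifier head on the latent code pulls observations that share a user-defined symbol together in latent space. The architecture, loss weights, and names are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAlignedVAE(nn.Module):
    """Sketch of a VAE whose latent space is nudged to respect
    user-defined symbolic labels via an auxiliary classifier.
    Architecture and dimensions are illustrative assumptions."""

    def __init__(self, obs_dim=784, latent_dim=8, n_symbols=10):
        super().__init__()
        self.enc = nn.Linear(obs_dim, 128)
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )
        # Maps latent codes to the user-defined symbol vocabulary,
        # encouraging observations that share a symbol to cluster.
        self.symbol_head = nn.Linear(latent_dim, n_symbols)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar, self.symbol_head(z)

def loss_fn(x, labels, x_hat, mu, logvar, logits, beta=1.0, gamma=1.0):
    recon = F.mse_loss(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    )
    # Alignment term: grouped (observation, symbol) pairs shape the latent.
    align = F.cross_entropy(logits, labels)
    return recon + beta * kl + gamma * align

# Usage with dummy data:
model = LabelAlignedVAE()
x = torch.rand(16, 784)
labels = torch.randint(0, 10, (16,))
loss = loss_fn(x, labels, *model(x))
```

The weight gamma trades off reconstruction fidelity against how strongly the latent space is organised around the abstract labels.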

I will conclude by briefly describing how these ideas are coming together in an ambitious system being developed in our group, aimed at using autonomous robots in surgical tasks.

Speaker: Subramanian Ramamoorthy
Affiliation: University of Edinburgh

Series: Microsoft Research Talks