{"id":421608,"date":"2017-07-03T00:00:17","date_gmt":"2017-07-03T07:00:17","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=421608"},"modified":"2022-01-04T07:04:31","modified_gmt":"2022-01-04T15:04:31","slug":"policy-gradient-methods-tutorial-new-frontiers","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/policy-gradient-methods-tutorial-new-frontiers\/","title":{"rendered":"Policy Gradient Methods: Tutorial and New Frontiers"},"content":{"rendered":"

In this tutorial we discuss several recent advances in deep reinforcement learning involving policy gradient methods. These methods have shown significant success in a wide range of domains, including continuous-action domains such as manipulation, locomotion, and flight. They have also achieved state-of-the-art results in discrete-action domains such as Atari. We provide a unifying overview of a variety of policy gradient methods, and we also discuss the formalism of stochastic computation graphs for computing gradients of expectations.
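The estimator underlying these methods is the likelihood-ratio (score-function) gradient, which rewrites the gradient of an expected return as an expectation that can be sampled: grad_theta E[R] = E[R * grad_theta log pi_theta(a)]. Below is a minimal NumPy sketch of this idea (REINFORCE) on a hypothetical two-armed bandit; the bandit rewards, learning rate, and seed are illustrative assumptions, not details from the talk.

```python
# Minimal sketch of the score-function (REINFORCE) policy gradient:
#   grad_theta E[R] = E[ R * grad_theta log pi_theta(a) ]
# The two-armed bandit and all hyperparameters below are illustrative
# assumptions, not taken from the tutorial itself.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.8])   # hypothetical per-arm mean rewards
theta = np.zeros(2)                 # logits of a softmax policy
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)              # sample action from pi_theta
    r = rng.normal(true_means[a], 0.1)      # stochastic reward
    # For a softmax policy, d log pi(a) / d theta = onehot(a) - probs.
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi           # unbiased gradient ascent step

print("learned action probabilities:", softmax(theta))
```

In practice the methods covered in the tutorial reduce the variance of this raw estimator, for example by subtracting a baseline from the reward; this sketch omits that for brevity.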
