Towards Ad-Hoc Teamwork for Improved Player Experience
ICARL Seminar Series – 2022 Winter Seminar by Sam Devlin

Abstract:
Collaborative multi-agent reinforcement learning research often makes two key assumptions: (1) we have control of all agents on the team; and (2) maximising team reward is all you need. However, to enable human-AI collaboration, we need to break both of these assumptions. In this talk I will formalise the problem of ad-hoc teamwork and present our proposed approach of meta-learning policies that are robust to a given set of possible future collaborators. I will then discuss recent work on modelling human play, showing that reward maximisation may not be sufficient when trying to entertain billions of players worldwide.

——————————————————
Links

Sam Devlin
Site: aka.ms/samdevlin
Twitter: x.com/smdvln

ICARL
Site: icarl.doc.ic.ac.uk
Twitter: x.com/ic_arl
YouTube: youtube.com/ICARLSeminars

——————————————————
Intro and Outro music courtesy of Bensound.com – Funky Suspense by Benjamin Tissot
Video: youtube.com/watch?v=Oz5uIQPc_O8