{"id":862233,"date":"2022-07-28T01:29:04","date_gmt":"2022-07-28T08:29:04","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/"},"modified":"2023-08-23T09:07:22","modified_gmt":"2023-08-23T16:07:22","slug":"mocapact-a-multi-task-dataset-for-simulated-humanoid-control","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mocapact-a-multi-task-dataset-for-simulated-humanoid-control\/","title":{"rendered":"MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control"},"content":{"rendered":"

Control of simulated humanoid characters is a challenging benchmark for sequential decision-making methods, as it assesses a policy\u2019s ability to drive an inherently unstable, discontinuous, and high-dimensional physical system. One widely studied approach is to utilize motion capture (MoCap) data to teach the humanoid agent low-level skills (e.g., standing, walking, and running) that can be used to generate high-level behaviors. However, even with MoCap data, controlling simulated humanoids remains very hard, as MoCap data offers only kinematic information. Finding physical control inputs to realize the demonstrated motions requires computationally intensive methods like reinforcement learning. Thus, despite the publicly available MoCap data, its utility has been limited to institutions with large-scale compute. In this work, we dramatically lower the barrier for productive research on this topic by training and releasing high-quality agents that can track over three hours of MoCap data for a simulated humanoid in the dm_control physics-based environment.<\/p>\n

\n\t\n\t\tCode on GitHub\t<\/a>\n\n\t \n\t\n\t\tProject page\t<\/a>\n\n\t<\/p>\n

<\/div>\n

Motion Capture with Actions (MoCapAct) Dataset<\/h3>\n

We release MoCapAct, a dataset of these expert agents and their rollouts, which contain proprioceptive observations and actions. We demonstrate the utility of MoCapAct by using it to train a single hierarchical policy capable of tracking the entire MoCap dataset within dm_control, and we show that the learned low-level component can be re-used to efficiently learn other high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can perform natural motion completion given a motion prompt.<\/p>\n

*Note that the dataset download link cannot be used directly in a browser<\/em><\/strong><\/p>\n\n\t\n\t\tDownload Dataset\t<\/a>\n\n\t\n

<\/div>\n

How do I use a download link for an entire dataset?<\/strong><\/p>\n

A download link for an entire dataset provides the location of the dataset in Azure as well as a special time-limited key that allows you to download the entire dataset. Copy the button link above and use it with tools that can copy files from Azure, like the following:<\/p>\n
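For example, Microsoft's AzCopy command-line tool can transfer the dataset from Azure Blob Storage. The sketch below assumes the copied button link (including its time-limited key) has been placed in a `DATASET_SAS_URL` environment variable; the local destination directory `./mocapact_data` is an arbitrary choice.<\/p>\n

```shell
# Assumption: DATASET_SAS_URL holds the dataset link copied from the
# "Download Dataset" button above, including its time-limited access key.
# AzCopy recursively copies every file under that location into ./mocapact_data.
azcopy copy "$DATASET_SAS_URL" ./mocapact_data --recursive
```

If the key has expired, AzCopy will report an authorization error; in that case, return to this page and copy a fresh link from the button above.<\/p>\n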