{"id":837610,"date":"2022-04-25T09:00:00","date_gmt":"2022-04-25T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=837610"},"modified":"2022-08-17T09:08:11","modified_gmt":"2022-08-17T16:08:11","slug":"ppe-a-fast-and-provably-efficient-rl-algorithm-for-exogenous-noise","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/ppe-a-fast-and-provably-efficient-rl-algorithm-for-exogenous-noise\/","title":{"rendered":"PPE: A fast and provably efficient RL algorithm for exogenous noise"},"content":{"rendered":"\n
\"A<\/figure>\n\n\n\n

Picture a person walking in a park by a pond. The surrounding environment contains a number of moving objects that change the quality of the environment: clouds moving to hide the sun, altering the quality of light; ducks gliding across the pond, causing its surface to ripple; people walking along a path, their images reflecting on the water. If we're creating an AI model for navigating to a given goal, for example, a robot navigating to a specific location in a park to deliver a package, we want this model to recognize the robot and any obstacle in its way, but not the changes in its surrounding environment that occur independently of the agent, which we define as *exogenous noise*.

Although reinforcement learning (RL) has proven to be a successful paradigm for training AI models in navigation tasks, often used in gaming, existing RL methods are not yet robust enough to handle exogenous noise. While they may be able to heuristically solve certain problems, such as helping a robot navigate to a specific destination in a particular environment, there is no guarantee that they can solve problems in environments they have not seen.

In this post, we introduce Path Prediction and Elimination (PPE), the first RL algorithm that can solve the problem of exogenous noise with a mathematical guarantee. Specifically, for any problem that satisfies certain assumptions, the algorithm succeeds in solving it using a small number of episodes. We discuss this algorithm in detail in our paper, "Provable RL with Exogenous Distractors via Multistep Inverse Dynamics."

\"A
Figure 1: A robot walking in a park to a specific destination. The environment has many sources of exogenous noise, such as people walking in the background as their reflections appear on the water and ducks gliding along the surface of the pond.<\/figcaption><\/figure>\n\n\n\n

## Real-world RL and exogenous noise

To understand how PPE works, it's important to first discuss how a real-world RL agent (the decision-maker) operates. An agent has an action space containing \(A\) actions and receives information about the world in the form of an observation. In our example, the robot is the agent, and its action space contains four actions: a step forward, backward, left, or right.

After an agent takes a single action, it gets a new observation (that is, it receives more information about its environment) along with a reward. If the robot observes the park through a camera, the observation takes the form of an image. When an agent has a task to solve, such as reaching a specific destination, it must take a sequence of actions, each resulting in a reward. Its goal is to maximize the sum of rewards. When the robot takes a step forward, the camera generates a new observation of the park, and the robot receives a reward for this action. It may get a reward of 1 for the first action that takes it toward its goal and 0 otherwise.
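
To make this interaction loop concrete, here is a minimal Python sketch of the episode just described. The `Environment` object with `reset`/`step` methods and the `policy` function are hypothetical stand-ins for illustration, not part of the paper:

```python
import random

ACTIONS = ["forward", "backward", "left", "right"]  # the robot's four actions

def run_episode(env, policy, horizon=10):
    """Roll out one episode and return the sum of rewards the agent collects."""
    obs = env.reset()                    # initial observation, e.g., a camera image
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(obs)             # choose one of the A actions
        obs, reward = env.step(action)   # new observation and reward for this action
        total_reward += reward           # the agent's goal: maximize this sum
    return total_reward

# Example: a policy that picks actions uniformly at random.
def random_policy(obs):
    return random.choice(ACTIONS)
```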

Key challenges in real-world RL include how to handle complex observations and very large observation spaces. In our example, the robot in the park will have to work with an image that contains relevant information, such as the position of the destination, but this information is not directly accessible due to the exogenous noise and camera-generated image noise in the observation.

An image can be in a 500 x 500 x 3 pixel space, where each pixel entry takes 255 values. This gives \(255^{500 \times 500 \times 3}\) different images, an extremely large number of possibilities. However, the environment is much simpler to describe than this number suggests. This means the observation in an RL environment is generated from a much more compact but hidden *endogenous state*. In our park example, the endogenous state contains the position of the agent, the destination, and any obstacles around the agent.
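
As a quick sanity check on that count, the number of decimal digits in \(255^{500 \times 500 \times 3}\) can be computed with logarithms (the number itself is far too large to print):

```python
import math

pixels = 500 * 500 * 3             # a 500 x 500 image with 3 color channels
digits = pixels * math.log10(255)  # number of decimal digits in 255 ** pixels
print(f"255^{pixels} has about {digits:,.0f} decimal digits")  # ~1.8 million
```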

In our paper, we assume that the endogenous state dynamics are near-deterministic; that is, taking a fixed action in an endogenous state almost always leads to the same next endogenous state. We also require that it is possible to extract the endogenous state from an observation. However, we make no assumptions about the dynamics of the exogenous noise or how observations are generated.


Most existing RL algorithms are either unable to solve problems containing complex observations or lack a mathematical guarantee of working on new, untried problems. This guarantee is desirable because the cost of failure in the real world can be high. Many existing algorithms also need an impractically large amount of data to succeed: the agent must perform an enormous number of actions before it solves the task.

PPE takes an approach called *hidden state decoding*, where the agent learns a type of ML model called a decoder to extract the hidden endogenous state from an observation. It does this in a self-supervised manner, meaning it does not require a human to provide it with labels. For example, PPE can learn a decoder to extract the positions of the robot and any obstacles in the park. PPE is the first provable algorithm that can extract the endogenous state and use it to perform RL efficiently.


## Path Prediction and Elimination: An RL algorithm that is robust to exogenous noise

PPE is simple to implement and fast to run. It works by learning a small set of paths that can take the agent to all possible endogenous states. The agent could in principle consider all possible paths of length \(h\), enabling it to visit every endogenous state. However, as there are \(A^h\) possible paths of length \(h\), the number of paths will overwhelm the agent as \(h\) increases, and the more paths the agent has to work with, the more data it needs to solve a given task. Ideally, if there are \(S\) endogenous states, we need just \(S\) paths, with exactly one unique path going to each endogenous state. PPE eliminates redundant paths that visit the same endogenous state by solving a novel self-supervised classification task.

PPE is similar in structure to the breadth-first search algorithm in that it runs a for-loop where, in iteration \(h\), the agent learns to visit all endogenous states that can be reached by taking \(h\) actions. At the start of iteration \(h\), the agent maintains a list of paths of length \(h\). This list has a path to visit every endogenous state that's reachable after taking \(h\) actions; however, it may also contain redundant paths, i.e., multiple paths that reach the same endogenous state. In the first iteration, this list is simply all paths of length 1, one for each action in the agent's action space.
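
A schematic sketch of this loop in Python (our paraphrase, not the authors' code) might look as follows, where `eliminate_redundant_paths` stands in for the self-supervised classification step described next:

```python
def ppe_paths(actions, horizon, eliminate_redundant_paths):
    """Return a small set of paths, roughly one per reachable endogenous state."""
    paths = [[a] for a in actions]      # iteration h = 1: one path per action
    for h in range(1, horizon + 1):
        # Keep a single path per endogenous state reachable in h actions.
        paths = eliminate_redundant_paths(paths)
        if h < horizon:
            # Expand every surviving path with every possible next action.
            paths = [p + [a] for p in paths for a in actions]
    return paths
```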

The top of Figure 2 shows the agent's initial list of paths, which contains at least three paths: \(\pi_1\), \(\pi_2\), and \(\pi_3\). The first two paths reach the same destination, denoted by the endogenous state \(s_1\). In contrast, the last path \(\pi_3\) reaches a different endogenous state \(s_2\). Figure 2 shows a sampled observation (or image) for each endogenous state.

Because PPE wants to learn a small set of paths to visit all endogenous states, it seeks to eliminate the redundant paths by collecting a dataset of observations coupled with the path that was followed to observe them. In Figure 2, both \(\pi_1\) and \(\pi_2\) reach the same endogenous state, so one of them can be eliminated. The data is collected by randomly selecting a path from the list, following it to the end, and saving the last observation. For example, our dataset can contain a tuple (\(\pi_1, x\)), where \(\pi_1\) is a path in our list and \(x\) is the image in the top right of Figure 2. PPE collects a dataset of many such tuples.

\"This
Figure 2: Execution of the PPE algorithm at a given for-loop iteration. For each iteration, PPE starts with a list of paths to visit endogenous states and then eliminates redundant paths\u2014those that visit an endogenous state that can also be reached by an existing path. The extra path, \\(\\pi_2\\) is eliminated because it reaches an endogenous state that can also be reached by an existing path \\(\\pi_1\\).<\/figcaption><\/figure>\n\n\n\n
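
A minimal sketch of this data-collection step, under the same assumed `env.reset()`/`env.step()` interface as in the earlier snippet:

```python
import random

def collect_dataset(env, paths, n_samples):
    """Gather (path index, last observation) tuples for the classification task."""
    data = []
    for _ in range(n_samples):
        i = random.randrange(len(paths))  # pick a path uniformly at random
        obs = env.reset()
        for action in paths[i]:           # follow the path to its end
            obs, _ = env.step(action)
        data.append((i, obs))             # label = index of the path followed
    return data
```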

PPE then solves a multiclass classification problem: predict the index of the path from the last observation, where the index is computed with respect to the original list. This classification problem can be solved with any appropriate model class, such as deep neural networks, using PyTorch, TensorFlow, or a library of your choice. If two different paths, \(\pi_1\) and \(\pi_2\), reach the same endogenous state, the learned classifier won't be able to deterministically predict which path was used to visit observations from this state. That is, the learned classifier predicts a high probability for both paths given an observation from this endogenous state. PPE uses this confusion signal to eliminate one of the two paths, since both reach the same endogenous state. As a result of solving the classification problem, PPE also learns a decoder, which maps an observation to the index of the leftover path with the highest probability under the learned classifier.
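
The snippet below sketches this classification step in PyTorch, assuming each last observation has already been flattened into a float tensor; the architecture and training details here are our illustration, not the paper's:

```python
import torch
import torch.nn as nn

def train_path_classifier(dataset, obs_dim, num_paths, epochs=100):
    """Fit a classifier that predicts which path produced each last observation."""
    model = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                          nn.Linear(256, num_paths))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    labels = torch.tensor([i for i, _ in dataset])   # path indices
    obs = torch.stack([x for _, x in dataset])       # last observations
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(obs), labels)
        loss.backward()
        opt.step()
    # softmax(model(x)) supplies the confusion signal: if two path indices both
    # get high probability on the same observations, the two paths likely reach
    # the same endogenous state, and one of them can be eliminated.
    return model
```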

At the end of iteration \(h\) of the for-loop, PPE will have found a list of leftover paths that includes a unique path for every endogenous state that's reachable after taking \(h\) actions. It then expands these leftover paths to create the list for the next iteration: for every leftover path, PPE creates \(A\) new paths by concatenating each action to the end of the path. The for-loop then continues with the next iteration.

Note that the above steps of PPE can be carried out even in the absence of rewards. Their output, namely the decoder and the learned leftover paths, can be cached and used to optimize any reward function provided later. We discuss various strategies for optimizing a given reward function in our paper, including both model-free and model-based approaches.

## Proof, experiment, and code

The paper also provides a mathematical proof that PPE efficiently solves a large class of RL problems. Using a small amount of data, it can accurately explore, find a policy that achieves the maximum sum of rewards, recover a decoder that maps each observation to its hidden endogenous state, and recover the dynamics of the endogenous state, all with high probability. We describe various experiments where PPE successfully performs these tasks in line with its mathematical guarantee and outperforms various prior methods.

This is illustrated in Figure 3, which depicts a visual grid-world where the agent's goal is to navigate to the slice of pizza on the other side of the pond, populated by two ducks that move independently of the agent's actions and are the source of exogenous noise. The endogenous state consists of the position of the agent. The figure shows what PPE is expected to do in this task: it gradually learns longer paths that reach various endogenous states in the environment, and it learns a decoder and uses it to extract the dynamics of the latent endogenous state, shown on the right.

\"This
Figure 3: The area on the left shows a visual grid-world navigation task where an agent is trying to reach a slice of pizza. The motion of the ducks is a source of exogenous noise. PPE allows the agent to learn a small set of paths to visit every endogenous state. On the right, PPE also learns a decoder and uses it to extract the dynamics of the latent endogenous state. The circles denote an endogenous state and the arrows denote possible ways to navigate from one endogenous state to another.<\/figcaption><\/figure>\n\n\n\n

## The road ahead

While PPE is the first RL algorithm that offers a mathematical guarantee in the presence of exogenous noise, there is still work to do before we can solve every RL problem that includes exogenous noise. Some of the unanswered questions that we are pursuing include:

1. How can we eliminate PPE's assumption that latent endogenous state dynamics are near-deterministic?
2. Can we extend PPE to work in nonepisodic settings, where the agent generates a single long episode?
3. How does PPE perform on real-world problems?
4. Can we make PPE a truly online algorithm, eliminating the need to collect large datasets before it improves?

RL algorithms hold great promise for improving applications in a diverse range of fields, from robotics, gaming, and software debugging to healthcare. However, exogenous noise presents a serious challenge to unlocking the full potential of RL agents in the real world. We're hopeful that PPE will motivate further research on RL in the presence of exogenous noise.
