{"id":882270,"date":"2022-09-30T13:22:31","date_gmt":"2022-09-30T20:22:31","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=882270"},"modified":"2022-11-23T15:20:39","modified_gmt":"2022-11-23T23:20:39","slug":"3db-debugging-computer-vision-models-through-simulation","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/3db-debugging-computer-vision-models-through-simulation\/","title":{"rendered":"3DB: Debugging Computer Vision Models through Simulation"},"content":{"rendered":"\n

Paper / Code / Demo / Docs

Modern machine learning models are known to fail in ways that aren't anticipated during training. These failures include all sorts of distribution shifts that a model might experience during deployment in complex real-life settings. In computer vision, for example, several works have shown that models suffer in the face of small rotations, common corruptions (such as snow or fog), and changes to the data collection pipeline. While such brittleness is widespread, it is often hard to understand its root causes, or even to characterize the precise situations in which this unintended behavior arises.

\"graphical<\/figure>\n\n\n\n

How, then, do we comprehensively diagnose model failure modes? One way is to deploy our models in the real world and eventually collect real-world failure cases, but the stakes are often too high for this to be acceptable. A line of computer vision research focuses on identifying systematic sources of model failure, including the effects of unfamiliar object orientations, misleading backgrounds, and conflicts between texture and shape. Such analyses have revealed patterns of performance degradation in vision models; still, each analysis requires its own set of (often complex) tools, time, and effort. Our question is thus: can we support reliable discovery of model failures in a systematic, automated, and unified way?

To address this, in collaboration with researchers at MIT, we introduce 3Debugger (3DB): a framework for automatically identifying and analyzing the failure modes of computer vision models. This framework makes use of a 3D simulator to render images of near-realistic scenes that can be fed into any computer vision system. Users can specify a set of extendable and composable transformations within the scene, such as pose changes, background changes, or camera effects, which we refer to as "controls". We show examples of such controls in Fig. 1.

\"Examples<\/figure>\n\n\n\n

Fig. 1: Examples of "controls" in 3DB, using Blender as the 3D simulator.
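To make the notion of a control concrete, here is a minimal Python sketch of what one such transformation might look like. The class and attribute names below are illustrative assumptions, not 3DB's exact interface; see the 3DB documentation for the real API.

```python
# A hypothetical sketch of a 3DB-style "control". Names are illustrative
# assumptions, not the framework's actual interface.
import math

class OrientationControl:
    """Rotate the target object about the vertical axis before rendering."""

    # Parameter ranges that a search policy is allowed to explore.
    continuous_dims = {"rotation_z": (-math.pi, math.pi)}

    def apply(self, scene, rotation_z):
        # Mutate the simulated scene; the renderer then draws the result.
        scene.target_object.rotation_z = rotation_z
        return scene
```

Because each control exposes its parameter ranges declaratively, a search policy can explore several controls jointly while still attributing failures to individual parameters.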

Once the user has specified a set of controls of interest, the system performs a guided search, evaluation, and aggregation over these transformations. 3DB instantiates and renders a myriad of object configurations according to the transformations, records the behavior of the model on each rendered scene, and finally presents the user with an interactive, user-friendly summary of the model's performance and vulnerabilities. 3DB is general enough that users can, with little-to-no effort, re-discover insights from prior work on robustness to pose, background, and texture, among others. Users can even compose these transformations to understand their interplay, while still being able to disentangle their individual effects, or easily write their own controls if required. An overview of the 3DB workflow is shown in Fig. 2.

\"graphical<\/figure>\n\n\n\n

Fig. 2: The workflow of 3DB.
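In pseudocode, the loop that Fig. 2 depicts might look like the sketch below. This is a schematic, not 3DB's actual implementation; `policy`, `renderer`, and `model` stand in for the user-configured components.

```python
# A schematic of the search/render/evaluate/aggregate loop from Fig. 2.
# Illustrative only -- not the actual 3DB implementation.
def debug_model(model, renderer, controls, policy, num_trials):
    records = []
    for _ in range(num_trials):
        # 1. The search policy proposes a value for every control parameter.
        params = policy.propose(controls)
        # 2. The simulator renders a scene under those parameter values.
        image, true_label = renderer.render(params)
        # 3. The model under test is run on the rendered image.
        predicted_label = model(image)
        records.append({**params, "correct": predicted_label == true_label})
    # 4. Per-scene records are aggregated for the interactive dashboard.
    return records
```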

As an example, let us evaluate how robust a standard ImageNet-pretrained ResNet-18 is at classifying a coffee mug. The highly configurable nature of 3DB allows one to set up the model of interest, the renderer, and the transformations of interest through a YAML configuration file. 3DB reads this configuration file and initializes the renderer and the model accordingly. Once initialized, 3DB renders several synthetic images according to the desired controls, performs inference on these images, and displays the results in a web dashboard, mapping the changing parameters to success/failure.
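As a rough illustration of this setup, the snippet below parses a hypothetical configuration for the coffee-mug experiment. The YAML keys here are illustrative guesses, not 3DB's exact schema; consult the documentation for the real format.

```python
# Parse a hypothetical 3DB-style YAML configuration. The keys are
# illustrative assumptions, not the library's exact schema.
import yaml

CONFIG = """
inference:
  module: torchvision.models
  class: resnet18
  pretrained: true
controls:
  - name: orientation    # rotate the mug
  - name: background     # swap scene backgrounds
  - name: camera         # vary camera position and focal length
policy:
  kind: grid_search
  samples_per_control: 20
"""

config = yaml.safe_load(CONFIG)
print(config["controls"])  # the transformations to search over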

\"\"<\/figure>\n\n\n\n

Some interesting findings from 3DB for this coffee mug example are:

1. Complex backgrounds result in poor classification performance.
2. ImageNet-pretrained models are sensitive to texture.
3. Classification accuracy changes based on which liquid is inside the mug.

3DB is also capable of finding failure modes (e.g., due to extreme viewpoints and poses) in simulation that transfer to the real world. Fig. 3 shows the agreement, in terms of model correctness, between the model's predictions within 3DB and its predictions in the real world. For each object, we selected five configurations that 3DB found to be correctly classified in simulation and five that were misclassified; we then recreated each scene in the physical world and deployed the model on it. The positive (resp., negative) predictive value is the rate at which correctly (resp., incorrectly) classified examples in simulation were also correctly (resp., incorrectly) classified in the physical world.

    \"\"<\/figure>\n\n\n\n

Overall, 3DB is a scalable, extendable, and unified framework for diagnosing failure modes in vision models using high-fidelity rendering. We refer the reader to our paper to learn more about the use cases of 3DB, where we demonstrate its efficacy on a variety of tasks: disentangling the effects of different types of brittleness, discovering model biases, analyzing specific model decisions in depth, and identifying vulnerabilities and worst-case environmental configurations. We are releasing 3DB as a library alongside a set of example analyses, guides, and documentation. 3DB is designed with extensibility as a priority; we encourage the community to build upon the framework by adding more controls and policies that provide new insights into the vulnerabilities of vision models.


    \n\n\n\n

This work was a collaborative effort between Microsoft Research and MIT. Researchers involved in this work were Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, and Aleksander Mądry.
