{"id":882270,"date":"2022-09-30T13:22:31","date_gmt":"2022-09-30T20:22:31","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=882270"},"modified":"2022-11-23T15:20:39","modified_gmt":"2022-11-23T23:20:39","slug":"3db-debugging-computer-vision-models-through-simulation","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/3db-debugging-computer-vision-models-through-simulation\/","title":{"rendered":"3DB: Debugging Computer Vision Models through Simulation"},"content":{"rendered":"\n
Paper / Code / Demo / Docs

Modern machine learning models are known to fail in ways that are not anticipated during training. These failures include all sorts of distribution shifts that a model might experience when deployed in complex real-life settings. In the context of computer vision, for example, several works have shown that models suffer in the face of small rotations, common corruptions (such as snow or fog), and changes to the data collection pipeline. While such brittleness is widespread, it is often hard to understand its root causes, or even to characterize the precise situations in which this unintended behavior arises.
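To make the rotation example concrete, here is a minimal sketch (not from the 3DB paper) of how one might probe a model's sensitivity to small rotations with a standard pretrained classifier. The choice of model (resnet50) and the input path (example.jpg) are placeholders for illustration:

```python
# Probe a pretrained ImageNet classifier's sensitivity to small
# in-plane rotations of the input image.
import torch
import torchvision.transforms.functional as TF
from torchvision import models, transforms
from PIL import Image

# Placeholder model choice; any torchvision classifier works here.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image

with torch.no_grad():
    for angle in [0, 5, 10, 15]:  # small rotations, in degrees
        rotated = TF.rotate(image, angle)
        logits = model(preprocess(rotated).unsqueeze(0))
        pred = logits.argmax(dim=1).item()
        print(f"rotation={angle:>2} deg -> predicted class index={pred}")
```

If the predicted class flips between nearby angles, that is exactly the kind of brittleness described above; the difficulty, which 3DB is designed to address, is characterizing such failures systematically rather than one hand-picked transformation at a time.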