{"id":982158,"date":"2023-11-08T12:39:02","date_gmt":"2023-11-08T20:39:02","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=982158"},"modified":"2024-08-30T10:59:53","modified_gmt":"2024-08-30T17:59:53","slug":"finding-adversarial-inputs-for-heuristics","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/finding-adversarial-inputs-for-heuristics\/","title":{"rendered":"MetaOpt: A Comprehensive Heuristic Analysis and Optimization Tool"},"content":{"rendered":"\n
\n
\n
\n\t
\n\t\t
\n\t\t\t\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n
# MetaOpt

## Towards efficient heuristic design with quantifiable, confident performance

We use heuristics all the time, across many systems, including those that are critical to production services. Production systems use heuristics because they are faster or scale better than their optimal counterparts. But practitioners often don't know the performance gap between a heuristic and the optimal algorithm, or between two competing heuristics, in realistic scenarios.

In many cases, theoreticians have also been unable to prove tight bounds for such heuristics. The first-fit decreasing heuristic for vector bin packing is one example: **theoreticians have studied it for over 30 years and have not yet found a tight bound that holds across all problem sizes** (teaser: MetaOpt enabled us to prove a tighter bound and to design a constructive proof of it).

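To make the heuristic in question concrete, here is a minimal sketch of first-fit decreasing for vector bin packing. The scalar sort key (sum of the dimensions) and the two-dimensional example are illustrative assumptions on our part, not details of MetaOpt or of the proof above.

```python
# A minimal first-fit decreasing (FFD) sketch for vector bin packing.
from typing import List, Tuple

Vector = Tuple[float, ...]  # one entry per resource dimension (e.g., CPU, memory)

def first_fit_decreasing(items: List[Vector], capacity: Vector) -> List[List[Vector]]:
    """Sort items by a scalar size (here, the sum of their dimensions) in
    decreasing order, then place each item in the first bin where it fits."""
    bins: List[List[Vector]] = []   # items assigned to each bin
    loads: List[List[float]] = []   # per-dimension load of each bin
    for item in sorted(items, key=sum, reverse=True):
        for b, load in enumerate(loads):
            if all(l + x <= c for l, x, c in zip(load, item, capacity)):
                bins[b].append(item)
                loads[b] = [l + x for l, x in zip(load, item)]
                break
        else:  # no open bin fits this item, so open a new one
            bins.append([item])
            loads.append(list(item))
    return bins

# Example: two-dimensional items (CPU, memory) packed into unit-capacity bins.
packing = first_fit_decreasing([(0.6, 0.2), (0.5, 0.5), (0.3, 0.7)], (1.0, 1.0))
print(len(packing), "bins:", packing)  # uses 2 bins
```
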
A heuristic analyzer not only helps practitioners understand the performance gaps of their heuristics; it also helps them find the practical situations where their heuristics underperform and deploy workarounds that mitigate the impact. They can analyze the decisions a heuristic makes that cause these performance gaps, and modify it to prevent the problematic cases. They can also see what happens to the performance gap when multiple heuristics are combined into a better one (one simple combination scheme is sketched below). **But the use cases for such a tool don't stop here.**

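As a toy illustration of combining heuristics, one simple scheme is a portfolio: run several candidate heuristics and keep the best answer. The one-dimensional packing heuristics below are our own illustrative choices, not MetaOpt's building blocks.

```python
# A "portfolio" combination: run every candidate heuristic and keep the
# packing that uses the fewest bins. Illustrative only.
def first_fit(items, capacity=1.0):
    loads = []  # current load of each open bin
    for x in items:
        for i, load in enumerate(loads):
            if load + x <= capacity:
                loads[i] = load + x
                break
        else:
            loads.append(x)  # open a new bin
    return loads

def first_fit_decreasing(items, capacity=1.0):
    return first_fit(sorted(items, reverse=True), capacity)

def portfolio(items, heuristics):
    # The combined heuristic is at least as good as each member on every
    # input, so its gap to optimal can only shrink.
    return min((h(items) for h in heuristics), key=len)

bins = portfolio([0.3, 0.6, 0.3, 0.4, 0.4, 0.3], [first_fit, first_fit_decreasing])
print(len(bins), "bins")
```
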
> MetaOpt pioneers an area of research on ***scalable, general, and easy-to-use*** tools that enable users to analyze and explain the performance differences across competing algorithms and to improve these algorithms before they are deployed in high-stakes situations.

We present MetaOpt, a system that helps us analyze the heuristics we run in production. MetaOpt enables the use cases mentioned earlier and much more. We use it to analyze some of the popular machine-learning models we have in production at Microsoft; we have used it as a **“helper for theorem proving”** to prove tighter performance bounds for heuristics in domains such as vector bin packing and packet scheduling; we have used it to analyze probabilistic heuristics; and we are now investigating how to use it for capacity planning in our WAN.

While others have studied individual heuristics (e.g., in congestion control and network traffic engineering), none of those tools are general, provably optimal, or easy to use. MetaOpt is the first *general-purpose* and *scalable* tool that enables users to analyze a broad class of practical heuristics through *easy-to-use* abstractions.

> We believe MetaOpt can have significant benefits as a **productivity booster** for those designing and studying heuristics; as a **risk-analysis engine** for evaluating heuristics before we deploy them to production; and as a tool for **explainable AI and active learning**.

## How do people use MetaOpt?

To analyze their heuristics, users specify the heuristic and the optimal algorithm (or another heuristic) as inputs to MetaOpt. MetaOpt automatically and efficiently encodes them as inputs to a solver, which finds the performance gap and the adversarial inputs that cause it. Not all users know optimization theory, so we designed a higher-level abstraction that lets users express their heuristics with a few simple building blocks. Using this abstraction, we can also search for the decisions a heuristic makes that cause it to underperform, **or for the properties of the inputs that trigger the heuristic to make bad decisions**.

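MetaOpt's actual solver encodings and building-block abstraction are not shown on this page, so as a stand-in, the sketch below conveys the objective in miniature: search the input space for the instance that maximizes heuristic(input) minus optimal(input). Everything in it (the candidate sizes, the instance length, the exhaustive enumeration) is an illustrative assumption; MetaOpt encodes both algorithms symbolically for a solver rather than enumerating inputs.

```python
# Toy stand-in for adversarial-input search: exhaustively scan small
# one-dimensional bin-packing instances for the one that maximizes the
# gap between first-fit decreasing (FFD) and the optimal packing.
import itertools

CAPACITY = 1.0
SIZES = [0.3, 0.4, 0.5, 0.6]  # candidate item sizes (illustrative)

def ffd_bins(items):
    """Number of bins FFD uses."""
    loads = []
    for x in sorted(items, reverse=True):
        for i, load in enumerate(loads):
            if load + x <= CAPACITY + 1e-9:
                loads[i] = load + x
                break
        else:
            loads.append(x)
    return len(loads)

def opt_bins(items):
    """Fewest bins over all item-to-bin assignments (fine for ~6 items)."""
    n = len(items)
    best = n
    for assign in itertools.product(range(n), repeat=n):
        loads = {}
        for size, b in zip(items, assign):
            loads[b] = loads.get(b, 0.0) + size
        if all(l <= CAPACITY + 1e-9 for l in loads.values()):
            best = min(best, len(loads))
    return best

# The "performance gap" is heuristic minus optimal; maximize it over inputs.
worst = max(itertools.combinations_with_replacement(SIZES, 6),
            key=lambda inst: ffd_bins(inst) - opt_bins(inst))
print(worst, "-> FFD:", ffd_bins(worst), "OPT:", opt_bins(worst))
```

On this tiny search space the scan recovers a classic bad case for FFD (four items of size 0.3 and two of size 0.4 fit in two bins, but FFD opens three), which is exactly the kind of adversarial input MetaOpt surfaces at realistic scale.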