{"id":729247,"date":"2021-03-03T18:01:50","date_gmt":"2021-03-04T02:01:50","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=729247"},"modified":"2021-03-03T19:40:18","modified_gmt":"2021-03-04T03:40:18","slug":"alerting-in-microsofts-experimentation-platform-exp","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/alerting-in-microsofts-experimentation-platform-exp\/","title":{"rendered":"Alerting in Microsoft\u2019s Experimentation Platform (ExP)"},"content":{"rendered":"

\nAt Microsoft, we continuously improve products by developing new features for them. To facilitate data-driven decision-making in software development, product teams across Microsoft run tens of thousands of A\/B tests each year. While the primary purpose of A\/B testing is to rigorously evaluate customer satisfaction with new features and experiences, it also helps uncover anomalies, bugs, performance degradation, and user dissatisfaction. To catch these issues early, we rely on alerts. Alerts are proactive notifications to experimenters when something unexpected has occurred in an A\/B test. For example, the many A\/B tests Bing runs every year raise hundreds of alerts (ref. Fig 1). These alerts have helped identify and rectify significant issues.<\/p>\n


Fig 1. Screenshot of an example alert<\/p><\/div>\n

In this blog post, we highlight the importance of timely and trustworthy alerts, illustrate the alerting methodologies, and summarize the typical alerting mechanisms and workflows on our experimentation platform.<\/p>\n

Why are alerts important?<\/strong><\/h3>\n

In 2020, one of the product features had a minor bug wherein a subset of users was mistakenly assigned duplicate Device IDs (the randomization unit for A\/B tests). The bug was fixed, and these users were then assigned new (and different) IDs.
\nA few days later, one of the A\/B tests running on the product resulted in a Sample Ratio Mismatch (SRM<\/a>) alert between treatment and control [1]. The team investigated and discovered that the treatment variant contained a larger-than-expected number of users because it erroneously read the old, corrupt Device IDs. The control variant, on the other hand, correctly read the new Device IDs and included the expected number of users. This resulted in a mismatch in user (device) count between the treatment and control variants. The SRM alert helped the experimenters promptly identify the issues with the A\/B test set-up and the usage of corrupt IDs, fix them, and restart the test quickly.<\/p>\n

As indicated above, alerts help experimenters ensure their A\/B tests\u2019 success by promptly detecting egregious changes in the feature being tested. At Microsoft, alerts fired during A\/B tests routinely help experimenters catch issues before they negatively impact user experience or user satisfaction. Issues ranging from incorrect \u2018no-result\u2019 search queries to software crashes in productivity tools and other failures get timely attention. Similar practices are adopted by other companies in the information technology industry. For example, Criteo has an automated system to raise alerts on key metrics such as clicks and click revenues [2], and Walmart Labs ran anomaly detection and alerting on the number of visitors [3].<\/p>\n

In the next section, we will share the main alert types at Microsoft ExP and present the alerting workflow in our system.<\/p>\n

What are the main alerts used by Microsoft ExP?<\/strong><\/h3>\n
Sample Ratio Mismatch (SRM)<\/strong><\/h5>\n

SRM is an important issue in A\/B testing [4] (read more here<\/a>) and therefore we incorporate it in our alerting system. Before each experiment, we configure the ratio of counts of randomization units (e.g., users, devices) between the treatment and control groups, typically a 1:1 split. During the experiment, we compute the actual sample ratio and the corresponding p-value using the chi-squared test. An alert is fired whenever the observed ratio deviates from the configured value in a statistically significant way.<\/p>\n
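To make this concrete, below is a minimal sketch of how such an SRM check could be computed with a chi-squared goodness-of-fit test. It is an illustration under simplifying assumptions, not ExP\u2019s production implementation; the function name and the p-value threshold are hypothetical.<\/p>\n

<pre>
# Minimal SRM-check sketch (illustrative only, not ExP's production code).
# Assumes scipy is installed; the 0.0005 threshold is a hypothetical choice.
from scipy.stats import chisquare

def srm_check(treatment_count, control_count, expected_ratio=1.0, p_threshold=0.0005):
    """Return (p_value, fire_alert) for an observed treatment/control split.

    expected_ratio is the configured treatment:control ratio, e.g. 1.0 for a 1:1 split.
    """
    total = treatment_count + control_count
    expected_treatment = total * expected_ratio / (1.0 + expected_ratio)
    expected_control = total - expected_treatment
    _, p_value = chisquare(
        f_obs=[treatment_count, control_count],
        f_exp=[expected_treatment, expected_control],
    )
    return p_value, p_value < p_threshold

# Example: a configured 1:1 experiment that observed 50,600 vs. 50,000 users.
p_value, fire_alert = srm_check(50_600, 50_000)
<\/pre>\n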

Metric Out of Range <\/strong><\/h5>\n

Roughly speaking, the metric-out-of-range alert fires when a metric (e.g., click-through rate) is observed to be radically different in magnitude in treatment versus control, with a low-enough p-value to be statistically significant [5]. To define this alert, we slightly modify the classic equivalence test procedure so that the null hypothesis is that the metric movement is within a pre-specified range (e.g., between \u00b11%) and the alternative is that the movement is out of range. In other words, we examine if the metric movement is within equivalence bounds (i.e., \u201callowed\u201d metric movement range) \\(\\begin{equation}\\left[b_{A L}, b_{A U}\\right]\\end{equation}\\), and the corresponding null and alternative hypotheses are:<\/p>\n

\n\\(\\begin{equation} H_{0}: b_{A L} \\leq \\mu_{T}-\\mu_{C} \\leq b_{A U}\\end{equation}\\)
\n
\n\\(\\begin{equation}H_{1}: \\mu_{T}-\\mu_{C} \\lt b_{A L} \\quad \\text{or} \\quad \\mu_{T}-\\mu_{C} \\gt b_{A U}\\end{equation} \\)
\n \n<\/div>\n

Intuitively, if the metric moves drastically and out of the \u201cnormal\u201d range, we reject \(\begin{equation}H_{0}\end{equation}\) and fire an alert. In practice, we rely heavily on domain knowledge, heuristics, and historical data to specify the equivalence bounds. For example, as pointed out in [5], we may not alert on two-millisecond page load time degradations, even if they are highly statistically significant. On the other hand, for key reliability metrics that track system health, we might consider alerting even on a 0.1% change.<\/p>\n

We conduct two one-sided tests (TOST) [6] to compute the corresponding p-values (ref. Fig. 2). Alerts will be fired if p-values fall below certain thresholds. \\(\\begin{equation}H_{0}\\end{equation}\\) can be decomposed into two one-sided hypotheses:<\/p>\n

\n\\(
\n\\begin{equation}H_{0 L}: \\mu_{T}-\\mu_{C} \\geq b_{A L} \\quad \\text { and } \\quad H_{0 U}: \\mu_{T}-\\mu_{C} \\leq b_{A U}\\end{equation}
\n\\)
\n \n<\/div>\n

First, we calculate two p-values for the two corresponding one-sided tests:<\/p>\n

\n\(
\n\begin{equation}p_{A L}= \text{Pr} \left(\frac{\left(m_{T}-m_{C}\right)-b_{A L}}{s} \lt z_{\alpha} \mid \mu_{T}-\mu_{C}=b_{A L}\right)
\n\end{equation}
\n\)
\n
\n\(
\n\begin{equation}
\np_{A U}= \text{Pr} \left(\frac{\left(m_{T}-m_{C}\right)-b_{A U}}{s} \gt z_{1-\alpha} \mid \mu_{T}-\mu_{C}=b_{A U}\right)
\n\end{equation}
\n\)
\n \n<\/div>\n

Second, we reject \(\begin{equation}H_{0}\end{equation}\) when we reject \(\begin{equation}H_{0L}\end{equation}\) or \(\begin{equation}H_{0U}\end{equation}\), i.e., when \(\begin{equation}p=\min \left(p_{A L}, p_{A U}\right)\end{equation}\) is small.<\/p>\n



Fig 2. Two one-sided tests (TOST)<\/p><\/div>\n

Remark:<\/strong> In practice, we often repeat the above procedure for relative movements and obtain:<\/p>\n

\n\(
\n\begin{equation}
\np_{R L}= \text{Pr} \left(\frac{\left(m_{T}-m_{C}\right)-b_{R L} \mu_{C}}{s} \lt z_{\alpha} \mid \mu_{T}-\mu_{C}=b_{R L} \mu_{C}\right)
\n\end{equation}
\n\)
\n
\n\(
\n\begin{equation}
\np_{R U}= \text{Pr} \left(\frac{\left(m_{T}-m_{C}\right)-b_{R U} \mu_{C}}{s} \gt z_{1-\alpha} \mid \mu_{T}-\mu_{C}=b_{R U} \mu_{C}\right)
\n\end{equation}
\n\)
\n \n<\/div>\n

We then consider the \u201caggregate\u201d p-value \(\begin{equation}\min \left(p_{A L}, p_{A U}, p_{R L}, p_{R U}\right)\end{equation}\).<\/p>\n
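For illustration, the following sketch (under simplifying assumptions, with hypothetical names, and not ExP\u2019s implementation) computes the four one-sided p-values and the aggregate p-value from the sample statistics defined in the glossary below, using a normal approximation and the control sample mean in place of \(\begin{equation}\mu_{C}\end{equation}\) for the relative bounds.<\/p>\n

<pre>
# Sketch of the metric-out-of-range TOST p-values (illustrative only).
# Symbols follow the glossary: m_t, m_c are sample means; s_t2, s_c2 sample variances;
# n_t, n_c sample counts; (b_al, b_au) are absolute and (b_rl, b_ru) relative bounds.
from math import sqrt
from scipy.stats import norm

def out_of_range_p_value(m_t, m_c, s_t2, s_c2, n_t, n_c, b_al, b_au, b_rl, b_ru):
    s = sqrt(s_t2 / n_t + s_c2 / n_c)   # standard deviation of the sample delta
    delta = m_t - m_c

    # Absolute bounds: one-sided p-values for "below b_AL" and "above b_AU".
    p_al = norm.cdf((delta - b_al) / s)
    p_au = norm.sf((delta - b_au) / s)

    # Relative bounds, with m_c standing in for the unknown control mean mu_C.
    p_rl = norm.cdf((delta - b_rl * m_c) / s)
    p_ru = norm.sf((delta - b_ru * m_c) / s)

    # "Aggregate" p-value (before any multiple-testing adjustment).
    return min(p_al, p_au, p_rl, p_ru)
<\/pre>\n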

For an explanation of terms (\\(
\n\\begin{equation}
\n\\mu_{T},\\mu_{C},m_{T},m_{C},s_{T}^{2}, s_{C}^{2},N_{T},N_{C},s,b_{AL},b_{AU},b_{RL},b_{RU},p_{AL},p_{AU},p_{RL},p_{RU}
\n\\end{equation}
\n\\)), refer to the
glossary<\/a> at the end. <\/em><\/p>\n

Here is an example (ref. Fig. 3) of a metric-out-of-range alert configured on the \u201cPageClickRate\u201d metric in the Bing metric set. An alert is fired (and the experimenter notified) if this metric moves in the negative direction by 5% or more with statistical significance.<\/p>\n


Fig 3. Example of a metric-out-of-range alert<\/p><\/div>\n

P-value adjustment in alerting<\/strong><\/h5>\n

As pointed out in [5], \u201cthe na\u00efve approach to alerting on any statistically significant negative metric changes will lead to an unacceptable number of false alerts and thus make the entire alerting system useless\u201d, emphasizing the need for p-value adjustments for multiple testing. This is particularly important in practice because:<\/p>\n

\u2022 There can be multiple A\/B tests simultaneously running on the same product line.
\n\u2022 There can be multiple analyses (e.g., partial-day, 1-day, etc.) for an A\/B test.
\n\u2022 There can be hundreds or even thousands of metrics in the analysis.<\/p>\n

Depending on the user scenario, different adjustment methods can be chosen; their technical details are beyond the scope of this blog post (for a recent review, see [7]). Common methods include the O\u2019Brien & Fleming procedure [8] and the Benjamini & Hochberg false discovery rate control (for independent and dependent cases) [9] [10]. At Microsoft ExP we use the latter.<\/p>\n
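As an illustration, here is a textbook sketch of the Benjamini & Hochberg step-up procedure [9] applied to a list of alert p-values. It assumes independent tests and is not the exact adjustment used in ExP; the function name and the FDR level are hypothetical.<\/p>\n

<pre>
# Textbook Benjamini-Hochberg step-up procedure (illustrative sketch).
def benjamini_hochberg(p_values, fdr=0.05):
    """Return the indices of hypotheses rejected at false discovery rate `fdr`."""
    m = len(p_values)
    # Sort p-values in ascending order, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank * fdr / m:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return set(order[:k_max])

# Example: only the alerts whose p-values survive the adjustment would fire.
fired = benjamini_hochberg([0.0001, 0.04, 0.2, 0.012], fdr=0.05)
<\/pre>\n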

How does one get notified? <\/strong><\/h3>\n

The experiment owners are usually notified when an alert fires on their experiment. The notification can be an email with all the details regarding the alert, along with the metadata for that experiment. It may also contain a link for the experimenters to review and resolve the alert, or to suppress specific alerts in the future. This functionality is particularly useful when the system raises a false positive, or raises an alert under a known scenario too often. The notification channel and the time allotted to investigate and fix the issue also depend largely on the severity of the alert.<\/p>\n

Alerts can be categorized by priority, with warnings raised on key metrics prioritized above warnings raised on other metrics. At Microsoft ExP, we categorize alerts in the following way:<\/p>\n

\u2022 P0: A P0 alert indicates that something has gone catastrophically wrong (most severe) and needs immediate attention.
\n\u2022 P1: A P1 alert indicates that something quite serious is happening (severe), and experimenters should investigate it as soon as possible.
\n\u2022 P2: A P2 alert indicates that something potentially wrong is happening (less severe), and experimenters should investigate it.<\/p>\n

It is the experiment owners\u2019 responsibility to determine an appropriate course of action based on an alert\u2019s priority. How experimenters are notified can also depend on severity. For less severe alerts, such as P2, an email notification can be sufficient. For more severe alerts, such as P0 and P1, it can be beneficial to configure automated responses along with email notifications; these act as a safety net when experimenters miss taking timely action.<\/p>\n

One such automated response is the auto-shutdown of experiments. At Microsoft ExP, we enable alert-based auto-shutdown for some of our experiments. We do this in scenarios where action on an alert is time-sensitive and not shutting down the experiment could result in a sub-optimal user experience.<\/p>\n
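As a purely hypothetical illustration of this severity-based workflow (not ExP\u2019s actual configuration), the mapping from alert priority to response might look like the following; the names and types are assumptions made for the sketch.<\/p>\n

<pre>
# Hypothetical severity-to-response mapping (illustrative sketch only).
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    P0 = 0   # catastrophic: needs immediate attention
    P1 = 1   # severe: investigate as soon as possible
    P2 = 2   # less severe: investigate

@dataclass
class AlertResponse:
    send_email: bool
    auto_shutdown: bool

def response_for(severity: Severity, shutdown_enabled: bool) -> AlertResponse:
    """Email for every alert; auto-shutdown only for severe alerts on
    experiments that opted in to alert-based shutdown."""
    if severity is Severity.P2:
        return AlertResponse(send_email=True, auto_shutdown=False)
    return AlertResponse(send_email=True, auto_shutdown=shutdown_enabled)
<\/pre>\n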

Summary<\/strong><\/h3>\n

At Microsoft ExP we strive to create a platform that enables different product teams at Microsoft to run and analyze trustworthy A\/B tests. A key component of the experimentation platform is a robust alerting mechanism, which keeps experimenters informed of bugs, anomalies, and surprising results early on. In this blog post, we summarized how alerting is incorporated in our experimentation platform, including the most important alerts, how they are defined and analyzed, and the high-level alerting workflow.<\/p>\n

 
\n-Ankita Agrawal and Jiannan Lu, Microsoft Experimentation Platform<\/strong><\/p>\n

 
\n<\/p>\n

Glossary<\/strong><\/h3>\n
    \n
  1. Population Mean \(\begin{equation}\left(\mu_{T}, \mu_{C}\right)\end{equation}\)<\/strong>: The true average value of the metric in the treatment and control populations, respectively.\n<\/li>\n
  2. Sample Mean \(\begin{equation}\left(\boldsymbol{m}_{T}, \boldsymbol{m}_{C}\right)\end{equation}\)<\/strong>: The average value of the metric of interest as obtained from user telemetry after running the experiment for some days.\n<\/li>\n
  3. Absolute Sample Delta \(\begin{equation}\left(m_{T}-m_{C}\right)\end{equation}\)<\/strong>: The difference in sample means between the treatment and control groups.\n<\/li>\n
  4. Sample Variance \(\begin{equation}\left(s_{T}^{2}, s_{C}^{2}\right)\end{equation}\)<\/strong>: The sample variance of the metric in the treatment and control groups, respectively.\n<\/li>\n
  5. Sample Count \(\begin{equation}\left(N_{T}, N_{C}\right)\end{equation}\)<\/strong>: The total user counts in the treatment and control groups, obtained from user telemetry.\n<\/li>\n
  6. Sample standard deviation (s) of the absolute sample delta <\/strong> \(\begin{equation}\left(m_{T}-m_{C}\right)\end{equation}\): \(\begin{equation}s=\sqrt{\frac{s_{T}^{2}}{N_{T}}+\frac{s_{C}^{2}}{N_{C}}}\end{equation}\)\n<\/li>\n
  7. Equivalence bounds \(\begin{equation}\left(b_{A L}, b_{A U}, b_{R L}, b_{R U}\right)\end{equation}\)<\/strong>: The bounds set by the experimenter, which can be understood as the \u201cacceptable\u201d range; any movement outside this range should fire an alert, which corresponds to the alternative hypothesis we test against. The absolute and relative bounds are denoted by:\n
      \n
    1. Absolute Lower Bound: \\(\\begin{equation}b_{A L}\\end{equation}\\)<\/li>\n
    2. Absolute Upper Bound: \\(\\begin{equation}b_{A U}\\end{equation}\\)<\/li>\n
    3. Relative Lower Bound: \\(\\begin{equation}b_{R L}\\end{equation}\\)<\/li>\n
    4. Relative Upper Bound: \\(\\begin{equation}b_{R U}\\end{equation}\\)<\/li>\n<\/ol>\n<\/li>\n
    8. Test Statistic \(\begin{equation}\left(t_{A L}, t_{A U}, t_{R L}, t_{R U}\right)\end{equation}\)<\/strong>: A test statistic is a random variable calculated from the sample data; it measures the degree of agreement between the sample and the null hypothesis. The test statistics are denoted by:\n
        \n
      1. Absolute Lower \\(\\begin{equation}\\boldsymbol{t}_{A L}=\\frac{\\left(m_{T}-m_{C}\\right)-b_{A L}}{s}\\end{equation}\\)<\/li>\n
      2. Absolute Upper \\(\\begin{equation}\\boldsymbol{t}_{A U}=\\frac{\\left(m_{T}-m_{C}\\right)-b_{A U}}{s}\\end{equation}\\) <\/li>\n
      3. Relative Lower \(\begin{equation}\boldsymbol{t}_{R L}=\frac{\left(m_{T}-m_{C}\right)-b_{R L} \mu_{C}}{s}\end{equation}\) <\/li>\n
      4. Relative Upper \(\begin{equation}\boldsymbol{t}_{R U}=\frac{\left(m_{T}-m_{C}\right)-b_{R U} \mu_{C}}{s}\end{equation}\) <\/li>\n<\/ol>\n<\/li>\n
      9. P-value \(\begin{equation}\left(p_{A L}, p_{A U}, p_{R L}, p_{R U}\right)\end{equation}\)<\/strong>: The probability of obtaining results at least as extreme as the observed results of a statistical hypothesis test, assuming that the null hypothesis is true.\n
          \n
        1. Absolute: \\(\\begin{equation}\\boldsymbol{p}_{A L}=P\\left(X \\lt t_{A L}\\right), \\boldsymbol{p}_{A U}=P\\left(X \\gt t_{A U}\\right)
          \n\\end{equation}\\)<\/li>\n
        2. Relative: \\(\\begin{equation}\\boldsymbol{p}_{R L}=P\\left(X \\lt t_{R L}\\right), \\boldsymbol{p}_{R U}=P\\left(X \\gt t_{R U}\\right)
          \n\\end{equation}\\)<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n

          References<\/strong><\/h3>\n

          [1] A. Fabijan, J. Gupchup, S. Gupta, J. Omhover, W. Qin, L. Vermeer and P. Dmitriev, “Diagnosing Sample Ratio Mismatch in Online Controlled Experiments: A Taxonomy and Rules of Thumb for Practitioners,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.<\/p>\n

          [2] H. Hamel, “A\/B Testing fast & secure, or how to improve ads iteratively, quickly, safely,” https:\/\/medium.com\/criteo-engineering\/a-b-testing-fast-secure-or-how-to-improve-ads-iteratively-quickly-safely-ab614e0d83fc, 2019.<\/p>\n

          [3] R. Esfandani, “Monitoring and alerting for A\/B testing: Detecting problems in real time,” https:\/\/medium.com\/walmartglobaltech\/monitoring-and-alerting-for-a-b-testing-detecting-problems-in-real-time-4fe4f9b459b6, 2018.<\/p>\n

          [4] A. Fabijan, T. Blanarik, M. Caughron, K. Chen, R. Zhang, A. Gustafson, V. K. Budumuri and S. Hunt, “Diagnosing Sample Ratio Mismatch in A\/B Testing,” https:\/\/www.microsoft.com\/en-us\/research\/group\/experimentation-platform-exp\/articles\/diagnosing-sample-ratio-mismatch-in-a-b-testing\/, 2020.<\/p>\n

          [5] R. Kohavi, A. Deng, B. Frasca, T. Walker, Y. Xu and N. Pohlmann, “Online Controlled Experiments at Large Scale,” in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, 2013.<\/p>\n

          [6] D. Schuirmann, “A comparison of the Two One-Sided Tests Procedure and the Power Approach for assessing the equivalence of average bioavailability,” Journal of Pharmacokinetics and Biopharmaceutics, pp. 657-680, 1987.<\/p>\n

          [7] A. Farcomeni, “A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion,” Statistical Methods in Medical Research, vol. 17, pp. 347-388, 2008.<\/p>\n

          [8] P. O’Brien and T. Fleming, “A Multiple Testing Procedure for Clinical Trials,” Biometrics, vol. 35, no. 3, pp. 549-556, 1979.<\/p>\n

          [9] Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: A practical and powerful approach to multiple testing,” Journal of the Royal Statistical Society, Series B, vol. 57, pp. 289-300, 1995.<\/p>\n

          [10] Y. Benjamini and D. Yekutieli, "The control of the false discovery rate in multiple testing under dependency," Annals of Statistics, vol. 29, pp. 1165-1188, 2001.<\/p>\n
