{"id":770860,"date":"2021-09-08T11:21:21","date_gmt":"2021-09-08T18:21:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=770860"},"modified":"2021-09-08T11:21:21","modified_gmt":"2021-09-08T18:21:21","slug":"a-a-b-testing-evaluating-microsoft-teams-across-build-releases","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/a-a-b-testing-evaluating-microsoft-teams-across-build-releases\/","title":{"rendered":"A\/A\u2019\/B Testing: Evaluating Microsoft Teams across Build Releases"},"content":{"rendered":"

Microsoft Teams is a communication platform [1] that integrates meetings, chat, calling, and collaboration in one place. The application updates multiple times a month [2], adding new features and iterative improvements to existing ones. To ensure a high-quality user experience across frequent updates, the team needs to actively monitor the quality of each new build release.

A/B testing is the gold standard for comparing product variants [3]. As the Microsoft Teams Experimentation team, we have run hundreds of A/B tests. The best practice we always follow is to test one feature, or a combination of interacting features, at a time [4]. In that sense, A/B testing is like a 'unit-testing' tool. In practice, A/B testing is rarely used to compare whole builds, because each build integrates multiple feature changes and it is hard to tell which features cause regressions, if any. However, we can attempt to use A/B testing as an integration-testing tool for build comparison.

In this scenario, each user is randomly presented with either the current or the next build release, and we evaluate whether the variants produce statistically significant differences in key metrics. During our analysis, we identified two factors that introduce bias, making the comparison invalid and preventing it from generating useful insights. In this blog post, we discuss why the issue exists and introduce an **A/A'/B testing framework that successfully enables valid build comparison in Microsoft Teams**.

\"graphical

Figure 1. Builds comparison<\/p><\/div>\n
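For illustration, here is a minimal sketch of the kind of per-metric check such a comparison boils down to: a two-sample significance test on one key metric between the two randomly assigned variants. The metric values and the use of a plain t-test are assumptions for the example, not the production analysis pipeline.

```python
# A minimal sketch (not the production pipeline): compare one key metric
# between control (current build) and treatment (next build) with a
# two-sample t-test. Values below are illustrative placeholders.
from scipy import stats

control_metric   = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 2.0]   # users on the current build
treatment_metric = [2.0, 1.7, 2.5, 1.9, 2.1, 2.2, 2.4, 1.8]   # users offered the next build

t_stat, p_value = stats.ttest_ind(control_metric, treatment_metric)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # flag the metric if p falls below the chosen alpha
```

In practice, an experimentation platform runs such comparisons across many metrics at once and flags the ones that move significantly.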

## Why is comparing builds through A/B testing insufficient?

We started by running an A/A test [5] via the current A/B testing framework. The test is a sanity check to determine whether testing between builds would provide useful insights. Users in the control variant continue using the current build. Users in treatment receive a request to update to the next build, which is identical to the current build except for the build version number. Since no treatment effect was introduced, we expected to see no differences between the results of the two variants. But we observed many statistically significant metric movements. Why did those false positives show up? After investigation, we identified two factors that can introduce bias: **penetration difference** and **update effect** (reinstall-and-restart effect).

### Penetration difference

It takes time for the next build to penetrate across the treatment users. While an A/B test is running, the overall traffic volumes of the variants are close, but their compositions are quite different. Consider the example in Figure 2. Assume *v* is the current build version for A (the control variant), and *v+1* is the next build released to B (the treatment variant). On day 0, one hundred percent of users in A and B are using build *v*. Starting from day 1, users in B consist of two parts: those using build *v* and those using build *v+1*. The portion of the latter increases the longer the test runs and will eventually approach 100%, but the time to reach that point depends on how quickly build *v+1* penetrates across users in B. If most users are daily users, a high enough portion may be reached quickly; otherwise, it can take weeks or even months.
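To make the penetration dynamic concrete, here is a minimal simulation sketch. The population mix (60% daily users, the rest weekly), the update-on-next-launch behavior, and the 14-day horizon are illustrative assumptions, not measured Teams numbers.

```python
# A minimal sketch (illustrative numbers only) of why penetration depends on how
# often users launch the client: daily users pick up build v+1 almost immediately,
# weekly users take much longer.
import random

random.seed(0)

N_USERS, DAYS, DAILY_SHARE = 10_000, 14, 0.6   # assumed population mix
users = [
    {"p_active": 1.0 if random.random() < DAILY_SHARE else 1 / 7, "updated": False}
    for _ in range(N_USERS)
]

for day in range(1, DAYS + 1):
    on_new = active = 0
    for u in users:
        if random.random() < u["p_active"]:
            active += 1
            on_new += u["updated"]    # sessions from users who already updated run v+1
            u["updated"] = True       # the update applies from the next launch onward
    print(f"day {day:2d}: {on_new / active:.0%} of treatment sessions on build v+1")
```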

#### Impact on analysis

We have two options for performing the comparison: filtered analysis and standard analysis. **Filtered analysis** drills down to user activities on the target builds; in Figure 2 below, those are the activities covered by the blue boxes in A and the grey boxes in B. **Standard analysis** includes all traffic in both variants, comparing the activities covered by the blue boxes in A with those covered by the grey AND blue boxes in B.

\"chart\"

Figure 2. The change of traffic composition across time. v is the current build version used in A (the control variant), and v+1 is the next build released to B (the treatment variant).<\/p><\/div>\n
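The sketch below contrasts the two options on a toy table of per-session telemetry. The column names and values are hypothetical; the point is only the difference in which rows enter each comparison.

```python
# A minimal sketch contrasting standard and filtered analysis on hypothetical
# per-session telemetry; column names are assumptions, not the Teams schema.
import pandas as pd

sessions = pd.DataFrame({
    "variant": ["A", "A", "A", "B", "B", "B"],
    "build":   ["v", "v", "v", "v", "v+1", "v+1"],
    "metric":  [1.0, 1.2, 0.9, 1.1, 0.8, 1.0],
})

# Standard analysis: every session in each variant, whatever build it ran.
standard = sessions.groupby("variant")["metric"].mean()

# Filtered analysis: only sessions on the target build of each variant.
target_build = {"A": "v", "B": "v+1"}
on_target = sessions["build"] == sessions["variant"].map(target_build)
filtered = sessions[on_target].groupby("variant")["metric"].mean()

print(standard, filtered, sep="\n\n")
```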

Filtered analysis is a direct and intuitive way to compare builds, but selection bias [6] exists between current-build users in A and next-build users in B, so we should not compare those user groups directly. For example, daily users are very likely to update within 24 hours, while weekly users may take up to a week to upgrade. This means that on day 1, the average next-build user is more active than the average current-build user. Instead of measuring the outcome differences between builds, the comparison can be dominated by the characteristic differences between more engaged and less engaged users.

To resolve the issue, we can use standard analysis instead. Since we don't filter out any users, the average users in the two variants are identical, and we no longer need to worry about selection bias. But because the analysis covers non-targeted users, it dilutes the treatment effect.

To wrap up, **penetration difference introduces bias into filtered analysis but not standard analysis**. However, standard analysis still does not work, because update effect is another key factor introducing bias.

### Update effect

Users must reinstall and restart the Microsoft Teams application to update to a new build version. After reinstallation and restart, the application's memory usage and performance profile are reset. For users in the control variant, memory usage accumulates from the time the application was launched, which can increase the memory consumed by the application. In contrast, memory usage is significantly reduced after reinstallation and restart in treatment. That difference can in turn lead to secondary effects on application performance and user engagement. Therefore, the build comparison measures not only the differences between the results of two builds, but also **the impact of reinstallation and restart**.

In the A/A test mentioned earlier, we observed statistically significant metric movements even when performing standard analysis. This indicates that the **update effect** was the main reason for the gap between builds.

## Methods considered

We considered several methods to mitigate the impact of penetration difference and update effect. The key idea is to **only include users who have experienced an application restart or update in the analysis**.

### Triggered analysis

Triggered analysis [7] drills down to the activities of users who have restarted, regardless of whether they received the update. This alleviates the impact of the update effect, although it ignores the impact of reinstalling, which plays a less essential role. Triggered analysis requires no modification to the A/B testing framework, but we might still encounter selection bias: because the treatment variant proactively sends the update request to users, the probability of restarting the application is higher in treatment than in control.
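As a rough sketch, triggered analysis amounts to conditioning both variants on the same trigger event (a restart during the test window) before computing metrics; the table and column names below are hypothetical.

```python
# A minimal sketch of triggered analysis on hypothetical per-user data: restrict
# both variants to users who restarted the client during the test window.
import pandas as pd

users = pd.DataFrame({
    "variant":   ["A", "A", "A", "B", "B", "B"],
    "restarted": [True, False, True, True, True, False],
    "metric":    [1.1, 1.4, 1.0, 0.9, 1.0, 1.3],
})

triggered = users[users["restarted"]]
print(triggered.groupby("variant")["metric"].mean())
```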

### Forced update in the control variant

Alternatively, we can induce a counterfactual reinstall and restart in the control variant. This forces users in control to undergo the same process as those in treatment. The downside of this method is that we need extra logic to track the counterfactual update; otherwise, users could get stuck in an infinite reinstall loop.

### Forced update in an additional control variant (A/A'/B testing)

Building on the previous option, we can introduce a **Custom Control** variant [8] with a forced update. In this way, we run **A/A'/B tests** instead of traditional A/B tests. Figure 3 shows how it works. In an A/A'/B test, A is the **Standard Control** variant with the build in production (version *v*). A' is the **Custom Control** variant with the exact same build as A, except that the build version is updated to *v'*. B is the **Treatment** variant asking users to update to the next build (version *v+1*). The comparison between A and A' is mainly used as a sanity check, whereas the comparison between A' and B is used to derive the user-experience insights that feed into ship decisions.

\"Figure

Figure 3 The framework of A\/A’\/B testing for builds comparison.<\/p><\/div>\n
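A minimal sketch of how users could be split deterministically across the three variants in Figure 3 is shown below; the hashing scheme, salt, and equal split are assumptions for illustration rather than the actual Teams assignment service.

```python
# A minimal sketch of deterministic assignment across the three variants in
# Figure 3. The salt, equal split, and build labels are illustrative assumptions.
import hashlib

VARIANTS = [("A", "v"), ("A'", "v'"), ("B", "v+1")]
SALT = "teams-aab-build-test"  # hypothetical experiment salt

def assign(user_id: str):
    digest = hashlib.sha256(f"{SALT}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

for uid in ["user-001", "user-002", "user-003"]:
    variant, build = assign(uid)
    print(f"{uid} -> variant {variant}, offered build {build}")
```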

To avoid diluting the treatment effect, we can perform filtered analysis that drills down to user activities on versions *v'* and *v+1*. Let's use the traffic composition in Figure 4 to illustrate how. For the A/A' comparison, we perform standard analysis, comparing the blue boxes in A with the grey AND blue boxes in A'. For the A'/B comparison, we perform filtered analysis, comparing ONLY the grey boxes in A' and B.

\"Figure

Figure 4 Traffic composition change across time. The build version is v for A (Standard Control), v\u2019 for A\u2019 (Custom Control) and v+1 for B (Treatment).<\/p><\/div>\n
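The sketch below spells out the two comparisons on a toy per-session table: standard analysis for A versus A', and filtered analysis restricted to builds v' and v+1 for A' versus B. Column names and values are assumptions, not the Teams schema.

```python
# A minimal sketch of the two comparisons in an A/A'/B test on hypothetical
# per-session data: standard analysis for A vs A', filtered analysis
# (builds v' and v+1 only) for A' vs B.
import pandas as pd

sessions = pd.DataFrame({
    "variant": ["A", "A", "A'", "A'", "A'", "B", "B",   "B"],
    "build":   ["v", "v", "v",  "v'", "v'", "v", "v+1", "v+1"],
    "metric":  [1.0, 1.1, 1.2,  0.9,  1.0,  1.1, 0.95,  1.05],
})

# A vs A': standard analysis, all sessions in both variants (sanity check).
a_vs_a_prime = (
    sessions[sessions["variant"].isin(["A", "A'"])]
    .groupby("variant")["metric"].mean()
)

# A' vs B: filtered analysis, only sessions already running the updated builds.
updated = sessions["build"].isin(["v'", "v+1"])
a_prime_vs_b = (
    sessions[updated & sessions["variant"].isin(["A'", "B"])]
    .groupby("variant")["metric"].mean()
)

print(a_vs_a_prime, a_prime_vs_b, sep="\n\n")
```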

## We selected A/A'/B testing

We selected the A/A'/B testing proposal because it is simple to implement and analyze.

Let's revisit the A/A test mentioned at the beginning, for which we observed statistically significant differences during analysis. We ran the test again using the proposed framework. For the A versus A' comparison, we performed standard analysis to remove selection bias. About 30% of metrics showed highly statistically significant movements at a significance level of 0.001 (much lower than the commonly used 0.05), so those metric movements were likely to be true positives. This large gap was mainly caused by the update effect. For the A' versus B comparison, the proportion of moved metrics was close to the false positive rate (the significance level). This A/A test validated that the framework works for build comparison.
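A minimal sketch of that sanity check: compute the share of metrics whose movements are significant at the chosen alpha, for each comparison. The p-values below are illustrative placeholders, not the actual test results.

```python
# A minimal sketch of the sanity check described above: what share of metrics
# move at a given significance level. The p-values are illustrative placeholders.
def share_significant(p_values, alpha=0.001):
    return sum(p < alpha for p in p_values) / len(p_values)

p_a_vs_a_prime = [0.0004, 0.21, 0.00007, 0.0009, 0.48, 0.003, 0.0002, 0.9, 0.0001, 0.06]
p_a_prime_vs_b = [0.31, 0.72, 0.04, 0.55, 0.18, 0.83, 0.009, 0.64, 0.27, 0.47]

# A vs A': a large share of metrics significant even at a strict alpha points to
# a systematic difference (here, the update effect).
print(share_significant(p_a_vs_a_prime))
# A' vs B: a share close to the significance level is consistent with no real
# difference between the builds.
print(share_significant(p_a_prime_vs_b))
```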

### How did we deploy it?

We have adopted the framework in a scalable manner and use it to compare builds regularly. When we deployed it in production, we made one change: **we keep only the A' and B variants**. The reason is that the A versus A' comparison provides limited benefit: if we treat the difference between A and A' as the baseline, we can only detect an issue in A' when metric movements are far from that baseline. Instead, we implemented an automatic process that creates a duplicate, identical build with a new version number whenever there is a new build release. Whenever we start an A'/B test for a new build, we send that duplicated build to variant A'. This process ensures that we do not introduce any issues into A'. A further benefit of dropping variant A is that we can maximize the traffic allocated to A' and B, which increases metric sensitivity as much as possible.

The framework has helped the team ship builds safely. In a recent A'/B test for a real build release, we detected a number of statistically significant regressions, which led the team to halt the release and investigate before moving forward.

## Summary

We set out to use A/B testing to compare build releases for Microsoft Teams. We identified that penetration difference and update effect can introduce bias into the A/B analysis. To mitigate this issue, we introduced an A/A'/B testing framework. The framework enables us to compare product builds regularly in a trustworthy way and serves as a gate for the safe release of each new build.

## Acknowledgement

Special thanks to the Microsoft Teams Experimentation team, the Microsoft Experimentation Platform team, the Microsoft Teams Client Release team, Paola Mejia Minaya, Ketan Lamba, Eduardo Giordano, Peter Wang, Pedro DeRose, Seena Menon, and Ulf Knoblich.

*– Robert Kyle, Punit Kishor, Microsoft Teams Experimentation Team*

*– Wen Qin, Experimentation Platform*

## References

[1] "Microsoft Teams." https://www.microsoft.com/en-us/microsoft-teams/group-chat-software

[2] "Teams update process." https://docs.microsoft.com/en-us/microsoftteams/teams-client-update

[3] R. Kohavi and S. Thomke, "The Surprising Power of Online Experiments." https://hbr.org/2017/09/the-surprising-power-of-online-experiments

[4] R. Kohavi, R. M. Henne, and D. Sommerfield, "Practical guide to controlled experiments on the web: listen to your customers not to the hippo," in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '07), San Jose, California, USA, 2007, p. 959. doi: 10.1145/1281192.1281295.

[5] T. Crook, B. Frasca, R. Kohavi, and R. Longbotham, "Seven pitfalls to avoid when running controlled experiments on the web," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '09), Paris, France, 2009, p. 1105. doi: 10.1145/1557019.1557139.

[6] "Selection bias." https://en.wikipedia.org/wiki/Selection_bias

[7] N. Chen, M. Liu, and Y. Xu, "How A/B Tests Could Go Wrong: Automatic Diagnosis of Invalid Online Experiments," 2019.

[8] W. Machmouchi, S. Gupta, R. Zhang, and A. Fabijan, "Patterns of Trustworthy Experimentation: Pre-Experiment Stage." https://www.microsoft.com/en-us/research/group/experimentation-platform-exp/articles/patterns-of-trustworthy-experimentation-pre-experiment-stage/
