{"id":1100547,"date":"2024-11-12T16:13:26","date_gmt":"2024-11-13T00:13:26","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=1100547"},"modified":"2024-11-15T15:44:55","modified_gmt":"2024-11-15T23:44:55","slug":"experimentation-in-genai-c-teams-practices-for-continuous-improvement","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/experimentation-in-genai-c-teams-practices-for-continuous-improvement\/","title":{"rendered":"Experimentation in Generative AI: C++ Team\u2019s Practices for Continuous Improvement"},"content":{"rendered":"\n

By Sinem Akinci, Microsoft Developer Division, and Cindy Chiu, Microsoft Experimentation Platform

Generative AI [1] leverages deep learning models to identify underlying patterns and generate original content, such as text, images, and videos. This technology has been applied to various industries, including customer service, marketing, and software development. A popular example is GitHub Copilot, which generates code based on open-source data.

The generative AI space is undergoing rapid transformation, with new updates and changes emerging daily. Products leveraging generative AI must constantly decide on the right set of parameters, models, and prompts to find the best combination. Experimentation plays a crucial role in navigating this dynamic landscape, enabling data-driven decision-making and the continuous refinement of generative AI features. As a case study, let's explore how the Microsoft C++ team applies this in practice, using experimentation to develop and refine GitHub Copilot features.

In this blog post, we will first provide a general overview of best practices for experimenting with and evaluating generative AI features. Then we will highlight some of the practices the C++ team uses to develop GitHub Copilot features through experimentation and explain how these best practices benefit the product. Lastly, we will conclude with an example of a new feature we shipped by leveraging these practices.

Methods for making data-driven decisions for generative AI products

What are qualitative methods?

Qualitative methods [2] offer valuable insights into the user experience through approaches such as usability studies, surveys, focus groups, interviews, and diary studies. These methods help uncover nuances that are hard for quantitative methods to capture, providing an initial understanding of user interactions. However, because qualitative findings often come from smaller sample sizes, they may not provide a complete picture. Instead, these methods enable developers to identify gaps between features and user needs, particularly for generative AI features that involve both model content and a user interface.

What are quantitative methods?

Quantitative methods for evaluating generative AI features can be divided into two categories: offline evaluation and online evaluation.

Offline evaluation, which includes techniques like hyperparameter tuning and grid search, assesses model accuracy and feature performance before deployment. This approach works particularly well when there are known ground-truth values and clean datasets. By using various datasets and predefined metrics, developers can compare models and benchmarks cost-effectively without exposing them to actual users.
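As a concrete illustration, here is a minimal sketch of an offline grid search over prompt and temperature settings, scored against a labeled dataset. The model call, prompt templates, and the toy scoring metric are hypothetical placeholders, not a specific API.

```python
from itertools import product

# Hypothetical offline-evaluation sketch: try each candidate configuration on a
# labeled dataset and keep the configuration with the best average score.

def generate_output(prompt_template, temperature, example):
    """Placeholder for calling the model under evaluation."""
    raise NotImplementedError("plug in the model call here")

def score(candidate, reference):
    """Toy metric: token overlap between the candidate and the ground-truth reference."""
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / max(len(ref), 1)

def grid_search(dataset, prompt_templates, temperatures):
    best_config, best_score = None, float("-inf")
    for template, temperature in product(prompt_templates, temperatures):
        scores = [
            score(generate_output(template, temperature, ex), ex["reference"])
            for ex in dataset
        ]
        average = sum(scores) / len(scores)
        if average > best_score:
            best_config, best_score = (template, temperature), average
    return best_config, best_score
```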

Online evaluation, such as A/B testing, involves exposing the feature to actual customers. It verifies the results observed during offline testing in a real-world context, capturing true user interactions and ensuring the feature performs effectively in production.
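For intuition, the sketch below shows one simple way an A/B comparison of a per-user metric might be analyzed. The numbers are illustrative, and a real analysis would also consider guardrail metrics, pre-experiment bias, and multiple-comparison corrections.

```python
from scipy import stats

# Illustrative per-user values of an engagement metric (e.g., suggestion
# acceptance rate) for the control and treatment arms of an A/B test.
control = [0.12, 0.08, 0.15, 0.11, 0.09, 0.14, 0.10, 0.13]
treatment = [0.16, 0.14, 0.13, 0.18, 0.12, 0.17, 0.15, 0.19]

# Welch's t-test: does the treatment move the metric beyond random noise?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")
```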

Incorporating all methods into your product lifecycle
\"AI
AI solution lifecycle for data science and ML engineering<\/figcaption><\/figure>\n\n\n\n

The generative AI product lifecycle [3] is an iterative approach to preparing, deploying, and improving a generative AI feature over time. During the experimentation and evaluation stage, offline evaluation is used to assess whether the model performs better than other baselines. Although offline evaluation provides an understanding of model accuracy, it does not capture real user interactions, making online testing essential.

A/B testing helps validate the results by capturing real user interactions. Once the model is deployed, qualitative methods such as user studies can be used to collect user feedback, particularly for features designed for user interaction. This feedback is then incorporated to further refine and improve the feature, completing the product lifecycle.

Using progressive rollout to test your generative AI feature

What is progressive rollout?

Progressive rollout starts by exposing a new feature to a small percentage of users and gradually rolling it out to more users based on its performance. Traffic as small as a few thousand samples is used to test whether the feature works as expected and observe any movement in user metrics, rather than to make a definitive decision on shipping.
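A common way to implement this kind of gated exposure is deterministic, hash-based bucketing, sketched below. The function and feature names are illustrative, not a specific experimentation SDK.

```python
import hashlib

def in_rollout(user_id: str, feature: str, exposure_percent: float) -> bool:
    """Map each user to a stable bucket in [0, 100) and enable the feature
    only for users below the current exposure percentage. Raising the
    percentage keeps earlier users enrolled, so the audience only grows."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = (int(digest[:8], 16) % 10_000) / 100.0
    return bucket < exposure_percent

# Start with a small slice of traffic, then ramp up as metrics look healthy.
print(in_rollout("user-42", "copilot-quick-info", 1.0))   # 1% exposure
print(in_rollout("user-42", "copilot-quick-info", 25.0))  # later, 25% exposure
```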

What's the benefit of progressive rollout?

Mitigating risk of errors or bias: Due to the non-deterministic nature of AI, generative AI features can sometimes produce unexpected or inappropriate content. By gradually rolling out the feature, developers can be assured that the work they have done to address unexpected output holds up broadly, safeguarding against potential harm. This approach also helps in detecting data quality issues, such as Sample Ratio Mismatch (SRM) or inappropriate data movement, ensuring a more reliable deployment.
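As an aside, an SRM check is typically a simple goodness-of-fit test on the observed arm sizes. The sketch below uses illustrative counts and assumes a configured 50/50 split.

```python
from scipy import stats

# Sample Ratio Mismatch (SRM) check: compare observed user counts per arm
# against the configured 50/50 split. A tiny p-value suggests the assignment
# or logging pipeline is broken and the experiment results cannot be trusted.
observed = [50_212, 49_310]            # illustrative control / treatment counts
expected = [sum(observed) / 2] * 2     # expected counts under a 50/50 split

chi2, p_value = stats.chisquare(observed, f_exp=expected)
if p_value < 0.001:
    print(f"Possible SRM: chi2 = {chi2:.1f}, p = {p_value:.2e}")
```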

Learning and improvement through performance management: Latency is a key component of performance, and it can significantly impact generative AI products. Users may abandon a feature if the response time is too long. Measuring performance and latency is essential to ensure that users get the intended value in a timely manner. Identifying regressions in performance metrics, such as increased response times or higher crash rates, early on allows these issues to be addressed promptly. Progressive rollout not only allows the product team to ship hotfixes while the feature is still exposed to a small percentage of users, but also helps predict capacity needs more accurately, ensuring the best user experience as capacity ramps up.
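One simple way to guard against such regressions during a rollout is to compare a tail-latency statistic between arms, as in the sketch below; the samples and the 5% tolerance are illustrative.

```python
import statistics

def p95(samples):
    """95th-percentile latency of a list of response times."""
    return statistics.quantiles(samples, n=100)[94]

# Illustrative response times (in milliseconds) collected from each arm.
control_latency_ms = [420, 510, 390, 470, 600, 450, 480, 530, 410, 440,
                      460, 495, 505, 430, 475, 520, 415, 455, 485, 500]
treatment_latency_ms = [430, 540, 400, 490, 650, 470, 500, 560, 420, 460,
                        480, 515, 530, 445, 495, 545, 425, 475, 505, 525]

# Flag the treatment if p95 latency regresses by more than 5%.
if p95(treatment_latency_ms) > 1.05 * p95(control_latency_ms):
    print("Latency guardrail tripped: investigate before ramping up exposure.")
```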

Iterating experiments to optimize your feature

Why run multiple iterations? What are the benefits?

Developers frequently run multiple experiments on the same product. As highlighted in the generative AI product lifecycle, after collecting user feedback or analyzing experiment results, developers can iterate on the experiment to better incorporate the feedback and enhance the product. As generative AI models evolve, various models become available for production, and one key question is: which model is best for the users? The answer varies by feature. For instance, AI-assisted renaming requires quick response times. Renaming occurs during the natural developer flow, which demands a responsive interaction. If that responsiveness isn't achieved, the feature's benefit may decrease, as developers might prefer to continue their work rather than be delayed by latency. Conversely, features like a pull request reviewer benefit from models capable of more complex reasoning, where precision is more critical than speed. A/B testing different models helps developers determine whether users prefer faster responses or higher quality.

When iterating over experiments, teams can refine hypotheses, modify treatments, and test new variations; for example, experiments can be conducted on different language models. This iterative method enables a production experience that maximizes user engagement. Continuous iteration and refinement not only lead to more polished products but also ensure that the product evolves in alignment with user needs and preferences.

Combining best practices to help C++ users: Copilot in Quick Info case study

An example of a feature that the Microsoft C++ team developed using both qualitative and quantitative methods is the C++ Copilot integration in the Quick Info dialog in Visual Studio.

Copilot in Quick Info is an AI-based feature that provides users with an AI-generated summary of the symbol they are hovering over. Users select "Tell me more" on hover to invoke Copilot and get a summary of that particular symbol. The goal of this feature is to give users quick, accurate information about a symbol that may lack documentation, without switching context.

\"Example
Example of the symbol that invokes the Copilot in Quick Info feature<\/figcaption><\/figure>\n\n\n\n
\"An
An AI-generated summary of what the function does that appears in the Visual Studio Quick Info<\/figcaption><\/figure>\n\n\n\n

Progressive rollout of initial design

After initial development, the C++ team ran an A/B experiment to measure the feature's impact on a series of metrics. They defined metrics to verify that the feature would provide value to the customer without introducing errors into the product. This first iteration of experimentation revealed that the functionality improved engagement with Copilot Chat for C++ users without regressing error metrics.

Qualitative studies of initial design

In tandem, they ran a user study to validate the design of the feature. Notably, the developers interviewed prioritized quick results and wanted an option to follow up on the response. This feedback was instrumental in shaping the subsequent iterative A/B experiments.

Iterative experimentation on the feature

In response to this feedback, they ran two follow-up quantitative A/B experiments. First, to evaluate how quicker results affected user value, they ran an A/B experiment that swapped the model behind the feature for a lighter-weight, faster model. Second, to evaluate the follow-up prompt, they ran an A/B experiment with a new "Continue in chat window…" option added below the results, measuring how this affected product value and ensuring it did not introduce errors.

Iterative A/B experimentation on AI models can lead to broader learnings about product behavior. For example, features that are frequently invoked and sit close to users' workflows, such as this Quick Info feature, may benefit from models with faster response times. On the other hand, response time may matter less for features that provide more in-depth information and already break the user's workflow to interpret, such as the Fix with Copilot feature. These types of features benefit more from models that respond with greater verbosity and accuracy.

Putting things together

Determining the effectiveness of our generative AI feature requires a blend of various evaluation methods. We begin by deciding whether to start with quantitative or qualitative approaches. These evaluations are integrated into our product lifecycle to continually enhance our generative AI product. Once our experiment is set up, we progressively roll out the feature to minimize unexpected behavior. We start by testing on a small group before expanding to a broader audience. After obtaining our experiment results, we use them to refine and improve the product through iterative experimentation.

By combining these best practices, we achieve a comprehensive understanding of our generative AI feature's impact and effectiveness. This holistic approach ensures that our generative AI feature is both user-centric and performance-driven, providing a better user experience and achieving our business goals.

– Sinem Akinci (Microsoft Developer Division), Cindy Chiu (Microsoft Experimentation Platform)

References

[1] Reddington, C. (2024, May 14). How companies are boosting productivity with generative AI. The GitHub Blog. https://github.blog/ai-and-ml/generative-ai/how-companies-are-boosting-productivity-with-generative-ai/#what-is-generative-ai

[2] Stevenson, J., & Ostrowski, S. (2022, February 11). Measurably improve your product by combining qualitative and quantitative methods. Microsoft Research. https://www.microsoft.com/en-us/research/articles/measurably-improve-your-product-by-combining-qualitative-and-quantitative-methods/

[3] Peckham, S., & Day, J. (2024, July 1). Generative AI. Microsoft Learn. https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/generative-ai/
