{"id":933939,"date":"2023-04-18T09:00:00","date_gmt":"2023-04-18T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=933939"},"modified":"2023-08-29T09:50:16","modified_gmt":"2023-08-29T16:50:16","slug":"automatic-post-deployment-management-of-cloud-applications","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/automatic-post-deployment-management-of-cloud-applications\/","title":{"rendered":"Automatic post-deployment management of cloud applications"},"content":{"rendered":"\n

Cloud Intelligence\/AIOps blog series<\/h2>\n\n\n\n

In the first two blog posts in this series, we presented our vision for Cloud Intelligence\/AIOps (AIOps) research, and scenarios where innovations in AI technologies can help build and operate complex cloud platforms and services effectively and efficiently at scale. In this blog post, we dive deeper into our efforts to automatically manage large-scale cloud services in deployment. In particular, we focus on an important post-deployment cloud management task that is pervasive across cloud services \u2013 tuning configuration parameters. We also discuss SelfTune, a horizontal reinforcement learning (RL) platform for effective configuration management of various cloud services in deployment.<\/p>\n\n\n\n

\n
Read part 1<\/a><\/div>\n\n\n\n
Read part 2<\/a><\/div>\n<\/div>\n\n\n\n

Post-deployment management of cloud applications<\/h2>\n\n\n\n

Managing cloud applications includes mission-critical tasks such as resource allocation, scheduling, pre-provisioning, capacity planning and provisioning, and autoscaling. Currently, several of these tasks rely on hand-tuned and manually designed algorithms, heuristics, and domain knowledge. For a large cloud company like Microsoft, a hand-tuned, manually designed algorithm works well only to a certain extent, because deployments are extremely varied, large-scale, and involve complex interactions of various components. Moreover, user, customer, and application behavior can change over time, making yesterday\u2019s hand-tuning not as relevant today and even less so in the future. The varied nature of today\u2019s cloud technologies forces our engineers to spend an inordinate amount of time on special casing, introducing new configuration parameters, and writing or rewriting heuristics to set them appropriately. This also creates a lot of undocumented domain knowledge and dependence on a few individuals to solve significant problems. All of this, we believe, is unsustainable in the long term. <\/p>\n\n\n\n

As we discussed in the earlier posts in this blog series, the right AI\/ML formulations and techniques could help to alleviate this problem. Specifically, cloud management tasks are a natural fit for adopting the reinforcement learning paradigm. These tasks are repetitive in space and time; they run simultaneously on multiple machines, clusters, datacenters, and\/or regions, and they run once every hour, day, week, or month. For instance, the VM pre-provisioning service for Azure Functions is a continuously running process, pre-provisioning for every application. Scheduling of background jobs on substrate runs separately on every machine. Reinforcement learning likewise relies on repeated, iterative interaction to converge on an optimized setup and, hence, fits naturally with the basic functioning of these cloud management tasks.<\/p>\n\n\n\n


Our goal is to reduce the manual effort involved in ensuring service efficiency, performance, and reliability by augmenting, complementing, or replacing existing heuristics for various management tasks with general RL-based solutions. In this blog post, we present our recent solution frameworks that help cloud applications automatically tune their configuration parameters and design policies for managing those parameters over time. Our solutions require minimal engineering effort and no AI expertise from application developers or cloud operators.<\/p>\n\n\n\n

Example Microsoft scenarios<\/h2>\n\n\n\n

O365 Workload Manager: <\/strong>Workload Manager (WLM) is a process that runs on each of the backend Exchange Online (EXO) servers to help schedule resources (CPU, disk, network) to background jobs that periodically execute. WLM has several configuration parameters that need to be carefully set so that the throughput of the scheduler is maximized while also ensuring that the resources are not too strained to execute low-latency user-facing jobs (e.g., Outlook search). Could we help EXO infrastructure manage the various knobs that dictate the control logic implemented in the scheduler for optimizing resource management and user latency?<\/em><\/p>\n\n\n\n

Azure ML\/Spark:<\/strong> Spark is a widely used platform for distributed data analytics, and it comes with various configuration knobs that developers need to set appropriately based on their job context: Does the query involve JOIN clauses? How big are the data shards? Workload patterns change over time, and pre-trained models for choosing optimal configurations may not suffice. Can we help developers dynamically choose the deployment configuration based on workload signals?<\/p>\n\n\n\n

Azure Functions VM management:<\/strong> Can we tune the VM management policy implemented in Azure Functions for VM pre-fetching\/eviction to minimize cold starts and memory wastage over time? Our results<\/a> in simulations are quite encouraging. We want to engage with the Azure and MSR Redmond teams to discuss the possibility of tuning the policy in the production setting.<\/p>\n\n\n\n

Azure Kubernetes Service:<\/strong> AKS is chosen by first-party and third-party Azure customers to facilitate containerized development and deployment of cloud applications. The built-in workload autoscaling policies in AKS use several configuration parameters, which can be far from optimal in many scenarios. Can we help automatically adjust the parameters that govern resource allocation to containers running microservices, based on applications\u2019 workload patterns?<\/p>\n\n\n\n

Horizontal solution design for configuration tuning<\/h2>\n\n\n\n

We see three main reasons why this is the right time to design and incorporate an RL-based solution framework across cloud management tasks:<\/p>\n\n\n\n

    \n
  1. As the size and complexity of services in the cloud continue to increase, as our hardware footprint continues to include many SKUs, and as configuration and code get larger and more complex, heuristics and hand-tuning cannot provide optimal operations at all times, at least not without a significant and proportionate investment in human experts and engineers.<\/li>\n\n\n\n
  2. While we will have to rely on domain experts for key changes to systems and the services landscape on the cloud, using RL sub-systems can help reduce dependence on expert decisions and domain knowledge over time.<\/li>\n\n\n\n
  3. It is important to have a horizontal framework with a simple yet expressive API, with appropriate algorithms for tuning configuration parameters in an online fashion to optimize a developer-specific metric of interest or reward.<\/li>\n<\/ol>\n\n\n\n

    SelfTune framework<\/h2>\n\n\n\n

    We have designed and deployed the SelfTune framework to help cloud service developers automatically tune the configuration parameters in their codebase, which would otherwise be manually set or heuristically tweaked. SelfTune is an RL-based framework that helps developers automate complex post-deployment cloud management tasks such as parameter tuning and performance engineering.<\/p>\n\n\n\n

    SelfTune is hosted as a service on the public Azure cloud. First-party applications that are interested in post-deployment parameter tuning can use REST API calls to access SelfTune endpoints. The SelfTune framework has two components:<\/p>\n\n\n\n

      \n
    1. Client API<\/strong> provides the necessary support to access the SelfTune endpoints via REST API calls, namely, Predict<\/strong> for getting the parameters from the framework and SetReward<\/strong> for providing reward\/feedback to the framework (a minimal client sketch follows this list).<\/li>\n\n\n\n
    2. RL Engine <\/strong>implements a suite of ML\/RL algorithms for periodically updating the parameters and returning the latest values to the clients as well as for periodically computing the reward metrics.<\/li>\n<\/ol>\n\n\n\n
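To make this two-call surface concrete, below is a minimal client sketch of what the interaction could look like. The endpoint URL, instance name, and payload fields are assumptions for illustration only; the actual SelfTune REST contract is internal and may differ.

```python
import requests

# Hypothetical values for illustration; the real SelfTune endpoint and schema may differ.
SELFTUNE_ENDPOINT = "https://selftune.example.azure.com/api/v1"
INSTANCE_ID = "my-service-tuner"

def predict():
    """Ask the SelfTune RL Engine for the next set of configuration parameters to deploy."""
    resp = requests.post(f"{SELFTUNE_ENDPOINT}/Predict", json={"instanceId": INSTANCE_ID})
    resp.raise_for_status()
    # Assumed response shape: {"predictionId": "...", "parameters": {"knob_a": 0.4, ...}}
    return resp.json()

def set_reward(prediction_id, reward):
    """Report the observed reward (e.g., negative daily P95 latency) for a past prediction."""
    resp = requests.post(
        f"{SELFTUNE_ENDPOINT}/SetReward",
        json={"instanceId": INSTANCE_ID, "predictionId": prediction_id, "reward": reward},
    )
    resp.raise_for_status()
```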

      At the core of the SelfTune framework is the formulation of the post-deployment parameter tuning problem as that of \u201conline learning from bandit feedback.\u201d SelfTune assumes that the only interaction possible with the external system (i.e., the application being tuned) is black-box access to some form of feedback (e.g., daily P95 latency of the service). The framework repeatedly deploys <\/em>configuration parameters and observes <\/em>the corresponding rewards after a developer-defined period. As the operational environment (e.g., a production cluster running certain types of workloads) is constantly in flux, there is no single setting of parameters that will remain optimal throughout. Thus, SelfTune continuously runs the explore-exploit <\/em>paradigm of RL techniques \u2013 explore<\/em> new parameters in the vicinity of the currently deployed parameters, observe rewards, update its internal model based on the reward, and exploit<\/em> parameters that tend to give high rewards over time.<\/p>\n\n\n\n
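To illustrate the explore-exploit loop in its simplest form, here is a rough sketch of a perturbation-based tuner. This is not the Bluefin algorithm described next; the parameter bounds, step size, and reward function are placeholders, and the exploitation rule (keep the best-so-far) is deliberately naive.

```python
import random

def explore_exploit(initial_params, bounds, get_reward, rounds=100, step=0.05):
    """Naive explore-exploit loop over real-valued parameters.

    bounds: {name: (low, high)}; get_reward: deploys the parameters, waits a
    developer-defined period, and returns the observed reward.
    """
    best = dict(initial_params)
    best_reward = get_reward(best)
    for _ in range(rounds):
        # Explore: sample a candidate in the vicinity of the currently deployed parameters.
        candidate = {}
        for name, value in best.items():
            low, high = bounds[name]
            perturbed = value + random.uniform(-step, step) * (high - low)
            candidate[name] = min(high, max(low, perturbed))
        reward = get_reward(candidate)
        # Exploit: keep the parameters that have yielded the highest reward so far.
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best
```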

      We have designed a bandit learning algorithm called Bluefin in SelfTune that crystallizes the aforementioned idea. Our algorithm has lower sample complexity, meaning it takes fewer rounds to converge to the desired values when tuning multiple real-valued parameters simultaneously, compared to peer techniques like multi-armed bandits (which form the base<\/a> of Azure Personalizer), Bayesian optimization (used by the MLOS<\/a> framework), or genetic algorithms. This is provable under some assumptions on the reward function, but we observe, across multiple deployments, that the algorithm converges to good solutions in practice even when the theoretical assumptions are violated.<\/p>\n\n\n\n

      We have open-sourced Bluefin through Vowpal Wabbit, a popular RL library for practitioners, which houses the core algorithms of Azure Personalizer. We are continuing to work on designing vertical RL algorithms and horizontal feature learning for the systems domain. Besides Bluefin, SelfTune supports a suite of black-box optimization techniques (e.g., Bayesian optimization) and RL techniques (e.g., Deep Deterministic Policy Gradients) that cloud applications can choose from based on their needs.<\/p>\n\n\n\n

      A simple integration use case:<\/strong> Consider the scenario of setting PySpark cluster configuration parameters for Azure ML jobs that are spawned for ML workloads in the O365 MS-AI organization. The workloads are composed of various data processing jobs and run on various Azure ML clusters with different capacities and hardware. It is non-trivial to set parameters for these jobs such that the workloads complete quickly and do not fail midway due to resourcing issues, thereby losing all computation.<\/p>\n\n\n\n

      Basic SelfTune workflow: <\/strong>The basic integration of SelfTune in the Azure ML pipeline is illustrated in the figure below and sketched in code after the numbered steps. Here, the developer wants to tune seven key Apache PySpark parameters per job, namely driver memory, driver cores, executor cores, number of executors, executor memory, spark.sql.shuffle.partitions, and spark.default.parallelism.<\/p>\n\n\n\n

      Figure: Basic SelfTune workflow in the Azure ML pipeline.<\/figure>\n\n\n\n
        \n
      1. Developer invokes Predict<\/strong> on the SelfTune instance, asking for the parameters for the next job.<\/li>\n\n\n\n
      2. SelfTune service responds with the predicted parameters for the next job.<\/li>\n\n\n\n
      3. The developer submits a job using SelfTune\u2019s predicted parameters (outside SelfTune\u2019s purview).<\/li>\n\n\n\n
      4. Once the job is complete, the cluster sends the job metadata to the data store (outside SelfTune\u2019s purview).<\/li>\n\n\n\n
      5. Developer queries rewards for previously completed jobs, if any, from the data store (e.g., Azure ML workspace).<\/li>\n\n\n\n
      6. The data store responds with the rewards (e.g., job completion times, which are part of the job metadata) from previously completed jobs.<\/li>\n\n\n\n
      7. If the rewards exist in the store, the developer invokes SetReward<\/strong> for those jobs (which pushes the rewards to the SelfTune service endpoint hosted somewhere).<\/li>\n<\/ol>\n\n\n\n
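Putting the steps above together, the developer-side loop might look roughly like the sketch below. It reuses the hypothetical predict and set_reward helpers from the earlier client sketch; submit_spark_job and query_completed_jobs stand in for the developer's own job-submission and data-store queries, and the reward is assumed to be the negative job completion time.

```python
# Seven PySpark knobs tuned per job in this example.
SPARK_KNOBS = [
    "driver_memory", "driver_cores", "executor_cores", "num_executors",
    "executor_memory", "spark.sql.shuffle.partitions", "spark.default.parallelism",
]

def run_next_job(predict, set_reward, submit_spark_job, query_completed_jobs):
    """One iteration of the workflow; the four callables are illustrative stand-ins."""
    # Steps 1-2: ask SelfTune for the parameters to use for the next job.
    prediction = predict()
    params = {knob: prediction["parameters"][knob] for knob in SPARK_KNOBS}

    # Steps 3-4: submit the job with the predicted configuration; the cluster later
    # writes job metadata to the data store (both outside SelfTune's purview).
    submit_spark_job(params, tag=prediction["predictionId"])

    # Steps 5-7: query rewards for previously completed jobs and push them to SelfTune.
    for job in query_completed_jobs():
        # Shorter completion time => higher reward.
        set_reward(job["predictionId"], reward=-job["completion_time_seconds"])
```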

        Self-tuning substrate background jobs scheduler<\/h2>\n\n\n\n

        User-level background job scheduling<\/strong>: All the substrate backend servers in EXO datacenters (which host user mailboxes) locally run hundreds of low-priority, latency-insensitive, periodic workloads (e.g., mailbox replication, encryption, event-driven assistants). Workload Management (WLM) is a core substrate service that runs on all such backend servers. It handles the user-level scheduling of workloads on the servers, a) with the goal of completing tasks as resources become available (at micro-granular timescales), and b) mindful that high-priority, latency-sensitive workloads will bypass this scheduler. Thus, besides meeting workload SLAs, it is critical to ensure high availability of resources, especially during peak hours.<\/p>\n\n\n\n

        Tuning real-valued configuration parameters<\/strong>: The scheduler is implemented today as part of a huge codebase in the substrate core. It trades off resource utilization and completion rates by dynamically ramping the number of concurrent background tasks requiring access to the resources up and down. This is achieved by carefully setting hundreds of real-valued configuration parameters. At the server level, we can achieve better resource utilization and throughput by automatically tuning the key parameters based on the workloads each server receives and the ensuing resource health fluctuations.<\/p>\n\n\n\n

        Impact of using SelfTune in WLM<\/strong>: We have integrated SelfTune with the substrate background scheduler codebase (the change required is simple, on the order of tens of lines of code, as shown in the figure below). We first deployed in the inner rings of substrate (over 3,000 servers). The results gathered over 4-5 weeks of deployment clearly indicate that tuning helps on most of the deployed servers, increasing throughput by at least 20% across multiple forests on their heavily throttled servers, with a marginal increase in CPU health and insignificant-to-mild degradation of disk health. Based on this validation, we have now rolled out SelfTune integration to most EXO backend servers (nearly 200,000) across the worldwide production ring.<\/p>\n\n\n\n

        Figure: SelfTune integration with the substrate background job scheduler.<\/figure>\n\n\n\n
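The actual code change is internal to the WLM codebase, but a hedged sketch of what such an integration could look like is given below. The knob names, the reward definition (tasks completed minus a resource-health penalty), the tuning cadence, and the selftune_client wrapper are assumptions for illustration, not the production change.

```python
def scheduling_cycle(scheduler, selftune_client):
    """One tuning cycle: fetch knob values, run the scheduler, report the reward."""
    prediction = selftune_client.predict()
    knobs = prediction["parameters"]

    # Apply the predicted concurrency-control knobs (hypothetical names).
    scheduler.max_concurrent_tasks = knobs["max_concurrent_tasks"]
    scheduler.ramp_up_step = knobs["ramp_up_step"]
    scheduler.ramp_down_step = knobs["ramp_down_step"]

    # Run background tasks for one interval and observe the outcome.
    tasks_completed, health = scheduler.run_for_interval()

    # Reward trades off task throughput against resource-health degradation.
    reward = tasks_completed - health.penalty()
    selftune_client.set_reward(prediction["predictionId"], reward)
```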

        Ongoing work and future AI+systems research<\/h2>\n\n\n\n

        SelfTune is a general platform and can be readily applied to many RL-for-cloud scenarios without the additional feature engineering or onboarding effort that is typically required in AIOps. We expect developers to define a suitable spatial and temporal tuning scope for their service or system, for example, tuning the parameters of a service running in a cluster at the level of individual machines, every hour of every day. Thus, instead of hand-coding the optimal operating points for the various machines or clusters that the service operates in, we could integrate SelfTune into the service codebase to dynamically figure them out over time, based on real-time feedback at the chosen temporal granularity.<\/p>\n\n\n\n

        Our work raises many interesting design and algorithmic questions in this space. For instance, can we automatically scope the tuning problem based on observed context such as cluster type, hardware, and workload volumes, and find optimal parameters per scope? Given that typical cloud applications have hundreds, if not thousands, of knobs to tune, can we automatically identify the knobs that impact the performance metric of interest, and then tune those knobs more efficiently<\/a>?<\/p>\n\n\n\n

        A combination of system insights, ML formulations, and cross-layer optimization is vital for effective post-deployment management of cloud applications and services. We will post an update to this blog post on our ongoing work in this space soon. Meanwhile, the final blog post in this series will explore how AIOps can be made more comprehensive by spanning the entire cloud stack.<\/p>\n\n\n\n

        \n
        Read part 4<\/a><\/div>\n<\/div>
