{"id":110854,"date":"2017-06-29T10:00:00","date_gmt":"2017-06-29T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/power-platform\/blog\/power-automate\/tracking-deployments\/"},"modified":"2025-06-11T08:13:03","modified_gmt":"2025-06-11T15:13:03","slug":"tracking-deployments","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/power-platform\/blog\/power-automate\/tracking-deployments\/","title":{"rendered":"Flow of the Week: Tracking Deployments"},"content":{"rendered":"
Hello Flow Community!<\/p>\n
Today we\u2019re bringing you a post from one of our internal engineers. This post is about a flow that we, the team, use in our own environment and workday.<\/em><\/p>\n The Microsoft Flow portal and the backend service are deployed to multiple Azure regions. New features and bug fixes are deployed by the team at a regular cadence. The deployment follows a safe deployment sequence \u2013 an approach in which deployment proceeds from the regions with the least usage to the regions with the highest usage. During a deployment, the team may get an incident through automated runners or through a customer report. At that point, the team must investigate the incident and make several decisions. This involves:<\/p>\n The time required to investigate and take corrective action can be significantly reduced if all the relevant information is easily accessible.<\/p>\n We solve part of the problem by tracking current and historical information about deployments through Microsoft Flow. We do this by maintaining the current deployment snapshot and the log of previous deployments in two SharePoint lists. In this blog, we will walk through the flow that captures these data points for reference during investigations.<\/p>\n The Microsoft Flow portal and the backend service log information that helps monitor the health of the service. These logs move through a data pipeline and finally land in Kusto, an internal data warehouse. We have a custom connector for Kusto that allows us to query Kusto data from a flow. Once we get the list of recent deployments from Kusto, we need to iterate over each deployment record to process it \u2013 so we add an \u201cApply to each\u201d block. Since the flow runs at a 5-minute frequency but looks back at 1 hour of data, it is possible that some of the deployment records have already been processed. 
To handle this case, we check whether a corresponding item has already been created for the record in the \u201cDeployments history\u201d list. Since we know the exact record that we are looking for, we can construct an OData filter query to get an exact match of this record \u2013 if it is in the list.<\/p>\n Since the filter on the Current Deployment snapshot list returns a list of matches \u2013 even though it will always contain a single item \u2013 we need to extract the first object of the list. We do this by using a Compose action followed by a Parse JSON action. The output of the Parse JSON action can then be used in subsequent actions.<\/p>\n Using Microsoft Flow, it was super easy to build a no-code solution to track both the deployment history and the state of current deployments across regions.<\/p>\n","protected":false},"excerpt":{"rendered":" The Microsoft Flow Engineering team member Kartik shares flows that the team itself uses to make our work more efficient.<\/p>\n","protected":false},"author":362,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ms_queue_id":[],"ep_exclude_from_search":false,"_classifai_error":"","_classifai_text_to_speech_error":"","_alt_title":"","ms-ems-related-posts":[],"footnotes":""},"audience":[3378],"content-type":[],"job-role":[],"product":[3474],"property":[],"topic":[],"coauthors":[2928],"class_list":["post-110854","post","type-post","status-publish","format-standard","hentry","audience-it-professional","product-power-automate"],"yoast_head":"\n\n
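As an illustration, the exact-match lookup and first-object extraction described above can be sketched in Python. The list name, field names, and helper functions here are assumptions for illustration only, not the actual SharePoint connector API:

```python
# Sketch of the exact-match OData filter and first-item extraction described
# above. The field names and first_match() helper are hypothetical stand-ins
# for the SharePoint "Get items" filter and the Compose + Parse JSON actions.

def build_filter(env, role, region, version):
    """Build an OData $filter string that matches at most one record."""
    return (f"Environment eq '{env}' and Role eq '{role}' "
            f"and Region eq '{region}' and BuildVersion eq '{version}'")

def first_match(items):
    """The filtered query returns a list even for a single match; take the
    first object (this is what Compose + Parse JSON accomplish in the flow)."""
    return items[0] if items else None
```

Because the filter pins down every key field, the query returns either an empty list (record not yet processed) or a single-item list.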
Solution<\/h2>\n

 <\/p>\nGetting the version of the currently deployed bits<\/h2>\n
\nWe can get the version of the currently deployed bits for both the portal and the backend service by making a query to Kusto. Further, since a deployment can complete at any time, we run the query every 5 minutes. Even though we run the flow at a 5-minute frequency, we look at the last 1 hour of data to get the list of recent deployments. This is done to factor in any latency in copying logs from the deployed service to Kusto, clock skew, and even transient failures in downstream systems.<\/p>\nProcessing the list of recent deployments<\/h2>\n
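The recurrence and look-back query from the previous section can be sketched as follows. The table name, columns, and the `run_query` callable are hypothetical; the real flow uses a custom Kusto connector against the team's own schema:

```python
# Hypothetical sketch of the 5-minute recurrence with a 1-hour look-back.
# DeploymentEvents and its columns are illustrative, not the actual schema.

KUSTO_QUERY = """
DeploymentEvents
| where Timestamp > ago(1h)   // look back 1 hour to absorb ingestion latency
| summarize arg_max(Timestamp, BuildVersion) by Environment, Role, Region
"""

def recent_deployments(run_query):
    # The flow triggers this on a 5-minute recurrence, so consecutive runs
    # overlap and may return the same deployment record more than once;
    # the duplicate check against the history list handles that.
    return run_query(KUSTO_QUERY)
```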
\nWe determine whether a corresponding item is in the Deployment History list by checking the length of the item list returned by the above action. If the length is 0 \u2013 which means no corresponding item was found in the Deployment History list \u2013 we then try to find the corresponding item in the Current Snapshot list. This is again done by using an OData filter to find an exact match. Since the Current Snapshot list was initially created to have one item per Environment, Role, and Region, we will always find one unique record.<\/p>\n
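The branching above can be sketched like this; `history_matches` and `snapshot_matches` stand for the item lists returned by the two OData-filtered lookups, and the names and return values are illustrative:

```python
# Sketch of the length check and snapshot lookup described above.

def process_record(history_matches, snapshot_matches):
    if len(history_matches) > 0:
        # Already logged in the Deployment History list: nothing to do.
        return "already-processed"
    # Not in history; the Current Snapshot list holds exactly one item per
    # Environment/Role/Region, so the filtered lookup yields a unique record.
    snapshot_item = snapshot_matches[0]
    return ("create-history-item", snapshot_item["Id"])
```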
 <\/p>\nCreating a new deployment history item and updating the current snapshot<\/h2>\n
\nFinally, we create a new item in the Deployment History list and update the corresponding item in the Current Deployment snapshot list.
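A minimal sketch of this final step, assuming hypothetical `create_item` and `update_item` helpers that stand in for the SharePoint \u201cCreate item\u201d and \u201cUpdate item\u201d actions (their signatures here are assumptions):

```python
# Final step of the flow: append to history, then refresh the snapshot.
# List names and helper signatures are illustrative placeholders.

def record_deployment(record, snapshot_id, create_item, update_item):
    create_item("Deployment History", record)
    update_item("Current Deployment Snapshot", snapshot_id, record)
```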
<\/p>\nSummary<\/h2>\n