Azure updates Archives - Inside Track Blog: How Microsoft does IT
http://approjects.co.za/?big=insidetrack/blog/tag/azure-updates/

How Microsoft is transforming its own patch management with Azure
http://approjects.co.za/?big=insidetrack/blog/how-microsoft-is-transforming-its-own-patch-management-with-azure/
Wed, 17 Apr 2024 16:14:17 +0000

[Editor’s note: Azure Update Management is becoming Update Management Center, which is currently available in public preview. Please note that this content was written to highlight a particular event or moment in time. Although that moment has passed, we’re republishing it here so you can see what our thinking and experience was like at the time.]

At Microsoft Digital Employee Experience (MDEE), patch management is key to our server security practices. That’s why we set out to transform our operational model with scalable DevOps solutions that still maintain enterprise-level governance. Now, MDEE uses Azure Update Management to patch tens of thousands of our servers across the global Microsoft ecosystem, both on premises and in the cloud, in Windows and in Linux.

With Azure Update Management, we have a scalable model that empowers engineering teams to take ownership of their server updates and patching operations, giving them the agility they need to run services according to their specific business needs. We’ve left our legacy processes behind and are meeting our patch compliance goals month after month since implementing our new, decentralized DevOps approach. Here’s an overview of how we completed the transformation.

For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=rXB8ez9XVqc, select the “More actions” button (three dots icon) below the video, and then select “Show transcript.”

Experts discuss the process and tools Microsoft is using to natively manage cloud resources through Azure.

The journey to Azure Update Management

Back in January 2017, the Microsoft Digital Manageability Team started transitioning away from the existing centralized IT patching service and its use of Microsoft System Center Configuration Manager. We planned a move to a decentralized DevOps model to reduce operations costs, simplify the service and increase its agility, and enable the use of native Azure solutions.

Microsoft Digital was looking for a solution that would provide overall visibility and scalable patching while also enabling engineering teams to patch and operate their servers in a DevOps model. Azure Update Management provides the feature set and scale that we needed to manage server updates across the Microsoft Digital environment.

Azure Update Management can manage Linux and Windows, on premises and in cloud environments, and provides:

  • At-scale assessment capabilities
  • Scheduled updates within specified maintenance windows
  • Logging to troubleshoot update failures

We also took advantage of new advanced capabilities, including:

  • Maintenance windows that target servers in Azure based on subscriptions, resource groups, and tags
  • Pre/post scripts that run before and after the maintenance window to start turned-off servers, patch them, and then turn them off again
  • Control over server reboot options
  • Inclusion or exclusion of specific patches

A graphic showing the solution architecture for Azure Update Management implementation.
This graphic demonstrates the solution architecture for our complex Microsoft Digital environment.
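To make the tag-based targeting concrete, here's a minimal sketch of how a maintenance window's scope can be resolved against a server inventory. The inventory shape, subscription names, and the `patch-ring` tag are illustrative assumptions, not the actual Azure data model.

```python
# Sketch: selecting servers for a maintenance window by subscription,
# resource group, and tags -- mirroring how Azure Update Management
# scopes a deployment. Inventory shape and names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    subscription: str
    resource_group: str
    tags: dict = field(default_factory=dict)

def in_scope(server, subscriptions, resource_groups, required_tags):
    """Return True if the server matches every scoping condition."""
    return (
        server.subscription in subscriptions
        and server.resource_group in resource_groups
        and all(server.tags.get(k) == v for k, v in required_tags.items())
    )

inventory = [
    Server("web-01", "prod-sub", "rg-web", {"patch-ring": "ring1"}),
    Server("web-02", "prod-sub", "rg-web", {"patch-ring": "ring2"}),
    Server("db-01", "test-sub", "rg-db", {"patch-ring": "ring1"}),
]

ring1 = [
    s.name
    for s in inventory
    if in_scope(s, {"prod-sub"}, {"rg-web"}, {"patch-ring": "ring1"})
]
print(ring1)  # only web-01 matches all three conditions
```

Because scoping is declarative, newly provisioned servers with the right tags fall into the correct maintenance window without anyone editing a server list.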

Completing that transformation with Azure Update Management required the Manageability Team to achieve three main goals:

  • Enhance compliance reporting to give engineering teams a reliable and accurate “source of truth” for patch compliance.
  • Ensure that 95 percent of the total server population in the datacenter would be compliant for all vulnerabilities being scanned, enabling a clean transfer of patching duties to application engineering teams.
  • Implement a solution that could patch at enterprise scale.

Microsoft Digital enhanced reporting capabilities by creating a Power BI report that married compliance scan results with the necessary configuration management database details. This provided a view of both current and past patch cycle compliance, setting a point-in-time measure within the broader context of historic trends. Engineers were now able to quickly and accurately remediate without wasting time and resources.
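The join behind that report can be sketched in a few lines: enrich each scan result with ownership details from the CMDB, then roll compliance up per owning team. The field names and team structure here are illustrative, not the production schema.

```python
# Sketch: joining vulnerability-scan results with CMDB records to build a
# per-team compliance view, in the spirit of the Power BI report described
# above. Field names and values are illustrative assumptions.
scan_results = [
    {"server": "app-01", "compliant": True},
    {"server": "app-02", "compliant": False},
    {"server": "sql-01", "compliant": True},
]
cmdb = {
    "app-01": {"owner": "Team A", "environment": "prod"},
    "app-02": {"owner": "Team A", "environment": "prod"},
    "sql-01": {"owner": "Team B", "environment": "prod"},
}

# Enrich each scan row with ownership details from the CMDB.
report = [{**row, **cmdb.get(row["server"], {})} for row in scan_results]

def team_compliance(rows):
    """Roll up the compliance percentage per owning team."""
    totals = {}
    for r in rows:
        ok, n = totals.get(r["owner"], (0, 0))
        totals[r["owner"]] = (ok + (1 if r["compliant"] else 0), n + 1)
    return {team: round(100 * ok / n, 1) for team, (ok, n) in totals.items()}

print(team_compliance(report))  # Team A is at 50%, Team B at 100%
```

Attributing every noncompliant server to an owning team is what makes a decentralized model workable: each team sees exactly which of its servers drag the number down.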

The report also included 30-day trend tracking and knowledge base (KB)-level reporting. The Manageability Team also gathered feedback from engineering groups to make dashboard enhancements like adding pending KB numbers on noncompliant servers and information about how long a patch was pending on a server.

We focused on achieving that 95 percent key performance indicator by “force remediating” older vulnerabilities first, upgrading or uninstalling older applications. With Configuration Manager landing patches each cycle, engineering teams began to consistently meet the 95 percent goal.
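The "oldest first" prioritization can be sketched as a simple ordering plus a stopping condition: work through noncompliant servers by vulnerability age until the fleet crosses the 95 percent bar. The fleet data and day counts below are made up for illustration.

```python
# Sketch: prioritizing force remediation by vulnerability age until the
# 95 percent compliance KPI is met. Fleet data is illustrative.
def remediation_order(noncompliant):
    """Oldest pending vulnerability first."""
    return sorted(noncompliant, key=lambda s: s["days_pending"], reverse=True)

def servers_to_fix(servers, target=0.95):
    """How many servers must be remediated, oldest first, to hit the KPI."""
    total = len(servers)
    compliant = sum(1 for s in servers if s["days_pending"] == 0)
    needed = 0
    for s in remediation_order([s for s in servers if s["days_pending"] > 0]):
        if compliant / total >= target:
            break
        compliant += 1
        needed += 1
    return needed

# 18 of 20 servers compliant (90%); one remediation reaches 95%.
fleet = [{"name": f"srv-{i:02d}", "days_pending": 0} for i in range(18)] + [
    {"name": "srv-18", "days_pending": 45},
    {"name": "srv-19", "days_pending": 10},
]
print(servers_to_fix(fleet))  # remediating the 45-day-old backlog first
```

Attacking the oldest backlog first both shrinks the riskiest exposure and moves the KPI fastest, since those servers have accumulated the most missing patches.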

Finally, as a native Azure solution available directly through the Azure portal, Azure Update Management provided the flexibility and features needed for engineering teams to remediate vulnerabilities while satisfying these conditions at scale.

[Explore harnessing first-party patching technology to drive innovation at Microsoft. Discover boosting Windows internally at Microsoft with a transformed approach to patching. Unpack Microsoft’s cloud-centric architecture transformation.]

Decoding our transformation

In the past, “white glove” application servers required additional coordination or extra steps during patching, like removing a server from network load balancing or stopping a service before patches could be applied. The traditional system typically required a patching team to coordinate patch deployment with the team that owned the application, all to ensure that the application would not be affected by recently installed patches.

We implemented a number of changes to transition smoothly from that centralized patching service to using Azure Update Management as our enterprise solution. Our first step was to deliver demos to help engineering teams learn to use Azure Update Management. These sessions covered everything from the prerequisites necessary to enable the solution in Azure to how to schedule servers, apply patches, and troubleshoot failures.

The Manageability Team also drew from its own experience getting started with Azure Update Management to create a toolkit to help engineering teams make the same transition. The toolkit provided prerequisite scripts, like adding the Microsoft Monitoring Agent extension and creating an Azure Log Analytics workspace. It also contained a script to set up Azure Security Center when teams had already created default workspaces; since Azure Update Management supports only one automation account and Log Analytics workspace, the script cleaned up the automation account and linked it to the workspace used for patching.
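The workspace cleanup the toolkit performed can be sketched as a small reconciliation step: drop any stale link the Automation account already has (such as a default workspace created by Azure Security Center) and link it to the workspace used for patching. The data model here is a deliberate simplification of the real Azure resources.

```python
# Sketch: enforcing the one-account, one-workspace constraint described
# above. The link records are a simplified stand-in for Azure resources.
def link_for_patching(links, automation_account, patching_workspace):
    """Remove stale workspace links, then link the account for patching."""
    # Drop any existing link for this Automation account (e.g., a default
    # workspace created earlier by Azure Security Center).
    cleaned = [l for l in links if l["account"] != automation_account]
    cleaned.append({"account": automation_account,
                    "workspace": patching_workspace})
    return cleaned

links = [{"account": "auto-01", "workspace": "default-asc-workspace"}]
links = link_for_patching(links, "auto-01", "patching-workspace")
print(links)  # auto-01 is now linked only to the patching workspace
```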

Next, the Manageability Team took on proving scalability across the datacenter environment. The goal was to take a subset of servers from the centralized patching service in Configuration Manager and patch them through Azure Update Management. They created Scheduled Deployments within the Azure Update Management solution that used the same maintenance windows as those used by Configuration Manager. After validating the servers’ prerequisites, they moved the servers into the deployments so that during that maintenance window, Azure Update Management was patching the servers instead of Configuration Manager.
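Reusing the existing windows amounts to grouping migrated servers by their Configuration Manager maintenance window, with one scheduled deployment per window. The window names and server assignments below are illustrative.

```python
# Sketch: grouping migrated servers into scheduled deployments that reuse
# the maintenance windows Configuration Manager had been using. Window
# names and assignments are illustrative.
from collections import defaultdict

configmgr_windows = {
    "web-01": "Sat 02:00-06:00",
    "web-02": "Sat 02:00-06:00",
    "sql-01": "Sun 01:00-05:00",
}

def build_deployments(window_map):
    """One Azure Update Management deployment per maintenance window."""
    deployments = defaultdict(list)
    for server, window in sorted(window_map.items()):
        deployments[window].append(server)
    return dict(deployments)

print(build_deployments(configmgr_windows))
```

Keeping the windows identical meant application owners saw no change in when their servers patched, only in which engine did the patching, which kept the cutover low-risk.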

With that successful scalability exercise completed, the final step was to turn off Configuration Manager as the centralized service’s “patching engine.” Microsoft Digital had set a specific deadline for this transformation, and right on time the team turned off the Software Update Manager policy in Configuration Manager. This ensured that Configuration Manager would no longer be used for patching activities but would still be available for other functionality.

After the transition was complete, the Manageability Team monitored closely to ensure that decentralization did not negatively affect compliance. In almost every month since the transition, the Microsoft Digital organization has consistently achieved the 95 percent compliance goal. 

Refining update management

We’re now hard at work on the next evolution in our Azure Update Management journey to even further optimize operational costs, accelerate patch compliance, and improve the end-to-end patching experience. Most recently, we’ve implemented automated notifications that send emails and create tickets when servers are not compliant, so that teams can quickly remediate.
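The notification logic can be sketched as a threshold check over the compliance data: any server with patches pending past a grace period generates an alert for its owning team. The message format and the 7-day threshold are assumptions for illustration, not the production implementation.

```python
# Sketch: generating notifications for noncompliant servers, in the spirit
# of the automated emails and tickets described above. The threshold and
# message format are assumptions.
def build_notifications(servers, max_days_pending=7):
    """Emit one notification per server past the pending-patch threshold."""
    alerts = []
    for s in servers:
        if s["days_pending"] > max_days_pending:
            alerts.append({
                "server": s["name"],
                "owner": s["owner"],
                "message": (f"{s['name']} has patches pending for "
                            f"{s['days_pending']} days; please remediate."),
            })
    return alerts

fleet = [
    {"name": "app-01", "owner": "team-a@contoso.com", "days_pending": 12},
    {"name": "app-02", "owner": "team-a@contoso.com", "days_pending": 2},
]
alerts = build_notifications(fleet)
print(len(alerts))  # only app-01 crosses the 7-day threshold
```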

Microsoft Digital Employee Experience will continue to build tools and automation that improve the patching experience and increase compliance. We’re evaluating, adapting, and providing our engineering teams with guidance as new features are released into the Azure Update Management service.


How Microsoft is modernizing its internal network using automation
http://approjects.co.za/?big=insidetrack/blog/how-microsoft-is-modernizing-its-internal-network-using-automation/
Wed, 11 Dec 2019 23:20:08 +0000

After Microsoft moved its workload of 60,000 on-premises servers to Microsoft Azure, employees could set up systems and virtual machines (VMs) with the push of a few buttons.

Although network hardware has changed over time, the way that network engineers work isn’t nearly as modern.

“With computers, we have modernized our processes to follow DevOps processes,” says Bart Dworak, a software engineering manager on the Network Automation Delivery Team in Microsoft Digital. “For the most part, those processes did not exist with networking.”

Two years ago, Dworak says, network engineers still created and ran command-line-based scripts and created configuration change reports.

“We would sign into network devices and submit changes using the command line,” Dworak says. “In other, more modern systems, the cloud provides desired-state configurations. We should be able to do the same thing with networks.”
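Desired-state configuration for a network device boils down to a diff: compare the running configuration against the desired one, and compute which lines to add and which to remove. The config lines below are illustrative, not a real device's syntax.

```python
# Sketch: a desired-state check for a network device, computing which
# configuration lines must change to converge the running config on the
# desired config. Config lines are illustrative.
def config_diff(running, desired):
    """Return the lines to add and remove, preserving desired order."""
    to_add = [line for line in desired if line not in running]
    to_remove = [line for line in running if line not in desired]
    return to_add, to_remove

running = ["hostname sw-bldg1", "ntp server 10.0.0.1", "snmp community old"]
desired = ["hostname sw-bldg1", "ntp server 10.0.0.2"]

to_add, to_remove = config_diff(running, desired)
print(to_add)     # lines missing from the device
print(to_remove)  # lines that should no longer be present
```

The key shift is directional: instead of engineers typing commands at devices, a tool continuously drives each device toward a declared state, so drift becomes something detected and corrected rather than discovered during an outage.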

It became clear that Microsoft needed modern technology for configuring and managing the network, especially as the number of managed network devices increased on Microsoft’s corporate network. This increase occurred because of higher network utilization by users, applications, and devices as well as more complex configurations.

“When I started at Microsoft in 2015, our network supported 13,000 managed devices,” Dworak says. “Now we’ve surpassed 17,000. We’re adding more devices because our users want more bandwidth as they move to the cloud so they can do more things on the network.”

[Learn how Microsoft is using Azure ExpressRoute hybrid technology to secure the company.]

Dworak and the Network Automation Delivery Team saw an opportunity to fill a gap in the company’s legacy network-management toolkit. They decided to apply the concept of infrastructure as code to the domain of networking.

“Network as code provides a means to automate network device configuration and transform our culture,” says Steve Kern, a Microsoft Digital senior program manager and leader of the Network Automation Delivery Team.

The members of the Network Automation Delivery Team knew that implementing the concept of network as code would take time, but they had a clear vision.

“If you’ve worked in a networking organization, change can seem like your enemy,” Kern says. “We wanted to make sure changes were controlled and we had a routine, peer-reviewed rhythm of business that accounted for the changes that were pushed out to devices.”

The team has applied the concept of network as code to automate processes like changing the credentials on more than 17,000 devices at Microsoft, which now occurs in days rather than weeks. The team is also looking into regular telemetry data streaming, which would inform asset and configuration management.
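Rotating credentials across 17,000 devices safely is mostly a batching problem: push the change in waves and halt the rollout the moment a batch reports a failure, so a bad change never reaches the whole fleet. The rotation call below is a stand-in for the real device API, and the batch size is an assumption.

```python
# Sketch: rotating credentials across a large device fleet in batches, so
# a bad change is caught early rather than pushed to every device at once.
# rotate_fn is a stand-in for the real device management API.
def rotate_in_batches(devices, batch_size, rotate_fn):
    """Rotate batch by batch; stop at the first batch with any failure."""
    rotated = []
    for i in range(0, len(devices), batch_size):
        batch = devices[i:i + batch_size]
        results = [(d, rotate_fn(d)) for d in batch]
        rotated.extend(d for d, ok in results if ok)
        if not all(ok for _, ok in results):
            break  # halt the rollout and let engineers investigate
    return rotated

devices = [f"switch-{n:03d}" for n in range(10)]
# Simulated rotation: switch-007 fails, halting the rollout after its batch.
done = rotate_in_batches(devices, 4, lambda d: d != "switch-007")
print(done)
```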

“We want network devices to stream data to us, rather than us collecting data from them,” Dworak says. “That way, we can gain a better understanding of our network with a higher granularity than what is available today.”

The Network Automation Delivery Team has been working on the automation process since 2017. To do this, the team members built a Git repository and started with simple automation to gain momentum. Then, they identified other opportunities to apply the concept of GitOps—a set of practices for deployment, management, and monitoring—to deliver network services to Microsoft employees.

Implementing network as code has led to an estimated savings of 15 years of labor and vendor spending on deployments and network device changes. As network technology shifts, so does the role of network engineers.

“We’re freeing up network engineers so they can build better, faster, and more reliable networks,” Kern says. “Our aspiration is that network engineers will become network developers who write the code. Many of them are doing that already.”

Additionally, the team is automating how it troubleshoots and responds to outages. If the company’s network event system detects that a wireless access point (AP) is down, it will automatically conduct diagnostics and attempt to address the AP network outage.

“The building AP is restored to service in less time than it would take to wake up a network engineer in the middle of the night, sign in, and troubleshoot and remediate the problem,” Kern says.
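That triage flow can be sketched as a short decision chain: run cheap diagnostics first, attempt remediation, and only escalate to a human when automation can't recover the access point. The specific checks and actions here are illustrative stand-ins, not the production runbook.

```python
# Sketch: automated triage for a downed wireless access point -- diagnose,
# attempt recovery, and escalate only on failure. Checks and actions are
# illustrative stand-ins for the real diagnostics.
def remediate_ap(ap, ping_ok, port_up, restart_fn):
    """Try automated recovery; return the action taken."""
    if ping_ok(ap):
        return "no-action"             # AP answered; the alert was transient
    if not port_up(ap):
        return "escalate:switch-port"  # upstream problem, needs a human
    if restart_fn(ap):
        return "restarted"             # power-cycled and back in service
    return "escalate:hardware"         # restart failed; likely bad hardware

# Simulated outage: AP unreachable, switch port up, restart succeeds.
action = remediate_ap(
    "ap-bldg1-42",
    ping_ok=lambda ap: False,
    port_up=lambda ap: True,
    restart_fn=lambda ap: True,
)
print(action)  # the AP recovers without waking an engineer
```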

Network as code also applies a DevOps mentality to the network domain by applying software development and business operations practices to iterate quickly.

“We wanted to bring DevOps principles from the industry and ensure that development and operations teams were one and the same,” Kern says. “If you build something, you own it.”

In the future, the network team hopes to create interfaces for each piece of network gear and have application developers interact with the API during the build process. This would enable the team to run consistent deployments and configurations by restoring a network device entirely from a source-code repository.

Dworak believes that network as code will enable transformation to occur across the company.

“Digital transformation is like remodeling a house. You can remodel your kitchen, living room, and other parts of your house, but first you have to have a solid foundation,” he says. “Your network is part of the foundation—transforming networking will allow others to transform faster.”
