cloud migration Archives - Inside Track Blog
http://approjects.co.za/?big=insidetrack/blog/tag/cloud-migration/ | How Microsoft does IT

Microsoft uses a scream test to silence its unused servers
http://approjects.co.za/?big=insidetrack/blog/microsoft-uses-a-scream-test-to-silence-its-unused-servers/ | Sat, 17 Aug 2024

Microsoft Digital Perspectives

Do you have unused servers on your hands? Don’t be alarmed if I scream about it—it’ll be for a good reason (and not just because it’s almost Halloween)!

I talked previously about our efforts here in Microsoft Digital to inventory our internal-to-Microsoft on-premises environments to determine application relationships (mapping Microsoft’s expedition to the cloud with good cartography) as well as look at performance info for each system (the awesome ugly truth about decentralizing operations at Microsoft with a DevOps model).

With this info, it was time to begin making plans to move to the cloud. Looking at the data, our overall CPU usage for on-premises systems was far lower than we thought—averaging around six percent! We realized this was so low due to many underutilized systems. First things first, what to do with the systems that were “frozen,” or not being used, based upon the 0-2 percent CPU they were utilizing 24/7?

We created a plan to closely examine those assets with the goal of moving as few as possible. We used our home-built configuration management database (CMDB) to check whether there was a recorded owner. In some cases, we were able to work with that owner and retire the system.

Before we turned even one server off, we had to be sure it wasn’t being used. (If a server is turned off and no one is there to see it, does it make a sound?)

Developing a scream test

Apple sits at a table while holding a paper airplane and talks to someone off-screen
Pete Apple, a cloud services engineer in Microsoft Digital, shares how Microsoft scares teams that have unused servers that need to be turned off. (Photo by Jim Adams | Inside Track)

But what if the owner information was wrong? Or what if that person had moved on? For those, we created a new process: the Scream Test. (Bwahahahahaaaa!)

What’s the Scream Test? Well, in our case it was a multistep process (there’s a minimal automation sketch right after the list):

  1. Display the message “Hey, is this your server? Contact us!” on the sign-in splash page for two weeks.
  2. Restart the server once each day for two weeks to see whether someone opens a ticket (in other words, screams).
  3. Shut down the server for two weeks and see whether someone opens a ticket. (Again, whether they scream.)
  4. Retire the server, retaining the storage for a period, just in case.
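
For teams that want to script steps 2 and 3 instead of clicking through portals, here’s a minimal sketch of what that might look like with the Azure SDK for Python. We ran our scream test against on-premises servers at the time, so this is an assumption-laden translation of the idea to VMs that already live in Azure; the subscription ID, resource group, and VM names are placeholders.

    # A minimal scream-test sketch for Azure VMs (all names are placeholders).
    # pip install azure-identity azure-mgmt-compute
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"          # placeholder
    RESOURCE_GROUP = "scream-test-candidates"      # placeholder resource group
    SUSPECT_VMS = ["lob-app-07", "qa-sql-03"]      # placeholder names from the CMDB review

    compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def daily_restart(vm_name: str) -> None:
        """Step 2: restart the VM; anyone who depends on it will notice (and scream)."""
        compute.virtual_machines.begin_restart(RESOURCE_GROUP, vm_name).result()

    def shut_down(vm_name: str) -> None:
        """Step 3: stop (deallocate) the VM; storage is retained for step 4."""
        compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm_name).result()

    if __name__ == "__main__":
        for vm in SUSPECT_VMS:
            daily_restart(vm)  # run this on a daily schedule during the two-week window

Run the restart loop on whatever scheduler you already trust; the point is that every step gives the owner a chance to scream before anything irreversible happens.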

With this effort, we were able to retire far more unused servers—around 15 percent—than we had expected, without worrying about moving them to the cloud. Winning! We also were able to reclaim more resources on some of the Hyper-V hosts that were slated to continue running on-premises. And as a final benefit, we cleaned up our CMDB a bit!

In parallel, we initiated an effort to look at some of the systems that were infrequently used or used a very low level of CPU (less than 10 percent, or “Cold”). From that, we had two outcomes that proved critical for our successful migration to the cloud.

The first was to identify the systems in our on-premises environments that were oversized. People had purchased physical machines or sized virtual machines according to what they thought the load would be, and either that estimate was incorrect or the load diminished over time. We took this data and created a set of recommended Azure VM sizes for every on-premises system to use for migration. In other words, we downsized on the way to the cloud versus after the fact.
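
Our recommendation logic at the time lived mostly in spreadsheets, but the core idea is easy to sketch. The thresholds, headroom factor, and Azure VM size names below are illustrative assumptions, not the actual mapping we used.

    # Illustrative right-sizing helper: map observed peak CPU and memory to a smaller
    # Azure VM size for migration. Thresholds and size names are example values only.
    from dataclasses import dataclass

    @dataclass
    class OnPremServer:
        name: str
        vcpus: int
        memory_gb: int
        peak_cpu_pct: float   # e.g., 95th-percentile CPU over the observation window
        peak_mem_pct: float

    def recommend_size(server: OnPremServer) -> str:
        # Size for observed demand plus 30 percent headroom, instead of copying the on-prem spec.
        needed_vcpus = max(1, round(server.vcpus * (server.peak_cpu_pct / 100) * 1.3))
        needed_mem = max(2, round(server.memory_gb * (server.peak_mem_pct / 100) * 1.3))
        if needed_vcpus <= 2 and needed_mem <= 8:
            return "Standard_D2s_v5"      # example size names
        if needed_vcpus <= 4 and needed_mem <= 16:
            return "Standard_D4s_v5"
        return "Standard_D8s_v5"

    print(recommend_size(OnPremServer("lob-web-01", vcpus=8, memory_gb=32,
                                      peak_cpu_pct=12, peak_mem_pct=18)))
    # -> Standard_D2s_v5: downsized on the way to the cloud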

At the time, we did much of this work by hand because we were early adopters. Microsoft now has a number of great products that help with this inventory and review of your on-premises environment, and you should check them out. To learn more, see the documentation on Azure Migrate.

Another statistic that the data revealed was the number of systems that were used for only a few days or a week out of each month. Development machines, test/QA machines, and user acceptance testing machines reserved for final verification before moving code to production were used for only short periods. The machines were on continuously in the datacenter, mind you, but they were actually being used for only short periods each month.

For these, we investigated ways to have those systems running only when required by investing in two technologies: Azure Resource Manager templates and Azure Automation. But that’s a story for next time. Until then, happy Halloween!
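
As a preview of where that investigation went, here’s a minimal sketch of the pattern: deallocate dev/test VMs outside the days they’re actually needed and start them again on demand. It assumes the machines have already landed in Azure, and every name and schedule in it is a placeholder.

    # Illustrative start/stop pattern for dev/test VMs that are only needed a few
    # days a month. Placeholder names; run on a schedule (cron, Azure Automation, etc.).
    from datetime import date
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"             # placeholder
    RESOURCE_GROUP = "devtest-rg"                     # placeholder
    VMS_NEEDED_DAYS = {"uat-web-01": {1, 2, 3},       # days of month each VM is needed
                       "qa-sql-02": {25, 26, 27, 28}}

    compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def reconcile(today: date) -> None:
        for vm, needed_days in VMS_NEEDED_DAYS.items():
            if today.day in needed_days:
                compute.virtual_machines.begin_start(RESOURCE_GROUP, vm).result()
            else:
                # Deallocated VMs stop accruing compute charges; disks are retained.
                compute.virtual_machines.begin_deallocate(RESOURCE_GROUP, vm).result()

    if __name__ == "__main__":
        reconcile(date.today())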

Related links

Read the rest of the series on Microsoft’s move to the cloud:

Mapping Microsoft’s expedition to the cloud with good cartography
http://approjects.co.za/?big=insidetrack/blog/azure-mapping-the-journey-to-the-cloud-begins-with-good-cartography/ | Thu, 09 Nov 2023

Microsoft Digital Perspectives

When you’re charged with mapping Microsoft’s expedition to the cloud, sometimes it’s best to go back to the basics—like using an old-fashioned map to help you find your way.

An explorer in his own right, famous British film director Peter Greenaway once noted the significance of maps and cartography. “A map tells you where you’ve been, where you are, and where you’re going,” he observed. “In a sense, it’s three tenses in one.”

As a pioneer myself (in digital transformation), I couldn’t agree more. In my last blog post, I shared with you the awesomely ugly truth about how we decentralized operations at Microsoft and the intricacies and nuances we experienced as we adopted the Microsoft Azure DevOps model.

In talking to many of our customers, I know some of you are just starting out on your own cloud computing journey. So, let’s go back in time to the very beginning and share what happened the moment our leadership gave the orders to start on Microsoft’s expedition to the cloud.

Microsoft Digital, our IT organization, is split into horizontal services (compute, storage, network, security) and vertical Line of Business (LOB) teams that provide solutions to our internal end users (Finance, HR, etc.). As the horizontal, our job is to ensure that our application teams have the appropriate computing systems and that those assets are tracked for cost and inventory purposes.

For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=jMGmL0B-4YQ, select the “More actions” button (three dots icon) below the video, and then select “Show transcript.”

Pete Apple unpacks Microsoft’s journey to the cloud via Microsoft Azure and his advice for optimization and change management.

When we got the announcement from management to start moving assets to the cloud, we simply did not know where to begin. Our first thought was to grab some “low hanging fruit” by targeting servers going out of service. We took a hard look at our physical and virtual inventory and soon realized that we weren’t even sure what was there.

One of the very first lessons we learned was that you can’t understand what applications you need to move if you don’t know what applications you have.

—Pete Apple, cloud services engineer, Microsoft Digital

Apple smiles as he reads an unfolded traditional map. He’s wearing an explorer’s hat and shorts and a short-sleeved shirt.
Pete Apple shares a lighthearted moment illustrating the importance mapping played in driving Microsoft’s expedition to the cloud. Apple is a cloud services engineer in Microsoft Digital. (Photo by Jim Adams | Inside Track)

My team took this opportunity to evaluate our inventory processes and assess how inaccurate our Configuration Management Database (CMDB) was—very! We found systems in datacenters that didn’t have any records. We found records in the CMDB for systems that no longer existed. Cleaning this up became someone’s part-time job (when it really could have been a full-time one).

One of the very first lessons we learned was that you can’t understand what applications you need to move if you don’t know what applications you have.

To move forward, we broke down the inventory effort by vertical organization and paired a representative from each LOB with a designated person from our team. With the help of Microsoft Azure Service Map, we were able to scan each LOB application and identify which systems each LOB used and which other applications they relied upon, building a more robust dependency map.

This is an important step to take because, as you move applications into the cloud, systems that sit next to each other in on-premises datacenters might end up in two different Microsoft Azure datacenters, creating unexpected latency the team might not have accounted for in the design. Understanding these relationships ahead of time helps you decide which Microsoft Azure datacenter each application should go into and minimize that delay.

A good example of this is when we moved a financial database that many other applications depended upon. If we moved that critical application’s servers into the Microsoft Azure US West region, we wanted to ensure the dependent applications would end up there too, or otherwise account for the added latency of calls to that data. Similarly, if the critical database had a disaster recovery setup in the US East region, it just made sense to map the dependent applications to that same region for disaster recovery.
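
The cartography itself doesn’t need anything fancier than the dependency map you just built. Here’s a small illustrative sketch of the placement idea with made-up application names: pin the anchor systems (like that financial database) to a region first, then pull their dependents into the same region.

    # Illustrative region planning from a dependency map (application names are made up).
    # Anchor systems get a region first; dependents inherit it to avoid cross-region latency.
    from collections import deque

    dependencies = {                    # app -> apps it calls
        "expense-web": ["finance-db"],
        "invoice-api": ["finance-db"],
        "reporting": ["invoice-api"],
    }
    anchor_regions = {"finance-db": "westus"}   # placement decided for the critical system

    def plan_regions() -> dict:
        placement = dict(anchor_regions)
        pending = deque(app for app in dependencies if app not in placement)
        while pending:                  # note: assumes no dependency cycles
            app = pending.popleft()
            placed = [placement[d] for d in dependencies[app] if d in placement]
            if placed:
                placement[app] = placed[0]   # co-locate with the first placed dependency
            else:
                pending.append(app)          # revisit once its dependencies are placed
        return placement

    print(plan_regions())
    # {'finance-db': 'westus', 'expense-web': 'westus', 'invoice-api': 'westus', 'reporting': 'westus'}

The same placement table then doubles as the disaster recovery map: if the anchor fails over to US East, its dependents should follow it there.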

With this approach, we were able to begin our “cloud cartography plans” and map our inventory in on-premises datacenters and plan their final destinations for migrating into Microsoft Azure. We now knew where they had been, where they were right now, and where they needed to go!

And then…during the cartography process we discovered an interesting fact. Maybe we didn’t need to move as much as we originally thought? More on that next time…

Read the rest of Microsoft’s move to the cloud series:

Managing Microsoft Azure solutions on Microsoft’s expedition to the cloud
http://approjects.co.za/?big=insidetrack/blog/managing-microsoft-azure-solutions-on-microsofts-expedition-to-the-cloud/ | Wed, 08 Nov 2023

Microsoft Digital Perspectives

A very popular cliché used in Silicon Valley, the notion of having to “ship it and fix it and ship it again,” was all too familiar to my team as we focused our efforts on moving, managing, and monitoring solutions in Microsoft’s expedition to the cloud.

Hello again and welcome back to our blog series on how our team helped Microsoft move most of Microsoft’s internal workloads to the cloud and Microsoft Azure. My team in Microsoft Digital, the organization that powers, protects, and transforms Microsoft, is the primary horizontal infrastructure group and we’re responsible for ensuring our internal customers have servers, storage, and databases, all the hard-crunchy bits of hosting, to run the critical applications that make the company operate internally.

It became clear we were going to have to hybridize our management solution if we were going to get Microsoft’s expedition to the cloud right.

– Pete Apple, cloud services engineer, Microsoft Digital

In this blog post I want to share what it took for us to effectively migrate solutions from on-premises to the cloud while managing and monitoring them for day-to-day operations. Go here to read the first blog in our series: The learnings, pitfalls, and compromises of Microsoft’s expedition to the cloud.

When I was running the hosting environment on-premises, our physical and virtual machine (VM) footprint was spread across multiple geographic datacenters, in two primary security zones—“corporate” and “DMZ.” Corporate refers to our internally facing services that our own employees use day to day for their jobs, while the DMZ holds our partner-facing services that interact with the outside world. You might have a similar environment.

We used Microsoft System Center Operations Manager (SCOM) for monitoring and Microsoft System Center Configuration Manager (SCCM) for patching (this set of tools has been combined into Microsoft Endpoint Configuration Manager). As we started to look at moving solutions over to Microsoft Azure, it became clear we were going to have to hybridize our management solution if we were going to get Microsoft’s expedition to the cloud right.

Microsoft Azure ExpressRoute allowed us to “lift and shift” many of our on-premises VMs to the cloud as-is, so we could operate them unchanged without disrupting our users. As more and more hosts moved from on-premises into Microsoft Azure, we eventually did a lift and shift on the Microsoft System Center servers themselves, so they were also operating out of a Microsoft Azure datacenter. Fair warning—depending on the size of your environment and how quickly you’re moving VMs, there’s a tipping point somewhere past 50 percent migrated where it makes sense to move those management servers too, so think about it ahead of time.

Along the way, we learned that, in many cases, a cloud transition coincides nicely with shifting your application team to a DevOps model of deployment and management. We realized this early, which allowed us to change our technology and site reliability engineering practices in unison. For the DMZ and other internet-facing solutions, there were other options. We made sure our VMs in our internet-facing environment were within Microsoft Azure Update Management, so they stayed up to date and monitored.

Pete Apple sits at table.
Driving Microsoft’s expedition to the cloud has taught Apple many lessons that he’s happy to share with customers. (Photo by Jim Adams | Inside Track)

For teams looking to move to a modern cloud solution like PaaS or SaaS, we encourage exploring cloud-native options rather than trying to duplicate past solutions. If an application was being refactored into a cloud-native service without an operating system (and thus no SCOM/SCCM agent), we used modern monitoring solutions like Microsoft Azure Application Insights and Microsoft Azure Monitor.
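
When there’s no operating system to put an agent on, the instrumentation moves into the application itself. Here’s a minimal sketch using the Azure Monitor OpenTelemetry distro for Python; the package choice, connection string, and span names are illustrative assumptions, not what any particular team deployed at the time.

    # Minimal application-level telemetry sketch using the Azure Monitor
    # OpenTelemetry distro (pip install azure-monitor-opentelemetry).
    # The connection string and span names are placeholders.
    from azure.monitor.opentelemetry import configure_azure_monitor
    from opentelemetry import trace

    configure_azure_monitor(
        connection_string="InstrumentationKey=00000000-0000-0000-0000-000000000000"
    )

    tracer = trace.get_tracer(__name__)

    def process_invoice(invoice_id: str) -> None:
        # Each invocation shows up in Application Insights as a traced operation.
        with tracer.start_as_current_span("process_invoice") as span:
            span.set_attribute("invoice.id", invoice_id)
            # ... business logic here ...

    process_invoice("INV-12345")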

When I look back at Microsoft’s expedition to the cloud, it’s clear that we built the plane while flying it.

The evolution of moving to the cloud

Today, we in Microsoft Digital—Microsoft’s IT division—still operate a small Microsoft Endpoint Configuration Manager environment in corporate, which some teams continue to use for on-premises resources. All our Microsoft Azure resources have shifted to Azure-native management, like Azure Monitor and Azure Update Management.

We had to learn to be flexible about management solutions because there are more options than just the simple “OS patch/monitor” world that we lived with for years.

– Pete Apple, cloud services engineer, Microsoft Digital

Pete Apple tosses a paper airplane.
Moving to the cloud can feel like you’re “building the plane while you fly it,” so it’s critical that you get your management and monitoring right before you get started, says Pete Apple, a cloud services engineer in Microsoft Digital. (Photo by Jim Adams | Inside Track)

One pivotal lesson we learned early on was to share best practices across both our team and the company—that way no one had to make the same mistake twice. This helped us make sure we used the most current monitoring solutions and thinking each time we deployed a new application. For example, when one team started using Azure for management, we were able to share what they learned, including using its update management and log analytics features to improve their operations.

Additionally, once we became a hybrid operation, we had to learn to be flexible about management solutions because there are more options than just the simple “OS patch/monitor” world that we lived with for years. This transition also changed the way we handle traditional information technology infrastructure library (ITIL) change and incident management—a new set of challenges as we trekked further into the cloud, which I’ll go into next time.

Related links

Automating Microsoft Azure incident and change management on Microsoft’s move to the cloud
http://approjects.co.za/?big=insidetrack/blog/automating-microsoft-azure-incident-and-change-management-on-microsofts-move-to-the-cloud/ | Wed, 08 Nov 2023

Microsoft Digital Perspectives

Microsoft’s move to the cloud has certainly been an adventure.

New technology has enabled us to transform many of our IT processes, and in some cases make them entirely disappear. It’s also compelled us to reevaluate our operational health and ability to stay on pace with evolving operational functions such as monitoring and patching, architectures, and change management.

As we’ve moved to the cloud, we have been focusing on aligning the company’s IT services with the needs of the business under an operational model formally known as Information Technology Infrastructure Library (ITIL).

Historically, we would create one- to two-year architectures and be fine! Now, we’re evaluating exciting new features at least on a quarterly basis. Our team has had to learn to be agile—both literally and metaphorically.

– Pete Apple, cloud services engineer, Microsoft Digital

You may be surprised (and perhaps a bit relieved) to learn that, from the point of view of a services engineer, our design and management functions have probably evolved the least on Microsoft’s move to the cloud. There’s certainly new technology to understand and incorporate into our architectural designs, but the team doing that work has basically remained the same. It’s been a great opportunity to learn about Microsoft Azure and how it handles compute, storage, data, and networks.

One thing that has certainly kept us on our toes has been the ever-evolving architectural changes that happen in the cloud. The Microsoft Azure team releases new features at more frequent intervals versus the traditional releases of the past. Historically, we would create one- to two-year architectures and be fine! Now, we’re evaluating exciting new features at least on a quarterly basis. Our team has had to learn to be agile—both literally and metaphorically (referencing the Agile methodology).

Microsoft Azure enabled our operations to evolve and become more productive, with a faster service turnaround time. A good example is our change management discipline.

Over four years ago, we had many standard change requests from our internal customers. I was running the private cloud at the time, and you can imagine the number and variety of requests that came across my desk: “Create a virtual machine,” “Install SQL,” “Rebuild the operating system,” and so on. Each request was a change record in our system that was immediately assigned to a system engineer to do the work with a pressing service-level agreement (SLA) of 72 hours.

Sound familiar?

As we trekked further on Microsoft’s move to the cloud, we took a hard look at every change type in the internal catalog and automated everything that could be automated.

We reviewed the number and variety of change orders coming through and realized that with some scripting advances, System Center Orchestrator, Azure Templates, and Azure Automation, we could start automating many of these change activities. This enabled us to cut back on human error, improve the SLA, and in many cases implement a self-service approach for internal customers to deploy themselves instead of waiting on my team to implement the change manually.

Today, Microsoft Azure services are enabling Microsoft internal teams to self-service their own changes and skip the dreaded “open a ticket” model.
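
The self-service pattern is simple to sketch with the Azure SDK for Python: the requester (or a portal acting on their behalf) submits a pre-approved, parameterized ARM template and the platform deploys it, with no ticket in the loop. The template file, parameter values, and resource names below are placeholders.

    # Illustrative self-service deployment of an ARM template (placeholder values).
    # pip install azure-identity azure-mgmt-resource
    import json
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient
    from azure.mgmt.resource.resources.models import Deployment, DeploymentProperties

    SUBSCRIPTION_ID = "<subscription-id>"      # placeholder
    RESOURCE_GROUP = "selfservice-rg"          # placeholder

    client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    with open("vm-template.json") as f:        # a standard, pre-approved VM template
        template = json.load(f)

    deployment = Deployment(
        properties=DeploymentProperties(
            mode="Incremental",
            template=template,
            parameters={"vmName": {"value": "dev-vm-42"}},   # values from the requester
        )
    )

    poller = client.deployments.begin_create_or_update(
        RESOURCE_GROUP, "selfservice-vm-deployment", deployment)
    print(poller.result().properties.provisioning_state)     # e.g., "Succeeded"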

On the incident side, we also found similar ways to be more efficient.

Automating incident and change management through optimized architecture may sound a bit scary, but it’s been a real benefit to our organization.

– Pete Apple, cloud services engineer, Microsoft Digital

Apple holds up and points to a Microsoft cloud brochure while smiling at the camera.
Pete Apple, cloud services engineer in Microsoft Digital, is driving Microsoft’s operational IT transformation with Microsoft Azure services. (Photo by Jim Adams | Inside Track)

As our Microsoft Azure migrations increased, we found that our customer application developers wanted to have direct access to their Azure subscriptions to do more rapid DevOps-type deployments. This meant in many cases that they were finding and discovering issues or incidents almost instantaneously. They didn’t need to have a central team fronting incident management as much as they used to.

In response, we transitioned our incident management into a hybrid model—where the application teams can choose to have Microsoft Azure Monitoring and Application Insights alerts sent directly to them, and infrastructure alerts and outages still get forwarded to our centralized team. This has required some application teams to build the skills to handle service reliability activities themselves, and it has improved time to resolution and bug fixes for those same teams. What we’ve maintained is our centralized “escalation management” function that can help manage a major incident (or in the new nomenclature, a “LiveSite”).

Automating incident and change management through optimized architecture may sound a bit scary, but it’s been a real benefit to our organization. Removing some of the overhead in change management has cut costs in some cases by 30 to 40 percent and increased the speed of results for customers. I used to have a 48- to 72-hour SLA for building out a customer virtual machine. Now customers can spin one up in Microsoft Azure themselves in under 30 minutes!

Enabling teams to choose to receive alerts and incidents directly into their Microsoft Azure DevOps teams and escalate to central IT only when required empowers them to resolve items that impact their business more rapidly.

Unleashing Microsoft Azure and incorporating cloud patterns into architecture designs can really save time and costs for change management efforts, while improving the SLA and customer experience. But what does it mean for subscriptions and service over time? Check back with us soon as we continue the “Operationalizing the cloud” blog series and share insights and learnings from Microsoft’s move to the cloud.

Learn how Microsoft Azure services help configure and automate operational tasks across a hybrid environment, use ARM template documentation for efficient management, and provide a framework to manage the next generation of business apps and infrastructure.

Jamming to a new tune: Transforming Microsoft’s printing infrastructure with Universal Print
http://approjects.co.za/?big=insidetrack/blog/jamming-to-a-new-tune-transforming-microsofts-printing-infrastructure-with-universal-print/ | Tue, 20 Jun 2023

Microsoft Digital stories

[Editor’s note: This content was written to highlight a particular event or moment in time. Although that moment has passed, we’re republishing it here so you can see what our thinking and experience was like at the time.]

Most people don’t give much thought to printing.

In the best-case scenario, you select a button and your paper comes out. Other times, you might have to fiddle with locating printers, driver installations, and of course, the occasional paper jam. There are good reasons why this most humble of office essentials is also a common symbol of office frustrations.

Kathren is standing in front of a vase of flowers, smiling in her home office.
Kathren Korsky leads Microsoft’s Universal Print rollout project, which is making print management easier for IT administrators like Korsky. (Photo by Kathren Korsky)

IT administrators like Kathren Korsky think about printers a lot more than most.

As a senior service engineering manager for End User Services at Microsoft, Korsky oversees their organization’s printing strategy and infrastructure. That means maintaining print servers, ensuring connectivity, managing security permissions, and staying on top of compatibility issues with a broad network of third-party hardware partners.

It also means dealing with the security risk printer servers create.

How do printers create such challenges?

Before, anyone who wanted to print in a Microsoft office had to connect to Microsoft’s corporate network. That meant giving them VPN access just so they could print something.

“Corpnet is a very precious corporate asset, and VPN access ends up being a security liability,” Korsky says. “We must eliminate our print service dependency on VPN to achieve our strategic Zero Trust goals.”

Adding to these acute pains were the everyday aches of Microsoft branch offices without corpnet connections at all, where employees were severely constrained when attempting to print to a shared printer, not to mention the maintenance burden and high energy costs of running physical print servers.

Then about four years ago, Microsoft Digital began migrating all of its internal servers to the cloud, a project that transitioned 95 percent of its physical servers to Microsoft Azure virtual machines (VMs).

[Learn how Microsoft used Azure to retire hundreds of physical branch-office servers. Find out how Microsoft enabled secure and compliant engineering with Azure DevOps. Unpack seamless and secure cloud printing with Universal Print.]

Connecting printers to the cloud

Korsky’s team joined that cloud migration, and over four years they reduced the company’s 320 on-premises print servers around the world to around 80 Microsoft Azure print server VMs. The team benefited from Microsoft Azure’s security and management capabilities while achieving a print server uptime improvement to nearly 100 percent.

Korsky says the 70 hours per month their team formerly spent patching servers has been reduced to seven.

While the move to Infrastructure as a Service (IaaS) delivered great benefits for the print service, that was not enough. The team needed a solution that could work completely in the public internet space and draw on the advantages of a Platform as a Service (PaaS) approach, which would be the next step in the print service transformation.

Working together with Microsoft’s Azure + Edge Computing team, they experimented with a previous offering, Hybrid Cloud Print, but felt that more was needed to simplify the administrator’s experience.

Seeing an opportunity, Korsky and their team knew the moment was ripe for a major transformation that would not only greatly reduce their administrative overhead, but also eliminate those pesky corpnet dependencies while enabling public internet connectivity in a safe and secure way.

Working together, Microsoft Digital and Azure + Edge Computing teams built in robust management capabilities and easily accessible data insights and reporting, and a new printing experience called Universal Print was born.

As Universal Print began to roll out to groups across Microsoft, beginning with the Azure + Edge Computing team, one of the challenges was the wide variety of different brands, makes, and models of printers that would need to integrate with the service.

“We as a product group wanted to support a broad set of currently available printers in market, and some of them are quite old,” says Jimmy Wu, a senior program manager for Azure + Edge Computing who worked with Korsky’s team to deploy Universal Print into the Microsoft infrastructure. “The challenge was how do we do that when our service isn’t even publicly available at the time.”

As a solution, they created a piece of connector software that served as a communication proxy between the physical printer and the cloud service. It’s now available to customers as part of their Universal Print subscription.

With the migration and product rollout complete, Universal Print was validated in private preview by Microsoft customers who also saw a need for a cloud print service. It then moved into public preview in July.

Printers are now being published in Microsoft Azure Active Directory through a centralized portal, with little need for on-premises infrastructure or maintenance.

What’s more, the elimination of on-premises servers and all the physical space, energy consumption, and cooling systems that go with them helps support Microsoft’s commitment to become carbon negative by 2030.

For branch office managers grappling with whether to invest in costly corporate network setups, Korsky says, “it solves for some real business decisions that companies have to make about branch office locations.”

And the employee who just needs to print? They can think about it even less.

“What’s really great is that our users benefit from a seamless, familiar print experience,” Korsky says. Users click a button and their paper comes out—without all the interference of printer discovery, network permissions and driver installations standing in their way.

Universal Print in a remote world

The ability to print via the cloud has proven to be an unexpected boon to businesses and organizations who have had to quickly adapt to operating remotely.

Alan Meeus, a product marketing manager for Microsoft 365 Modern Work, says that of the more than 2,000 external customers currently testing Universal Print, many have accelerated their adoption amid COVID-19.

“Even with people working remotely, there are many use cases for why print is still important,” Meeus says. “There’s a lot of printing going on in critical industries like healthcare, manufacturing, distribution and education. In schools, some kids don’t have access to computers and they still rely a lot on printed materials.”

Universal Print has also helped enable Microsoft 365 users to perform work functions at home that they previously couldn’t.

“If our HR or payroll department needs to run checks, they can do that from home,” says Scott Hetherington, a senior systems analyst for the Wild Rose School Division in Alberta, Canada. “Being able to give them Universal Print right now has been a lifesaver. And it’s been able to help keep people safe in the face of a pandemic by keeping them home as much as possible.”

As more organizations ramp up adoption, the Universal Print team and their partners are looking forward to cultivating a circular feedback loop where they’re gathering feedback from the community and delivering the kinds of improvements customers want. They’re also working towards a longer-term vision of evolving from the IaaS cloud service model for the connector software to going completely serverless, requiring no infrastructure management at all.

For Korsky, it’s all about the growth mindset.

“This has been an amazing journey of experimentation to learn what works well and where changes are required. And we’re partnering in a more collaborative way,” Korsky says. “We took our learnings from Hybrid Cloud Print and came up with this whole new approach that is even better than we originally envisioned, and we’re having great success.”

The printing transformation is making a difference with Korsky’s peers across Microsoft.

“My team’s amazing partnerships with engineering teams across Microsoft allow us to develop impactful internal solutions that also benefit our customers,” says Dan Perkins, a principal service engineering manager in Microsoft Digital’s End User Services. “Universal Print simplifies how we manage our work and reduces the time we spend maintaining our infrastructure. It also improves the security of our print service. We are excited about what the future holds for this transformational offering.”

Related links

Designing a modern service architecture for the cloud
http://approjects.co.za/?big=insidetrack/blog/designing-a-modern-service-architecture-for-the-cloud/ | Wed, 22 Feb 2023

Microsoft Digital technical stories

The digital transformation that many enterprises are undertaking has its benefits and its challenges: while it brings new opportunities that add value to customers and help drive business, it also places demands on legacy infrastructure, making companies struggle to keep pace with the digital world’s ever-increasing speed of business. Consider an enterprise’s line-of-business (LOB) systems, such as for finance in general, or procurement and payment in particular. These business-critical systems are traditionally based on-premises, can’t scale readily, and in many cases aren’t available to mobile devices.

As we continue along our digital transformation journey here at Microsoft, we have been looking to the cloud to reinvent how we do business, by streamlining our operations and adding value to our partners and customers. This technical blog post describes how our Microsoft Digital team saw our move to the cloud as an opportunity to completely rethink how we architect and run our core finance processes when they’re built on a modern architecture. Here, we discuss the thought processes and drivers behind the approach that we took to design a new service architecture for our Finance department’s Procure-to-Pay service.

Evolving from an apps-oriented to a services-focused architecture

Financial systems need to be secure by their nature. Moreover, their designs are typically influenced by an organizational culture that is understandably risk averse, so the concept of moving sensitive financial processes to the cloud can be especially challenging. The technical challenges are equally significant: Consider the transactional nature of financial systems, their real-time transactional data processing, auditing frequency and scale, and the numerous regulatory aspects that are associated with financial operations.

At Microsoft, many of our core business processes (such as procurement and payment) have traditionally been built around numerous monolithic, standalone apps. Each of these apps was siloed in its own on-premises environment, used its own copy of data, and presented one or more interfaces, often disconnected from each other. Without a unifying, overarching strategy, each of these apps evolved independently on an ad hoc basis, updating as circumstances required without considering impacts on other parts of the Procure-to-Pay process.

These complex and unwieldy apps required significant resources to maintain, and their redundant data led to inconsistent key performance indicators (KPIs) that were based on different underlying data sets. Furthermore, the user experience suffered because there wasn’t a single end-to-end process for Procure-to-Pay. Instead, people had to work within several different apps—each with its own interface—to complete a task, forcing users to learn to navigate through many different user experiences as they attempted to complete each step. The overall process was made even more cumbersome because people still had to complete manual steps in between certain apps. This in turn slowed completion of every Procure-to-Pay instance and was expensive to maintain.

At Microsoft Digital, our ongoing efforts to shift services to the cloud gave our Microsoft Finance Engineering team an opportunity to completely rethink how to approach Procure-to-Pay by designing a cloud-based, services-oriented architecture for the Finance department’s procurement and payment processes. This modern, cloud-based service, known as Procure-to-Pay, would focus on the end-to-end user experience and would replace the app-centric view of the legacy on-premises systems. Additionally, the cloud-based service would utilize Microsoft Azure’s inherent efficiencies to reduce capital expenditure costs, scale dynamically, and promote referencing of certified master data instead of copying data sets as the legacy apps did.

In this part of the case study, we describe some key principles that we followed when designing our new service-based architecture, and then provide more insight into the architecture’s data, API, and UI.

[Learn how DevOps is sending engineering practices up in smoke. Get more Microsoft Azure architecture guidance from us.]

Principles of a service-based architecture

We started this initiative by defining the key principles that would guide our approach to this new architectural design. These principles included:

  • Focus on the end-to-end experience by creating an overarching user experience (UX) layer that developers can use to connect different services and present a unified user experience.
  • Design as cloud first, mobile first to gain the cost and scalability benefits associated with cloud-based services, and to improve end user productivity.
  • Maintain single master copies of data with designated data owners to ensure quality while reducing redundancy.
  • Develop with efficiency and cost-effectiveness at every step to reduce Microsoft Azure-based compute time costs.
  • Decouple UI from business functionality by defining separate layers for UI, business functionality, and data storage within each service.
  • Utilize flighting with early adopters and other participants to reduce change-management risk.
  • Automate as much as possible, identifying the manual steps that users had to take when working with the old on-premises apps and determining how to automate them as part of the new end-to-end user experience.

In the next few sections, we provide additional insights into how we applied these principles from UI, data, and API perspectives as we designed our new architectural model and used it to build our Procure-to-Pay service.

Emphasizing a holistic, end-to-end user experience

When we surveyed our legacy set of on-premises apps, we discovered a significant overlap of functionality between them. Our approach with the new architectural model was to break down the complete feature set within each app to separate core functionality from duplicated features.

We used this information to consolidate the 36 standalone legacy apps into an architecture that comprises 16 discrete services, each with a unique set of functionality, presentation layer, APIs, and master data. On top of these 16 unique services, we defined an overarching End-to-End User Experience layer that developers can use to create a singular, unified experience that can span numerous services.

As the graphic below illustrates, our modern architecture utilizes a modular approach to services that promotes interconnectivity. Because users interact with the services at the End-to-End User Experience layer, they experience a consistent and unified sequence of events in a single interface. Behind the scenes, developers can connect APIs in one service to another to access the functionality they require, or transparently pass the user from one service’s UI to another as needed to complete the Procure-to-Pay process.

Illustration of the Microsoft Finance department's new cloud-based architecture, known as Procure-to-Pay, with 16 vertical services.
Our modern architecture model defines 16 unique vertical services, each with its associated UI, API, and data layers. It also provides an overarching user experience layer that developers can use to connect any combination of the services into a seamless, end-to-end user experience.

Another critical aspect of providing an end-to-end experience is automating the front- and back-office operations (such as support) as much as possible. To support this automation, our architecture incorporates a Procure-to-Pay Support layer underneath all the services. Developers can integrate support bots into their Procure-to-Pay services to monitor user activity and proactively offer guidance when deemed appropriate. Moreover, if the support bot can’t quickly resolve the issue, it will silently escalate to a human supervisor who can interact with the user within the same support window. Our objective is to make the support experience so seamless that users don’t recognize when they are interacting with a bot vs. a support engineer.

All these connections and data referencing are hidden from the user, resulting in a seamless experience that can be expressed as a portal, a mobile app, or even as a bot.

Consolidating data to support end-to-end experiences

One ongoing challenge that we experienced with our siloed on-premises apps was how each app utilized its own copy of data, resulting in wasted storage space and inconsistent analytics due to the variances between data sets. In contrast, the new architectural data model had to align with our principle of maintaining single, master copies of data that any service could reference. This required forming a new Finance data lake to store all the data.

The decision to create a data lake required a completely new mindset. We decided to shift away from the traditional approach, in which we needed to understand the nature of each data element and how it would be implemented in a solution. Today, our strategy is to place all data into a single repository where it can be available for any potential use—even when the data has no current apparent utility. This approach recognizes the inherent value of data without having to map each data piece to an individual customer’s requirements. Moreover, having a large pool of readily available, certified data was precisely what we needed to support our machine learning (ML) and AI-based discovery and experimentation—processes that require large amounts of quality data that had been unavailable in the old siloed systems.

After we formed the Finance data lake, we defined a layer in our architecture to support different types of data access:

  • Hot access is provided through the API layer (described later in this case study) for transactional and other situations that require near real-time access to data.
  • Cold/warm access is used for archival data that is one hour old or older, such as for machine learning or running analytics reports. This is a hybrid model, where we can access data that is as close to live status as possible without accessing the transaction table, but also perform analytics on top of the most recent cold data.

By offering these different types of access, our new architectural model streamlines how people can connect data sources from different places and for different use scenarios.

Designing enterprise services in an API economy

In the older on-premises apps, the tight coupling of UI and functionality forced users to go through each app’s UI just to access the data. This type of design provided a very poor and disjointed user experience because people had to navigate many different tools with different interfaces to complete their Procure-to-Pay task.

One of the most significant changes that we made to business functionality in our new architectural model was to completely decouple business functionality from UI. As the architecture graphic above illustrates, our new architectural model has clearly defined layers that place all business functionality in a service’s API layer. This core functionality is further broken down into very small services that perform specific and unique functions; we call these microservices.

With this approach, any microservice within one service can be called by other services as required. For example, a link-validation microservice can be used to verify employee, partner, or supplier banking details. We also recognized the importance of making these microservices easily discoverable, so we took an open-source approach and published details on Swagger about each microservice. Internal developers can search for internal APIs for reuse, and external developers can search for public APIs.
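
To illustrate how small these microservices really are, here’s a hedged sketch of a validation endpoint in that style. FastAPI is an assumed framework choice for the example, and the route, fields, and validation rule are made up rather than taken from the actual Procure-to-Pay service.

    # Illustrative microservice: validates supplier banking details behind a single,
    # narrowly scoped API. Framework (FastAPI), route, and fields are assumptions.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI(title="banking-details-validation")

    class BankingDetails(BaseModel):
        account_holder: str
        iban: str

    class ValidationResult(BaseModel):
        valid: bool
        reason: str | None = None

    @app.post("/validate", response_model=ValidationResult)
    def validate(details: BankingDetails) -> ValidationResult:
        # Toy rule standing in for real validation logic (checksum, partner lookup, etc.).
        iban = details.iban.replace(" ", "").upper()
        if len(iban) < 15 or not iban[:2].isalpha():
            return ValidationResult(valid=False, reason="IBAN format not recognized")
        return ValidationResult(valid=True)

Because the rule lives entirely behind the API, any other service can call it without caring how it’s implemented, which is exactly the decoupling this architecture is after.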

As an example, the below image illustrates the usage scenario for buying a laptop, where the requester works through the unified User Experience layer. What is hidden to the user is how multiple services including Catalog Management, Purchase Experience, and Purchase Order interact as needed to pass data and hand off the user transparently from service to service to complete the Procure-to-Pay task.

An illustration depicting how the Microsoft Finance department's Procure-to-Pay service works for a user buying a laptop.
An example usage scenario for buying a laptop, illustrating how the person requesting a new computer works through the unified End-to-End User Experience layer while multiple services work transparently in the background to complete the end-to-end Procure-to-Pay task.

When defining our modern architecture, we wanted to minimize the risk that an update to microservice code might impact end-to-end service functionality. To achieve this, we defined service contracts that map to each API, and how the data interfaces with that API. In other words, all business functionality within the service must conform to the contract’s terms. This allows developers to stub a service with representative behaviors and payloads that other teams can consume while the service code is being updated. Provided the updates are compliant with the contract, the changes to the code won’t break the service.

Finally, our new cloud-based modern architecture gave us an opportunity to improve the user experience by specifying a single sign-on (SSO) event throughout the day, irrespective of how many services a user touches during that time. The key to supporting SSO was to leverage the authentication and authorization processes and protocols that are built into Microsoft Azure Active Directory.
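
From a client’s point of view, that single sign-on reuse looks roughly like the sketch below, written with MSAL for Python as an assumed library choice; the client ID, tenant, and scope are placeholders. The first service a user touches triggers one interactive sign-in, and later calls that day pick up a token silently from the cache.

    # Illustrative single sign-on token reuse with MSAL for Python (pip install msal).
    # Client ID, tenant, and scope are placeholders.
    import msal

    APP = msal.PublicClientApplication(
        client_id="<client-id>",
        authority="https://login.microsoftonline.com/<tenant-id>",
    )

    def get_token(scopes: list) -> str:
        accounts = APP.get_accounts()
        if accounts:
            # Silent acquisition: reuses the cached sign-on, no new prompt.
            result = APP.acquire_token_silent(scopes, account=accounts[0])
            if result and "access_token" in result:
                return result["access_token"]
        # First touch of the day: one interactive sign-in, cached afterwards.
        result = APP.acquire_token_interactive(scopes)
        return result["access_token"]

    token = get_token(["api://procure-to-pay-example/.default"])  # placeholder scope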

Benefits

Following are some of the key benefits that our Microsoft Digital team is experiencing by building our Procure-to-Pay service on our modern cloud-based architecture.

  • Vastly improved user experience. The new Procure-to-Pay service has streamlined the procurement and payment process, providing a single, end-to-end user experience with a single sign-on event that replaces 36 legacy apps and automates many steps that used to require manual input. In internal surveys, employees are reporting a significant improvement in satisfaction scores across the enterprise: users are happier working with the new service, engineers can more easily troubleshoot issues, and feature updates can be implemented in days instead of months.
  • Better compliance. We now have full governance over how our data is being accessed and distributed. The shift to a single Finance data lake with single copies of certified master data and clear ownership of that data, ensures that all processes are accessing the highest-quality data—and that the people accessing that data are authorized to do so.
  • Better insights. Now that our KPIs are all based on the certified master data, we’ve improved our analytics accuracy by ensuring that all analysis is based on the same master data sets. This in turn enables us to ask the big questions of our collective data, to gain insights and help the business make appropriate data-driven decisions.
  • On-demand scaling. The natural rhythm of Finance operations imposes high demand during quarterly and annual report periods, while requiring fewer resources at other times. Because our architecture is based in the cloud, we utilize Microsoft Azure’s native ability to dynamically scale up to support peaks in processing and throttle processing resources when demand is low.
  • Significant cost and resource savings. Building our new Procure-to-Pay service on a modern, cloud-based architecture is resulting in cost and resource savings through the following mechanisms:
    • Decommissioned physical on-premises servers: We’ve decommissioned the expensive, high-end physical and virtual servers that used to run the 36 on-premises apps and replaced them with our cloud-based Procure-to-Pay service. This has reduced our on-premises virtual machine footprint by 80 percent.
    • Reduced code maintenance costs: In addition to decommissioning the on-premises apps’ servers, we no longer need to spend significant development time maintaining all the brittle custom code in the old siloed apps.
    • Drastic reduction of compute charges: Our cloud-based Procure-to-Pay service has several UIs that can be parked and stored very cost effectively as BLOBs until the UIs are needed. This completely avoids any compute-based charges until a UI is required and is then launched on demand.
    • Reduction in support demand: Our bot-driven self-serve model automatically resolves many of our users’ basic support issues, freeing up our support engineers to focus on more critical issues. We estimate a 20 percent reduction in run cost by decommissioning our Level 3 support line, and a 40 percent reduction in overall Procure-to-Pay related support tickets.
    • Better utilization of computing resources: Our old on-premises apps incurred huge capital expenditure costs when purchasing their high-end hardware and licenses for servers such as Microsoft SQL Server. With a planning and implementation period that might take months, machines were typically overbuilt and underutilized because we would plan for an approximate 10 times capacity to account for growth. Later, the excess capacity wouldn’t be sufficient, and we would have to repeat this process to purchase newer hardware with even greater capacity. The new architecture has eliminated capital expenditures for Procure-to-Pay, favoring the more efficient, scalable, and cost-effective Microsoft Azure cloud environment. We’re also utilizing our data storage more efficiently. It is less costly to store data in the cloud, and storing a single master copy of data in our Finance data lake removes all the separate copies of the same data that each legacy app would maintain.
    • Better allocation of personnel: Previously, our Engineering team had to review the back-end systems and build queries to cater to each team’s needs. Consolidating all data to the Finance data lake in our new system enables people to create their own Microsoft Power BI reports on top of the data, modify their analyses to form new questions, and derive insights that might not have appeared otherwise. As a result, our engineering resources can be reallocated to support more strategic functions.
  • Simplified testing and maintenance. We use Microsoft Azure’s out-of-the-box synthetics to test each function within our microservices programmatically, which is a much easier and more comprehensive approach than physically testing each monolithic app in a reactive state to assess its health. Similarly, Azure’s service clusters greatly streamline our maintenance efforts, because we can deploy many instances of different services to achieve a higher density. Moreover, we now utilize a single cluster for all our preproduction environments. We no longer need to maintain separate development, system test, staging, and production environments.

Key takeaways

We on the Microsoft Digital team learned some valuable best practices as we designed our modern cloud-based architecture:

  • Achieving a modern architecture starts with asking the big questions: Making the shift from large, unwieldy standalone on-premises apps to a modern, cloud-based services architecture requires some up-front planning. Assemble the appropriate group of stakeholders and gain consensus on the following questions: What type of architecture do we want? Where do we want to have global access to resources? What types of data should be stored locally, and under what circumstances? When and how do we programmatically access data that we don’t own to mitigate, minimize, or entirely remove data duplication? How can we ensure what we’re building is the most efficient and cost-effective solution?
  • Identify where your on-premises apps are in their lifecycle when deciding whether to “lift-and-shift”: If you’re dealing with an app or service that is nearing its sunset phase and you only need to place it into the cloud for a short period while you transition to something newer, consider the “lift-and-shift” approach where your primary objective is to run the exact same system in the cloud. For systems that are expected to have a longer lifecycle, you’ll reap greater rewards by rethinking your service architecture with a platform as a service (PaaS) mindset from the start.
  • Design your architecture for engineering rigor and agility. Look for longer-term value based on strategic planning to make the most of your transition to the cloud. At Microsoft, this was the key determination that guided our new architecture’s development: Reimagine how our core processes can be run when they’re built on a modern service architecture. For us, this included being mobile first and cloud first, and shifting from waterfall designs to adopting agile practices. It also entailed making security a first thought in architectural design instead of a last thought, and designing the continuous integration/continuous deployment (CI/CD) pipeline.
  • Keep cost efficiency in mind. From the very first line of code, everyone involved in developing your new services should strive to make each component as efficient and cost effective as possible. At Microsoft, this development principle is why we mandated a serverless compute model with no static environments and supported “parking” inactive code or UI inside BLOBs when they weren’t needed. This efficiency is also a key reason behind our adopting Microsoft Azure resource groups to minimize the effort required to switch between stage and production environments.
  • Put everything into your data lake. Cloud-based storage is inexpensive. When organizations look to the cloud as their primary storage solution, they no longer need to expend effort collecting only the data that they think everyone wants, especially because, in reality, everyone wants something different. At Microsoft, by creating the Finance data lake and shifting our mindset to store all master data there, irrespective of its anticipated use, we eliminated the resources we would traditionally spend analyzing each team's data requirements. Today, we focus on identifying data owners and certifying the data. We can then address the data of interest when a customer makes a specific request.
  • Incorporate telemetry into your architecture to derive better insights from your data. Your data-driven decisions are only as good as your data. In our old procurement and payment system at Microsoft, we didn’t know who was using the old data and for what reasons, or even how much it was costing us. With the new Procure-to-Pay service based on our modern architecture, we have telemetry capabilities inside everything we build. This helps with service health monitoring. We also incorporate this information into our feature and service decision-making processes as we continually improve Procure-to-Pay.
  • Promote your new architectural model to gain adoption. You can define a new architectural design, but if you don’t promote it in a way that demonstrates its value, developers will hesitate to use it. At Microsoft, we published details about how developers could tap into this new architecture to create more intuitive and user-friendly end-to-end experiences that catered to their end users. This internal open-source approach creates a collaborative environment that encourages developers to join in and access the data they need, and then apply it to their own end-to-end user experience wrapper.
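
As a rough illustration of the serverless mandate described above, here is what provisioning a consumption-plan function app can look like with Azure PowerShell. This is a sketch only: it assumes the Az and Az.Functions modules, and every resource name is a placeholder rather than anything the Procure-to-Pay team actually uses.

```powershell
# Sketch: a consumption-plan (serverless) function app with no always-on environment.
# Assumes the Az and Az.Functions modules; all names below are placeholders.
Connect-AzAccount

New-AzResourceGroup -Name "rg-p2p-serverless" -Location "westus2" -Force

# A storage account backs the function app; the name must be globally unique.
New-AzStorageAccount -ResourceGroupName "rg-p2p-serverless" -Name "stp2pserverless01" `
    -Location "westus2" -SkuName "Standard_LRS"

# Omitting a plan creates the app on the consumption (pay-per-execution) plan.
New-AzFunctionApp -ResourceGroupName "rg-p2p-serverless" -Name "func-p2p-invoices" `
    -StorageAccountName "stp2pserverless01" -Location "westus2" `
    -Runtime "PowerShell" -FunctionsVersion 4
```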

At Microsoft, rethinking our approach to services with this cloud-based modern architecture is helping us become a data-driven organization. By consolidating our data into a single data lake and providing an API layer that enables rapid development of end-to-end procurement and payment services, we’ve created a self-serve platform where anyone can consume the certified data and present it in a seamless, end-to-end manner to the user, who can then derive insights and make data-driven decisions.

Our next steps

The Procure-to-Pay service is just one cloud-based service that we built on top of our modern architecture. We’re continuing to mature this service, but we’re also exploring additional end-to-end services that can benefit other Finance processes to the same extent that Procure-to-Pay has modernized procurement and payment.

This new model doesn’t have to be restricted to Finance; our approach has the potential to benefit the entire company. The guiding principles we followed to define our Finance architecture align closely with our leadership’s digital transformation vision. That is why we’re also discussing how we might help other departments outside Finance adopt the same architectural model, build their own end-to-end user experiences, and reap similar rewards.


The post Designing a modern service architecture for the cloud appeared first on Inside Track Blog.

Powering Microsoft’s operations transformation with Microsoft Azure http://approjects.co.za/?big=insidetrack/blog/powering-microsofts-operations-transformation-with-microsoft-azure/ Thu, 10 Nov 2022 19:07:14 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=8963 In any digital transformation, technology and culture changes go together, and our ongoing operations transformation here at Microsoft is no different. As a company, we have evolved from using a process-centered, rigid, manual operations model with a disconnected customer experience. We moved to a Microsoft Azure-based model that uses modern engineering principles such as scalability, […]

In any digital transformation, technology and culture changes go together, and our ongoing operations transformation here at Microsoft is no different.

As a company, we have evolved away from a process-centered, rigid, manual operations model with a disconnected customer experience and toward a Microsoft Azure-based model built on modern engineering principles such as scalability, agility, and self-service, all focused on the customer experience.

Our Microsoft Digital Employee Experience (MDEE) team is leading the company on a bold, three-step strategy to build best-in-class platforms and productivity services for the mobile-first, cloud-first world. This strategy harmonizes the interests of users, developers, and IT.

To effectively deliver on the strategy, we needed to rethink our infrastructure and operations platforms, tools, engineering methods, and business processes to create a collaborative organization that can deliver cohesive and scalable solutions.

[Explore instrumenting ServiceNow with Azure Monitor. | Discover modernizing enterprise integration services using Azure. | Unpack implementing Azure cost optimization for the enterprise.]

Our operations history

Like most IT organizations, our traditional hosting services were mostly physical, on-premises environments that consisted of servers, storage, and network devices. Most of the devices were owned and maintained for specific business functions. The technologies were very diverse and needed specialized skills to design, deploy, and run.

Traditional IT technologies, processes, and teams

Server technologies included discrete servers and densely built computing racks with blade servers. Storage technologies used direct-attached storage (DAS) and storage area networks (SANs). Networks used a variety of technologies, from simple switches to more advanced load balancers, encryption, and firewall devices. Platform technologies ranged from Windows, SQL Server, BizTalk, and SharePoint farms to third-party solutions such as SAP and other information security–related tool sets. Server virtualization evolved from Hyper-V to System Center Virtual Machine Manager and System Center Orchestrator.

To provide a stable infrastructure, we needed a structured framework, such as the IT Infrastructure Library/Microsoft Operations Framework (ITIL/MOF). Policies, processes, and procedures in the framework helped to enforce, control, and prevent failures. Engineering groups that used hosting services had a similar adoption process for their application and service needs, based on ITIL/MOF and combined with a software development life cycle (SDLC)/waterfall framework.

Teams formed naturally around people with similar core strengths in the ITIL areas of service strategy, service design, service operations, and service transition, as shown in the graphic below.

Illustration of how teams naturally formed around people with similar strengths in key ITIL areas, including strategy, design, and more.
Traditional IT teams formed around the core of ITIL service areas.

Traditional hosted environments relied on external sources of space, power, connectivity, hardware, and software. And the technologies behind these sources evolved slowly. A common framework of policies and procedures helped bring teams together to refine and unify procedures. Tools were developed to formalize, track, audit, and measure procedures. The culture of the organization helped build a process-oriented, structured way of getting things done.

Challenges of traditional IT

Although ITIL/MOF helped streamline some processes, the complexities, constraints, and dependencies of traditional hosting prevented agile engineering. For example, it usually took six to nine months to build a new development environment for an application or service team. This time included planning, coordinating resources, tracking issues, and mitigating risk. Although the structure added clarity in delivery, it removed business agility.

Long-term managed services offered opportunities to build cost efficiency. But because of the way processes were implemented, functional roles were often duplicated, which had an overall negative impact on time and cost.

When our engineering teams used SDLC waterfall methods and operations teams used ITIL/MOF, adhering to process took priority over delivering iterative, agile solutions to meet targeted business needs. These processes slowed business throughput significantly. Solutions were developed and deployed over years instead of months.

Phase 1: Improving operational efficiency

Our MDEE team plays a pivotal role in the company’s new strategy, as most business processes in the company depend on us. To help Microsoft transform, we identified key focus areas to improve in the first phase of our transformation: improving business agility, reducing costs, learning new skills, and inventing new ways to work.

The graphic below shows the steps we took to get to Microsoft Azure.

Illustration outlining key areas the MDEE team identified to help Microsoft transform its strategy and move to Microsoft Azure.
We moved toward our IT mission by transforming technology and customer service.

Infrastructure Platform. An agile business demands agile infrastructure, fewer physical servers, and a move to (and innovation in) Microsoft Azure.

Strategy. Migrating to the cloud highlighted the need for build, change, and policy management processes as self-service capabilities. Our approach is to use software to automate provisioning, management, and coordination of services, so our Microsoft business partners can develop and deploy services faster with less work and lower cost.

Structure. We had to rethink the way that our teams and roles delivered this strategy by integrating different teams that did similar tasks. This allowed us to effectively design and deliver end-to-end service offerings at lower cost. Our organization was restructured to form teams that optimize service and infrastructure. These teams learn new skills, work harmoniously with engineering, and reduce waste.

Culture. We embraced a growth mindset, learned new skills, built new capabilities, and found new ways to work.

Mission. It became our mission to define, deliver, and transform how we work by helping engineers build solutions tailored to the hybrid cloud world.

Realigning our organization

Services optimization. This team helps our business partners to provision and manage their own IT services. We have improved operational agility and reliability, which has resulted in specific benefits:

  • Less manual effort per release/update
  • Shorter lead time
  • More frequent builds and deployment
  • Increased service quality
  • Reduced security exposure

We elevated our teams by training people and hiring others with the engineering skills we need. Our goal is to gradually transition people from operational skills to service engineering skills.

A deeper analysis of our operational model also revealed redundant processes in service design, service transition, and service operations. After careful consideration, we reduced process overhead by eliminating or automating some processes. This restructuring presents a business opportunity to consolidate vendor teams. Many of our sustained workloads will decrease year over year, as on-premises infrastructure shrinks.

Infrastructure Optimization. This team eliminates duplicate infrastructure, reduces our footprint, and modernizes infrastructure for our business partners, which reduces hosting costs. Key outcomes of this work include:

  • Consolidated datacenters
  • Fewer physical and traditional virtual machines
  • Smaller storage consumption
  • Increased cloud adoption

When teams started working together to optimize infrastructure, they found duplicate projects with similar goals. After we cut redundant projects, people were freed up to learn project management skills and to engage with our business partners.

This team took a program-based delivery approach with start and end dates. After provisioning was automated, we worked with our business partners so they could use new self-service tools to take ownership of their infrastructure. The new self-service features helped our business partners identify and decommission unused servers. Self-service planning eliminates manual handoffs and enables our business partners to manage risks, issues, and blockers. Our business partners also found that they no longer needed vendors to manage handoffs.
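
The self-service tooling itself isn't shown here, but as a rough sketch of the kind of check it might run, this Azure PowerShell loop flags virtual machines whose average CPU has stayed below 2 percent for two weeks. The threshold and time window are illustrative assumptions, not values from the article.

```powershell
# Sketch: flag VMs whose average CPU over the last 14 days is below 2 percent.
$threshold = 2
$start = (Get-Date).AddDays(-14)
$end = Get-Date

foreach ($vm in Get-AzVM) {
    # Pull hourly average "Percentage CPU" metrics for the VM.
    $metric = Get-AzMetric -ResourceId $vm.Id -MetricName "Percentage CPU" `
        -StartTime $start -EndTime $end -TimeGrain 01:00:00 `
        -AggregationType Average -WarningAction SilentlyContinue

    $avg = ($metric.Data | Measure-Object -Property Average -Average).Average
    if ($null -ne $avg -and $avg -lt $threshold) {
        Write-Output "$($vm.Name): average CPU $([math]::Round($avg, 1))% - candidate for decommission review"
    }
}
```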

Reinventing our culture

To reinvent ourselves, we needed to change. We stopped managing processes and began trusting our business partners and empowering engineers. We defined our new mindset and goals to:

  • Focus on the customer by designing and building new services from their perspective.
  • Challenge and question the status quo, and rethink old processes and behaviors.
  • Experiment and learn so we can produce innovative cloud technologies using agile methods.
  • Collaborate beyond our organizational boundaries to identify and deliver the right solution for our business partners.
  • Deliver faster and fix issues faster.

The business outcome

Combined, all the changes we made produced tangible results. We improved our agility and enabled our Microsoft business partners to deploy services faster with less work at a reduced cost. We were able to:

  • Reduce manual work by about 60 percent.
  • Migrate 10 percent of the IT ecosystem to the public cloud (Azure IaaS).
  • Decommission on-premises data centers across the pre-production ecosystem.
  • Optimize about 42 percent of our global workforce.
  • Save about $6.5 million in organization operational costs.

Lessons learned in Phase 1

Through this process of technological and cultural evolution, we learned that:

  • Next-generation, modern applications will come from innovating in Microsoft Azure. A private cloud cannot provide the innovations and scale that Azure can.
  • There are a multitude of technical requirements to help our Microsoft business partners migrate to Microsoft Azure.
  • Tools that support the private cloud don’t scale for Microsoft Azure, which significantly impacts agility.
  • Processes established for a private cloud cause a fragmented and disconnected experience in Microsoft Azure.
  • Capability gaps in connecting Microsoft Azure inventory, utilization, and cost led to a drastic increase in Azure operational costs.

Phase 2: Delivering value through innovation

To effectively harness the benefits of Microsoft Azure, we migrated 90 percent of our IT infrastructure to Azure and then balanced the business need for innovation with efficient operation. We decided to use native cloud solutions, phase out customized IT tool sets, and decentralize and simplify operations processes as we adopt the DevOps model.

Changing roles

Microsoft Azure DevOps is a work model that integrates software developers and IT operations. As we move to the cloud, IT infrastructure support is drastically reduced. Going forward, we offer the most value to our business partners by adopting Infrastructure as Code to achieve friction-free interaction with engineering teams and support continuous deployment. We redefined operations roles and retrained people from traditional IT roles to be business relationship managers, engineering program managers, service engineers, and software engineers:

  • Business relationship managers engage with our Microsoft business partners to understand their needs and to tailor Microsoft Azure capabilities for their business needs. Business relationship managers listen, prioritize, and manage expectations across business, infrastructure, and Azure teams.
  • Engineering program managers design and deliver solutions in partnership with software engineers, service engineers, and business relationship managers.
  • Software and service engineers focus on developing reliable, scalable, and high-quality automated services, which eliminates much manual work. As we retrained people from operational to engineering and relational skills, we saw a gradual uptick in engagement with our business partners.

Simplifying operational processes

In the past, the processes that Microsoft used to manage corporate inventory, procurement, software development, security management, financial management—and other functions—were disconnected from each other and confined within organization boundaries. And existing processes and tools resulted in long wait times for simple IT tasks.

A simple application infrastructure took at least 40 days to provision, and complex applications with multiple dependencies could take over a year. The traditional IT mindset, processes, and obsolete tools had a negative impact on software engineering productivity. IT operations processes were realigned as shown in the graphic below.

Graphic outlining how IT operations processes were realigned to improve the timeline for both simple and complex apps with dependencies.
IT operations support for different stages of the development/deployment life cycle were realigned for Microsoft Azure.

Microsoft Azure radically simplified our IT operations. Simple projects can be provisioned in Azure within one day, and complex projects can be provisioned in six days. We increased our speed 40-fold by eliminating, streamlining, and connecting processes, and by aligning processes for Azure.

Adopting native cloud solutions

We are retiring many customized IT tools and focusing on native cloud solutions using Microsoft Azure Infrastructure as Code within the Microsoft Azure Resource Manager (ARM) fabric. By using ARM templates, APIs, and PowerShell (as well as integrating developer tools), we can rapidly provision a hosting platform.
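
As a simplified, hypothetical example of that template-driven provisioning, a deployment can be validated and pushed with a few Azure PowerShell commands. The resource group, template, and parameter file names below are placeholders.

```powershell
# Sketch of template-driven provisioning; all names and file paths are placeholders.
$rgName = "rg-lineofbusiness-dev"

New-AzResourceGroup -Name $rgName -Location "westus2" -Force

# Validate the template before deploying it.
Test-AzResourceGroupDeployment -ResourceGroupName $rgName `
    -TemplateFile ".\hosting-platform.json" `
    -TemplateParameterFile ".\hosting-platform.dev.parameters.json"

# Deploy the same template that source control and the CI/CD pipeline use.
New-AzResourceGroupDeployment -ResourceGroupName $rgName `
    -TemplateFile ".\hosting-platform.json" `
    -TemplateParameterFile ".\hosting-platform.dev.parameters.json"
```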

We also adopted software-defined networking (SDN) by developing APIs to dynamically provision Microsoft Azure ExpressRoute, load balancing, and traffic management capabilities, which connect, secure, and route traffic and improve application responsiveness. Microsoft Azure Site Recovery (ASR) is primarily used for lift-and-shift migration of virtual machines.

Microsoft Azure Operations Management Suite (OMS) is a Software as a Service (SaaS)-based, cross-platform solution with capabilities that span analytics, automation, configuration, security, backup, and disaster recovery. OMS is designed for speed, flexibility, and simplicity, and it effectively manages Windows and Linux servers in a hybrid cloud environment.

The graphic below shows how native cloud solutions allow many traditional IT processes to become self-service.

Graphic showing how native cloud solutions allow many traditional IT processes to become self-service processes.
Traditional IT tasks and processes are now self-service native cloud solutions.

ICM is the incident management system for Microsoft. With high-availability cloud support and cloud-based access, we now support Microsoft Azure and many other services across Microsoft.

Cloud Cruiser, a third-party SaaS application, gives us valuable financial information and reports about our Microsoft Azure usage and spending in near-real time.

Using Cloud Cruiser, we can examine and aggregate financial data across multiple global Microsoft Azure subscriptions, which is crucial. Our Azure environment contains many subscriptions—Cloud Cruiser gives us the immediate visibility that’s required to manage and control costs.

Microsoft Azure Advisor is a personalized cloud consultant that helps us follow best practices to optimize our Microsoft Azure deployments. It analyzes our resource configuration and usage telemetry. It then recommends solutions to help improve the performance, security, and high availability of our resources while looking for opportunities to reduce our overall Azure costs.
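
For teams that want to pull those recommendations programmatically, the Az.Advisor PowerShell module exposes them. This snippet is a generic example of that module, not part of the tooling described here.

```powershell
# List Advisor cost recommendations for the current subscription (requires Az.Advisor).
Import-Module Az.Advisor
Get-AzAdvisorRecommendation -Category Cost | Format-List
```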

Optimizing Microsoft Azure

With much of our cloud infrastructure in place, we recognized the need to optimize our Microsoft Azure resources. We created Microsoft Azure Resource Optimization (ARO), a combination of tools, processes, and education to help Microsoft teams examine both their total cost of cloud resources and the number of underutilized assets. The types of underutilized resources are evaluated to identify cost savings opportunities, such as IaaS virtual machines, Azure SQL databases, PaaS web and worker roles, Azure storage, virtual networks, and IPs.

Some examples of ARO recommendations include adjusting SKU sizes, deleting unused resources, or turning off resources during downtime. The overall ARO goal is to increase awareness of consumption, optimization, and cost of Microsoft Azure resources across Microsoft, to encourage engineers, managers, and leadership to adopt cost-effective behaviors. We deliver business intelligence to help people make key decisions about Azure usage, which will promote a culture of cloud optimization.
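
ARO itself is internal tooling, but one of its recommendations, turning off resources during downtime, is easy to sketch in Azure PowerShell. The "AutoShutdown" tag below is a hypothetical convention, not something the article describes.

```powershell
# Sketch: deallocate tagged non-production VMs outside business hours.
# The "AutoShutdown" tag is a hypothetical convention used for illustration.
$vms = Get-AzVM -Status | Where-Object {
    $_.Tags["AutoShutdown"] -eq "true" -and $_.PowerState -eq "VM running"
}

foreach ($vm in $vms) {
    Write-Output "Deallocating $($vm.Name) in $($vm.ResourceGroupName)..."
    # -NoWait queues the operation so the loop doesn't block on each VM.
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force -NoWait
}
```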

Modern teams

To implement our cloud-first transformation effectively and quickly, we formed engagement and program management teams to connect with our internal business partners, identify their needs, prioritize features, and deliver them with focused discipline. Individuals who can code Microsoft Azure infrastructure solutions as APIs, PowerShell scripts, and templates were united as software engineering teams. And we grouped all the manageability services under service engineering teams to provide reliable, available, and supportable services.

All other IT operations support teams were decentralized and integrated into application teams using the Microsoft Azure DevOps model to improve issue resolution time. Employees learned new skills, and we hired new people with needed skills. Assessing, refining, and hiring the right talent is part of organizational hygiene.

Business outcomes

Accelerating our transformation to Microsoft Azure by changing roles, investing in new skills, and simplifying operations processes had four important benefits.

More productive workforce

  • IT ecosystem is 98 percent in Microsoft Azure (IaaS mostly).
  • We shifted to a self-service culture.
  • Microsoft Azure DevOps is in practice.

More agile business

  • Provisioning speed was increased 40-fold by simplifying operations processes and using native cloud solutions.

Reduced costs

  • Customized IT tools were reduced 60 percent.
  • CPU utilization increased 400 percent.
  • Annual cloud spending was reduced 38 percent.
  • On-premises IT datacenters and labs have been decommissioned across our production ecosystem.

Improved business partner experience

  • We have improved the user experience and engagement with our business partners. We have shared practices and lessons learned across our company and industry.

Lessons learned in Phase 2

To make our digital transformation to Azure a success, we had to:

  • Redesign strategic assets as Platform as a Service (PaaS) solutions.
  • Integrate engineering and manageability platforms.
  • Use data as a strategic asset.
  • Use predictive analytics and machine learning to prevent and remediate failures.

Phase 3: Embracing the digital ecosystem

Our ability to take advantage of emerging technologies and to embrace new business strategies will be a deciding factor in the modern era. Going forward, our MDEE teams are organized around end-to-end ownership of services that delight our business partners and that focus on innovation, co-creation, and collaboration.

Our first phase of transformation focused on migrating infrastructure and automating processes to drive efficiency and lower operations costs. The second phase was driven by adopting the Microsoft Azure platform, simplifying operations processes, and changing operations roles to invest in engineering, customer service, and native cloud solutions.

The next stage includes developing intelligent systems on Microsoft Azure to deliver reliable, scalable services and to connect operations processes across Microsoft. Bots will support basic user queries, while service reliability engineers strive to predict and remediate failures using predictive analytics and machine learning. Our focus is on operational resilience and cost avoidance. Several industry trends drive the continued evolution of our digital IT ecosystem:

  • DevOps culture accelerates engineering team deliverables and decisions using a boundary-free flow of information and frictionless processes.
  • Native cloud solutions offer an enterprise-level manageability platform that supports decentralized services and enables flexible, predictable, reliable response to changes with speed.
  • Data has become a durable asset. With the proliferation of cloud infrastructure, mobile applications, and IoT devices, there is a growing need to store massive amounts of data and analyze it in near-real time to predict patterns, build models, and drive intelligent actions among end-user communities.
  • Open source standards increasingly support a platform for innovation, moving to the cloud, and enabling community governance at scale to balance the need for security with agility.
  • MDEE as a services broker shifts our engineering focus from system design/build to assembly, configuration, and integration of specialized third-party software components. We can accelerate the time to value and reduce technical debt.

The graphic below shows how our digital transformation and move to the cloud will use automation, enhanced resiliency, predictive analytics, and bots to integrate business partner feedback and improve service to our business partners.

Illustration showing how our digital transformation and move to the cloud uses automation, enhanced resiliency, predictive analytics, and more.
A system of applications and platforms, combined with predictive analytics, is at the heart of our digital transformation.

We recognized that our business partners need hybrid cloud scale and economics, which we deliver by offering enterprise-level engineering and management platforms. We have embraced the industry trends of mobility, IoT, machine learning, AI, open source, and cross-platform standards.

Together, Microsoft Azure PaaS, Visual Studio Online, and Application Insights will enable engineers to focus on features and usability, while the ARM fabric and OMS will provide a single-pane-of-glass view to provision, manage, and decommission infrastructure resources securely. Only by optimizing the engineering and manageability processes, both independently and in concert with each other, can we achieve the digital transformation goals for Microsoft.

Key Takeaways
Our MDEE team plays an influential role in the digital transformation of the company. Our evolution and move to Microsoft Azure is anchored around the idea of building connected intelligence systems to transform how we engage with business partners, empower engineers, optimize operations, and reinvent products. Delivering excellence will drive the cultural change to modern practices.

With connected systems, simplified self-service provisioning, and a focus on our business partners, we can scale our infrastructure service offerings across the company and drive innovation, business agility, and productivity. In the process, we will also reduce costs and improve our operations resilience.


The post Powering Microsoft’s operations transformation with Microsoft Azure appeared first on Inside Track Blog.

Streamlining Microsoft’s global customer call center system with Microsoft Azure http://approjects.co.za/?big=insidetrack/blog/streamlining-microsofts-global-customer-call-center-system-with-microsoft-azure/ Wed, 27 Jan 2021 21:21:15 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=6183 Overhauling the call management system Microsoft used to make 70 million calls per year has been a massive undertaking. The highly complex system was 20 years old and difficult to move on from when, five years ago, the company decided a transformation was needed. These phone calls are how Microsoft talks to its customers and […]

Overhauling the call management system Microsoft used to make 70 million calls per year has been a massive undertaking.

The highly complex system was 20 years old and difficult to move on from when, five years ago, the company decided a transformation was needed.

These phone calls are how Microsoft talks to its customers and its partners. We needed to get this right because our call management system is one of the company’s biggest front doors.

– Matt Hayes, principal program manager, OneVoice team

Not only did Microsoft install an entirely new call management system (which is now fully deployed), it did so on next-generation Microsoft Azure infrastructure with global standardization, new capabilities, and enhanced integration for sales and support.

“These phone calls are how Microsoft talks to its customers and its partners,” says Matt Hayes, principal program manager of the OneVoice team. “We needed to get this right because our call management system is one of the company’s biggest front doors.”

Looking back, it was a tall order for Hayes and the OneVoice team, the group in charge of the upgrade at Microsoft Digital, the engineering organization at Microsoft that builds and manages the products, processes, and services that Microsoft runs on.

What made it so tough?

The call management system was made up of 170 different interactive voice response (IVR) systems, which were supported by more than 20 separate phone systems. Those phone systems consisted of 1,600 different phone numbers that were dispersed across 160 countries and regions.

Worst of all, each of these systems was working in isolation.

[This is the second in a series on Microsoft’s call center transformation. The first story in the series documents how Microsoft moved its call centers to Microsoft Azure.]

Kickstarting a transformation

The OneVoice team kicked off Microsoft’s bid to remake its call management system with a complex year-long request for proposal (RFP) process. The team also began preparations with the internal and external stakeholders that it would partner with throughout the upgrade.

To help manage all these workstreams, projects were divvied up into categories that each had their own dedicated team and mandate:

Architecture: This team considered network design and interoperability with the cloud.

Feature needs: This group was charged with ensuring the new system would support business requirements and monitoring needs. They were also tasked with calling out enhancements that should be made to the customer experience.

Partner ecosystem: This team made sure the needs of partners and third-party players were considered and integrated.

Add-on investments: This group made sure cloud space needs were met, addressed personnel gaps, and pursued forward-looking opportunities.

These initial workstreams became the pillars used to guide the transformation of the call management system.

Graphic illustrates the four pillars that drove the OneVoice Call Center’s migration process: 1) Architectural considerations 2) Feature needs 3) Partner ecosystem 4) Add-on investments
Four pillars of transformation drove the OneVoice team’s call center migration process.

The key to the upgrade was the synergy between the OneVoice team and the call center teams scattered across the company, says Daniel Bauer, senior program manager on the OneVoice team.

“We decided we were going to move to the cloud—after that was approved, we knew it was time to bring in our call center partners and their business representatives,” Bauer says. “That collaboration helped us build a successful solution.”

Early input from these partners guided the architectural design. This enabled the team to bake in features like end-to-end visibility of metrics and telemetry into both first- and third-party stacks. It allowed them to manage interconnected voice and data environments across 80 locations. Importantly, it also set early expectations with telecom service providers around who would own specific functions.

Designing for scale by starting small

Bringing myriad systems together under one centralized roof meant the team had to build a system that could handle exponentially greater amounts of data and functionality.

This required a powerful cloud platform that could manage the IVR technology and a call routing system that would appropriately direct millions of calls to the right agent among more than 25,000 customer service representatives.

“Just the scope of that was pretty intense,” says Jon Hoyer, a principal service engineer who led the migration for the OneVoice team.

The strategy, he says, was to take a regional line of business approach. The OneVoice team started the migration in a pilot with a small segment of Microsoft Xbox agents. After the pilot proved successful, the process was scaled out region by region, and in some cases, language by language within those regions.

“There was a lot of coordination around the migration of IVR platforms and call routing logic while keeping it seamless for the customer,” Hoyer says.

Ian McDonnell, a principal PM manager who led the partner onboarding for the OneVoice team, was also faced with the extremely large task of moving all the customer service agents to the new platform.

For many of these partners, this was a wholesale overhaul that involved training tens of thousands of agents and managers on the new cloud solution.

“We were replacing systems at our outsourcers that were integral to how they operated—integral to not only how they could bill their clients, but enabled them to even pay their salaries,” McDonnell says. “We had to negotiate to make sure they were truly bought in, that they not only saw the shared return on investment, but also recognized the new agility and capabilities this platform would bring.”

Build and deploy once, impact everywhere

When a change is made to a system, no one wants to have to make that change again and again.

When we had 20 separate disconnected systems at our outsourcers, it was an absolute nightmare to make that work everywhere. Now we can build it once and deploy that experience across the whole world.

– Ian McDonnell, principal PM manager, OneVoice team

One of the biggest operational efficiencies of the new centralized system is the ability to build new features with universal deployments. If the hold music or a holiday message needs to be changed, rather than updating it on an individual basis to every different phone system, that update goes out to all suppliers at once.

“When we had 20 separate disconnected systems at our outsourcers, it was an absolute nightmare to make that work everywhere,” McDonnell says. “Now we can build it once and deploy that experience across the whole world.”

Previously, there was no option to redirect customers from high- to low-volume call queues, leaving the customer with long waits and negatively impacting their experience. Now, with a single queue, customers are routed to the next available and most appropriate customer service agent in the shortest time, whether the agents sit in the US, India, or the Philippines, providing additional resilience to the service.

This cloud native architecture allowed for new omnichannel features such as “click-to-call,” where customers who are online can request a callback. This allows seamless continuity of context from the secured online experience to a phone conversation for deeper engagement.

As the OneVoice team explores what’s next in add-on investments, they’re exploring a wide range of technologies and capabilities to modernize the call center environment. One of the primary areas of focus is leveraging the speech analytics technology of Microsoft Azure Cognitive Services, which can provide deeper insights into customer satisfaction and sentiment.
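
The team's in-house speech analytics work is covered later in this series, but as a very rough approximation, here is how a call transcript snippet could be scored with the Cognitive Services Text Analytics sentiment endpoint. The resource name and key are placeholders, and this sketch uses text analysis rather than the speech pipeline the team actually built.

```powershell
# Sketch: score the sentiment of transcript snippets with the Text Analytics v3.1 endpoint.
# The endpoint and key are placeholders for a real Cognitive Services resource.
$endpoint = "https://contoso-language.cognitiveservices.azure.com"
$key = "<cognitive-services-key>"

$body = @{
    documents = @(
        @{ id = "1"; language = "en"; text = "Thanks, the agent resolved my billing issue quickly." },
        @{ id = "2"; language = "en"; text = "I waited on hold for an hour and still have no answer." }
    )
} | ConvertTo-Json -Depth 4

$result = Invoke-RestMethod -Method Post `
    -Uri "$endpoint/text/analytics/v3.1/sentiment" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" `
    -Body $body

# Each document comes back with an overall sentiment label and confidence scores.
$result.documents | Select-Object id, sentiment
```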

In an upcoming blog post in this series, the OneVoice team will share how an in-house development leveraging Microsoft Azure Cognitive Services allowed the team to revolutionize customer sentiment tracking and identify issues before they become major problems.

To contact the OneVoice team, and learn more about their customer support cloud journey, email them at onevoice@microsoft.com.


The post Streamlining Microsoft’s global customer call center system with Microsoft Azure appeared first on Inside Track Blog.

How Microsoft is modernizing its internal network using automation http://approjects.co.za/?big=insidetrack/blog/how-microsoft-is-modernizing-its-internal-network-using-automation/ Wed, 11 Dec 2019 23:20:08 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=5033 After Microsoft moved its workload of 60,000 on-premises servers to Microsoft Azure, employees could set up systems and virtual machines (VMs) with a push of a few buttons. Although network hardware servers have changed over time, the way that network engineers work isn’t nearly as modern. “With computers, we have modernized our processes to follow […]

After Microsoft moved its workload of 60,000 on-premises servers to Microsoft Azure, employees could set up systems and virtual machines (VMs) with the push of a few buttons.

Although network hardware, like servers, has changed over time, the way that network engineers work isn't nearly as modern.

“With computers, we have modernized our processes to follow DevOps processes,” says Bart Dworak, a software engineering manager on the Network Automation Delivery Team in Microsoft Digital. “For the most part, those processes did not exist with networking.”

Two years ago, Dworak says, network engineers still created and ran command-line-based scripts and created configuration change reports.

“We would sign into network devices and submit changes using the command line,” Dworak says. “In other, more modern systems, the cloud provides desired-state configurations. We should be able to do the same thing with networks.”

It became clear that Microsoft needed modern technology for configuring and managing the network, especially as the number of managed network devices increased on Microsoft’s corporate network. This increase occurred because of higher network utilization by users, applications, and devices as well as more complex configurations.

“When I started at Microsoft in 2015, our network supported 13,000 managed devices,” Dworak says. “Now, we surpassed 17,000. We’re adding more devices because our users want more bandwidth as they move to the cloud so they can do more things on the network.”

[Learn how Microsoft is using Azure ExpressRoute hybrid technology to secure the company.]

Dworak and the Network Automation Delivery Team saw an opportunity to fill a gap in the company’s legacy network-management toolkit. They decided to apply the concept of infrastructure as code to the domain of networking.

“Network as code provides a means to automate network device configuration and transform our culture,” says Steve Kern, a Microsoft Digital senior program manager and leader of the Network Automation Delivery Team.

The members of the Network Automation Delivery Team knew that implementing the concept of network as code would take time, but they had a clear vision.

“If you’ve worked in a networking organization, change can seem like your enemy,” Kern says. “We wanted to make sure changes were controlled and we had a routine, peer-reviewed rhythm of business that accounted for the changes that were pushed out to devices.”

The team has applied the concept of network as code to automate processes like changing the credentials on more than 17,000 devices at Microsoft, which now occurs in days rather than weeks. The team is also looking into regular telemetry data streaming, which would inform asset and configuration management.
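
The team's actual automation code isn't included in the article. As a hedged sketch only, a credential rotation across a device inventory could look like the loop below, where the inventory file and the Set-DeviceCredential function are hypothetical stand-ins for whatever device-management API or module a team actually uses.

```powershell
# Sketch only: rotate a local account password across an inventory of network devices.
# device-inventory.csv and Set-DeviceCredential are hypothetical placeholders.
$devices = Import-Csv ".\device-inventory.csv"     # columns: Name, Address
$newPassword = Read-Host -AsSecureString -Prompt "New device password"

foreach ($device in $devices) {
    try {
        # Hypothetical call into the device-management tooling.
        Set-DeviceCredential -Address $device.Address -UserName "netadmin" -Password $newPassword
        Write-Output "$($device.Name): credential updated"
    }
    catch {
        Write-Warning "$($device.Name): failed - $($_.Exception.Message)"
    }
}
```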

“We want network devices to stream data to us, rather than us collecting data from them,” Dworak says. “That way, we can gain a better understanding of our network with a higher granularity than what is available today.”

The Network Automation Delivery Team has been working on the automation process since 2017. To do this, the team members built a Git repository and started with simple automation to gain momentum. Then, they identified other opportunities to apply the concept of GitOps—a set of practices for deployment, management, and monitoring—to deliver network services to Microsoft employees.

Implementing network as code has led to an estimated savings of 15 years of labor and vendor spending on deployments and network device changes. As network technology shifts, so does the role of network engineers.

“We’re freeing up network engineers so they can build better, faster, and more reliable networks,” Kern says. “Our aspiration is that network engineers will become network developers who write the code. Many of them are doing that already.”

Additionally, the team is automating how it troubleshoots and responds to outages. If the company’s network event system detects that a wireless access point (AP) is down, it will automatically conduct diagnostics and attempt to address the AP network outage.

“The building AP is restored to service in less time than it would take to wake up a network engineer in the middle of the night, sign in, and troubleshoot and remediate the problem,” Kern says.

Network as code also applies a DevOps mentality to the network domain, using software development and business operations practices to iterate quickly.

“We wanted to bring DevOps principles from the industry and ensure that development and operations teams were one and the same,” Kern says. “If you build something, you own it.”

In the future, the network team hopes to create interfaces for each piece of network gear and have application developers interact with the API during the build process. This would enable the team to run consistent deployments and configurations by restoring a network device entirely from a source-code repository.
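
That future state might look something like the following sketch, where the repository URL, device name, and Push-DeviceConfig function are all hypothetical placeholders for the source-of-truth repo and device API the team envisions.

```powershell
# Future-state sketch: restore a device's configuration from the source-of-truth repository.
# The repo URL, device name, and Push-DeviceConfig function are hypothetical placeholders.
git clone https://dev.azure.com/contoso/network/_git/network-as-code
$desiredConfig = Get-Content ".\network-as-code\devices\bldg17-sw01.cfg" -Raw

# Push the rendered configuration through the (hypothetical) device API.
Push-DeviceConfig -DeviceName "bldg17-sw01" -Configuration $desiredConfig
```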

Dworak believes that network as code will enable transformation to occur across the company.

“Digital transformation is like remodeling a house. You can remodel your kitchen, living room, and other parts of your house, but first you have to have a solid foundation,” he says. “Your network is part of the foundation—transforming networking will allow others to transform faster.”


The post How Microsoft is modernizing its internal network using automation appeared first on Inside Track Blog.
