{"id":5257,"date":"2024-05-28T10:05:04","date_gmt":"2024-05-28T17:05:04","guid":{"rendered":"https:\/\/www.microsoft.com\/insidetrack\/blog\/?p=5257"},"modified":"2024-05-30T13:06:36","modified_gmt":"2024-05-30T20:06:36","slug":"how-microsoft-modernized-its-purchase-order-system-with-azure-microservices","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/insidetrack\/blog\/how-microsoft-modernized-its-purchase-order-system-with-azure-microservices\/","title":{"rendered":"How Microsoft modernized its purchase order system with Azure microservices"},"content":{"rendered":"

\"Microsoft[Editor\u2019s note: This content was written to highlight a particular event or moment in time. Although that moment has passed, we\u2019re republishing it here so you can see what our thinking and experience was like at the time.]<\/em><\/p>\n

MyOrder, an internal Microsoft legacy application, processes roughly 220,000 purchase orders (POs) every year, which represent $45 billion in internal spending at Microsoft. Until recently, MyOrder was a massive, monolithic, on-premises application. It was costly to maintain, difficult to update, and couldn’t be accessed without a Microsoft-authorized VPN.

MyOrder struggled every May, when traffic could double or even triple, from 1,000 purchase orders per day to 3,000. When users submitted purchase orders through the ASP.NET-based website during these high-load periods, they frequently saw response times as high as 30 seconds, if the application didn’t outright crash or freeze.

Even when it worked as intended, MyOrder’s user experience could be frustrating.

“MyOrder was wizard-based, so users advanced through the app in a particular sequence,” says Vijay Bandi, a software engineer on the MyOrder team in Microsoft Digital. “If you advanced to a point where you didn’t have the information for a required field, you were stuck. It was an awful experience.”

Elsewhere at Microsoft, engineering teams are moving old, monolithic applications to the cloud for increased efficiency, scalability, and security, not to mention vastly improved user experiences. With MyOrder showing its age, the MyOrder team decided it was time to follow suit.

[Learn more about Microsoft’s modern engineering transformation, how it’s embracing a cloud-centric architecture, and how it’s designing a modern service architecture for the cloud.]

\"Screenshots
The MyOrder team (shown in this collage of Microsoft Teams screenshots) is practicing social distancing, working from home, and communicating exclusively online.<\/figcaption><\/figure>\n

From server-based monolith to agile PaaS

MyOrder, which combined a front end, a back end, and all related logic in one solution, was only one of the two aging, monolithic applications that made up the legacy purchase order system. The other was the Procurement Services Platform (PSP), a huge middleware services layer comprising about 60 smaller projects and 500 validation paths.

Built on top of PSP, MyOrder collected data from PSP and housed it on one of the 35 servers required to run the application. The app itself was hosted on four separate virtual machines to support the load, with a load balancer distributing requests across the VMs. Caches were built into the servers, but because they were spread across four different VMs, they were never in sync.

“Suppose a user creates a purchase order pointing to one server, and the request goes to the next server,” says Atanu Sarkar, also a software engineer on the Microsoft Digital MyOrder team. “In that case, the user could search for a PO but not find it if the cache isn’t updated.”
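One way to eliminate that class of bug, in the spirit of the Azure Cache for Redis service the new system later adopted, is to replace the per-VM in-memory caches with a single shared cache that every server reads and writes. Here’s a minimal sketch using the StackExchange.Redis client; the type and key names are illustrative, not MyOrder’s actual code:

```csharp
// Minimal sketch: a single shared Redis cache replaces per-VM in-memory
// caches, so a PO written through one server is immediately visible to the
// others. PurchaseOrderCache and the "po:{id}" keyspace are hypothetical.
using System;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public record PurchaseOrder(string Id, string Requester, decimal Amount);

public class PurchaseOrderCache
{
    private readonly IDatabase _cache;

    public PurchaseOrderCache(IConnectionMultiplexer redis) => _cache = redis.GetDatabase();

    // Every VM writes to the same keyspace, so there is nothing to keep in sync.
    public Task SaveAsync(PurchaseOrder po) =>
        _cache.StringSetAsync($"po:{po.Id}", JsonSerializer.Serialize(po),
                              TimeSpan.FromMinutes(30)); // expire stale entries

    public async Task<PurchaseOrder?> GetAsync(string id)
    {
        RedisValue value = await _cache.StringGetAsync($"po:{id}");
        return value.IsNullOrEmpty
            ? null
            : JsonSerializer.Deserialize<PurchaseOrder>(value.ToString());
    }
}
```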

Fewer resources, greater flexibility with Azure

According to MyOrder Engineering Manager Rajesh Vasan, the team considered several platforms for the new solution before landing on Microsoft Azure.

“We looked at a standalone, private cloud instance of Service Fabric and at Azure App Service,” Vasan says. “Azure was expanding, though. They were investing a lot of time in PaaS (platform as a service) offerings, which meant that we could offload all the networking, configurations, and deployments to Azure, and just concentrate on the application code.”

That would be a welcome change compared to the old monolith.

“A change to a single line of code used to take so much time, because you needed to build the whole solution from scratch, with thousands of lines of code,” Vasan says. “Debugging presented similar challenges.”

The legacy app also supported external services like SAP (Microsoft’s financial system of record) and Microsoft Approvals, plus some third-party integrations.

“All that functionality, all those integrations in one monolith, that was a problem,” Vasan says.

By moving to Azure, the team could convert each individual function and integration into its own microservice.

“Let’s say I want to change the tax code for a specific country,” Vasan says. “In Azure, I know there’s one microservice that does tax code validation. I go there, I change the code, I deploy. That’s it. It’ll hardly take a week.”

The same scenario in the old software, he says, would take a couple of months.
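To make that concrete, here’s a hedged sketch of what a tax-code validation microservice might look like as an Azure Functions HTTP trigger. The function name, route, and validation rules are hypothetical; the point is that a country-specific change touches only this one small, independently deployable unit:

```csharp
// Hypothetical sketch of a tax-code validation microservice as an Azure
// Functions HTTP trigger (in-process model). Changing one country's rule
// means editing and redeploying only this function, not a monolith.
using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class ValidateTaxCode
{
    // Illustrative rules only: country code -> expected tax-code prefix.
    private static readonly Dictionary<string, string> Prefixes = new()
    {
        ["US"] = "US-",
        ["DE"] = "DE-",
    };

    [FunctionName("ValidateTaxCode")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get",
                     Route = "taxcode/{country}/{code}")]
        HttpRequest req, string country, string code)
    {
        bool valid = Prefixes.TryGetValue(country.ToUpperInvariant(), out var prefix)
                     && code.StartsWith(prefix);
        return new OkObjectResult(new { country, code, valid });
    }
}
```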

Migrating databases without downtime

Creating that experience required careful consideration of how the team would maintain the legacy app while building the new one and migrating from one to the other.

“The first step was building a single source of truth,” Vasan says. “We wanted to put all that data in the cloud so we had a single source for all transactional purchase order data.”

After the team moved the data onto Azure, they built connectors for existing and new components.

“Both the legacy service, which was an Internet Information Services (IIS) web service, and the new service, which would be Azure API components and serverless components acting as individual microservices, would connect to a single source of truth,” Vasan says. “That was the first step.”

The team then needed to decide which microservices to build, and which to build first.

“It gets tricky here,” Vasan says. “Some users were still accessing data from the old app, so we had to sync back to the old one as well, up to the point that no users were using the legacy service anymore.”
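The article doesn’t spell out the sync mechanism, but a common way to handle that transition period is a dual write: treat the new cloud database as the source of truth and echo each change back to the legacy store until the last users move over. A sketch under that assumption, with hypothetical interfaces:

```csharp
// Hypothetical sketch of a transition-period dual write: the new cloud
// database is the source of truth, and each change is echoed back to the
// legacy store until no users depend on the old app.
using System.Threading.Tasks;

public record PurchaseOrder(string Id, string Requester, decimal Amount);

public interface IPurchaseOrderStore
{
    Task SaveAsync(PurchaseOrder po);
}

public class TransitionalPurchaseOrderWriter
{
    private readonly IPurchaseOrderStore _cloudStore;   // new single source of truth
    private readonly IPurchaseOrderStore _legacyStore;  // old on-premises database
    private readonly bool _legacyUsersRemain;           // turned off after full migration

    public TransitionalPurchaseOrderWriter(
        IPurchaseOrderStore cloudStore,
        IPurchaseOrderStore legacyStore,
        bool legacyUsersRemain)
    {
        _cloudStore = cloudStore;
        _legacyStore = legacyStore;
        _legacyUsersRemain = legacyUsersRemain;
    }

    public async Task SaveAsync(PurchaseOrder po)
    {
        await _cloudStore.SaveAsync(po);       // always write the source of truth first
        if (_legacyUsersRemain)
            await _legacyStore.SaveAsync(po);  // keep the old app consistent meanwhile
    }
}
```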

The team built APIs to access data, along with key microservices such as search and the user interface (which they completely remodeled using Angular). Next, they focused on building microservices directly related to purchase order processing.

After the team built the core microservices, they started moving tenants to the new infrastructure. By this point, they had eliminated PSP and its database entirely.

“That was a big milestone for us because while we were migrating tenants, we were also working to move everything to the new database,” Vasan says.

At that point, there was no duplicate data.

“We had our single source of truth,” Vasan says. “The entire PO processing pipeline was in the cloud.”

The team then began one of the more challenging aspects of the project: releasing one of the microservices with A/B testing in place.

“One of our microservices would call the other microservices and the old PSP in parallel,” Vasan says. “After the call went through both, we compared the results to make sure they were consistent. We flighted this in the production environment until we found and fixed all the issues. Then we went live.”
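This flighting approach is often called a parallel run or shadow test: send the same request to both implementations, serve the trusted legacy answer, and log any divergence for investigation. A minimal sketch of that comparison harness, with hypothetical names:

```csharp
// Hypothetical sketch of the parallel run described above: the same request
// goes to the legacy PSP path and the new microservice, the legacy result is
// returned to the caller, and any mismatch is recorded for the team.
using System;
using System.Threading.Tasks;

public class ShadowValidator<TRequest, TResult>
{
    private readonly Func<TRequest, Task<TResult>> _legacy;
    private readonly Func<TRequest, Task<TResult>> _modern;
    private readonly Action<TRequest, TResult, TResult> _onMismatch;

    public ShadowValidator(
        Func<TRequest, Task<TResult>> legacy,
        Func<TRequest, Task<TResult>> modern,
        Action<TRequest, TResult, TResult> onMismatch)
    {
        _legacy = legacy;
        _modern = modern;
        _onMismatch = onMismatch;
    }

    public async Task<TResult> InvokeAsync(TRequest request)
    {
        // Call both implementations in parallel.
        Task<TResult> legacyTask = _legacy(request);
        Task<TResult> modernTask = _modern(request);
        await Task.WhenAll(legacyTask, modernTask);

        // Serve the trusted legacy result; flag divergence for investigation.
        if (!Equals(legacyTask.Result, modernTask.Result))
            _onMismatch(request, legacyTask.Result, modernTask.Result);

        return legacyTask.Result;
    }
}
```

Once the mismatch log stays quiet in production, the new path can be promoted to primary and the legacy call retired, which matches the “then we went live” step Vasan describes.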

The next step was designing administration and configuration.

“We completely rewrote all that into the new areas, plus another eight or nine microservices,” Vasan says.

By then, MyOrder was 100 percent Azure, with no legacy components at all.


\"Graphic
The purchase order solution has redesigned the monolithic platform into microservices, Azure functions and native cloud services.<\/figcaption><\/figure>\n

The benefits of microservices

The MyOrder team leaned on several Azure offerings to create the new infrastructure, including Azure Data Factory, Azure Cache for Redis, Azure Cognitive Search, and Azure Key Vault. The new, modernized version of MyOrder consists of 29 Azure microservices that are “loosely coupled and follow the separation-of-concerns principle,” Vasan says.

As examples like the POE (Proof of Execution Procedure) service built for the PDS (Procurement Data Source) migration show, the microservices architecture makes modifying existing capabilities and adding new ones relatively easy. And because the system is built on Azure, it’s highly scalable, so adding new tenants is much simpler.

The team is most thankful, though, for the ease with which they can maintain compliance. Because all code was housed within a single, monolithic application prior to the migration, and because some services within that monolith were financial in nature, the entire application was, in effect, subject to the requirements of the Sarbanes-Oxley (SOX) Act.

“With a monolith,” Vasan says, “the moment you deploy code to a server, the entire server has to be SOX compliant.”

Because the team migrated the system to Azure microservices, microservices that are financial in nature are now separated from those that aren’t.

“With monoliths, every change is a SOX change, so it has to go through multiple approvals before it can be deployed,” Vasan says.

Using microservices “means leaner, shorter audits because the audits only apply to the SOX components, not the entire platform,” he says.

Of the 29 new microservices, eight require SOX compliance, and 20 don’t.

“We used to have SOX issues. Now we don’t. We’re more compliant and audit-friendly because of moving to Azure,” Vasan says.

SOX requirements also slowed the team down.

“Maintaining SOX compliance requires adhering to a strict approval and release process, including for any back-end updates to data,” says MyOrder software engineer Umesh.

Building for the future

One of the tenants the team migrated is Microsoft Real Estate and Security (RE&S), which is responsible for the construction of new datacenters and office buildings at Microsoft. RE&S purchase orders can represent hundreds of millions of dollars in costs. Now that those POs go through the modern MyOrder infrastructure, RE&S has cut its costs by $1.75 million per year by retiring many now-unnecessary servers and lowering operational overhead.

Next, the team is focusing on moving MyOrder data into a data lake.

“There’s an overall investment in the Microsoft organization around data lakes right now,” Vasan says. “Azure has a data lake offering, of course, and we’re creating this single source of truth that people are using to build insights around POs. If you want to create a purchase order automatically through an API, for example, you can do that now.”
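To illustrate, here’s what a client call to such a PO-creation API might look like. The endpoint, payload shape, and authentication are assumptions for the sketch, not MyOrder’s documented contract:

```csharp
// Hypothetical client call to a PO-creation API. The URL, JSON shape, and
// bearer-token auth are illustrative assumptions, not MyOrder's real contract.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

public static class CreatePurchaseOrderExample
{
    public static async Task Main()
    {
        using var client = new HttpClient
        {
            BaseAddress = new Uri("https://myorder.example.com/")
        };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", "<token>");

        // Submit a new purchase order as JSON.
        var response = await client.PostAsJsonAsync("api/purchase-orders", new
        {
            requester = "alias@example.com",
            supplierId = "SUP-12345",
            amount = 950.00m,
            currency = "USD",
        });

        response.EnsureSuccessStatusCode();
        Console.WriteLine($"Created: {response.Headers.Location}");
    }
}
```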

“There is a fantastic opportunity to optimize the system and incorporate intelligence by leveraging machine learning, and we’ve kicked that off with the integration of a category classification model for software,” says MyOrder software engineer Dewraj.

“There are also active conversations, and efforts are being made to leverage ML (machine learning) to optimize compliance checks and enhance GDPR (General Data Protection Regulation) compliance.”

“We’re also moving away from batch processing to real-time APIs to reduce onboarding time and purchase order turnaround time. For example, PET (Planning Execution Tracker) and PDS retirement (POE, PCC, and Account Code) data is exposed through real-time APIs.”

“v-Payments will help business users make small purchases without going through the supplier onboarding process, requiring only minimal approval and validation. Users will have the flexibility to purchase from any AMEX supplier using v-Payment credit cards.”

Those capabilities are a far cry from those of the massive, monolithic legacy system that the reborn MyOrder has replaced.

\"Related<\/p>\n