[Figure caption: Our modern architecture model defines 16 unique vertical services, each with its associated UI, API, and data layers. It also provides an overarching user experience layer that developers can use to connect any combination of the services into a seamless, end-to-end user experience.]

Another critical aspect of providing an end-to-end experience is automating front- and back-office operations (such as support) as much as possible. To support this automation, our architecture incorporates a Procure-to-Pay Support layer beneath all the services. Developers can integrate support bots into their Procure-to-Pay services to monitor user activity and proactively offer guidance when appropriate. If the support bot can't quickly resolve an issue, it silently escalates to a human supervisor, who can interact with the user within the same support window. Our objective is to make the support experience so seamless that users don't recognize whether they are interacting with a bot or a support engineer.
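The bot-first escalation flow described above can be sketched as a simple routing function: the bot answers what it can, and silently hands off to a human in the same support window when it cannot. This is an illustrative sketch only, not our actual implementation; the function names and the FAQ lookup are hypothetical.

```python
# Hypothetical sketch of the bot-first support flow: the bot answers what it
# can, and silently escalates to a human supervisor in the same window.

def handle_support_message(message: str, known_answers: dict) -> dict:
    """Try the bot first; escalate silently if there is no confident answer."""
    answer = known_answers.get(message.lower())
    if answer is not None:
        return {"responder": "bot", "reply": answer, "escalated": False}
    # No confident bot answer: route to a human in the same support window.
    return {
        "responder": "human",
        "reply": "A support engineer is joining this conversation.",
        "escalated": True,
    }

faq = {"how do i track my order?": "Open the Orders tab to see live status."}
print(handle_support_message("How do I track my order?", faq)["responder"])  # bot
print(handle_support_message("My invoice is wrong", faq)["escalated"])       # True
```

In a production flow, the confidence check would come from the bot's language-understanding layer rather than a dictionary lookup, but the escalation shape is the same.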
All these connections and data references are hidden from the user, resulting in a seamless experience that can be delivered as a portal, a mobile app, or even a bot.
Consolidating data to support end-to-end experiences

One ongoing challenge with our siloed on-premises apps was that each app used its own copy of data, wasting storage space and producing inconsistent analytics because of variances between data sets. In contrast, the new architectural data model had to align with our principle of maintaining single master copies of data that any service could reference. This required forming a new Finance data lake to store all the data.
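One way to picture the single-master-copy principle is a catalog that maps each dataset to exactly one owned, certified copy that every service references rather than duplicates. The class, field names, and paths below are invented for illustration; they are not our actual schema.

```python
# Illustrative catalog enforcing one master copy per dataset.
# Field names (owner, path, certified) are hypothetical.

class DataLakeCatalog:
    def __init__(self):
        self._datasets = {}

    def register(self, name: str, owner: str, path: str, certified: bool = False):
        """Register the single master copy; duplicates are rejected."""
        if name in self._datasets:
            raise ValueError(f"{name} already has a master copy; reference it instead")
        self._datasets[name] = {"owner": owner, "path": path, "certified": certified}

    def reference(self, name: str) -> dict:
        """Any service references the one master copy rather than copying it."""
        return self._datasets[name]

catalog = DataLakeCatalog()
catalog.register("supplier_invoices", owner="Finance",
                 path="lake/supplier_invoices", certified=True)
print(catalog.reference("supplier_invoices")["owner"])  # Finance
```

The key design choice is that `register` fails on a second copy: services are forced to reference the existing master data rather than maintaining their own duplicates.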
The decision to create a data lake required a completely new mindset. We shifted away from the traditional approach, in which we needed to understand the nature of each data element and how it would be implemented in a solution. Today, our strategy is to place all data into a single repository where it is available for any potential use, even when the data has no apparent current utility. This approach recognizes the inherent value of data without having to map each data piece to an individual customer's requirements. Moreover, a large pool of readily available, certified data was precisely what we needed to support our machine learning (ML) and AI-based discovery and experimentation: processes that require large amounts of quality data that had been unavailable in the old siloed systems.

After we formed the Finance data lake, we defined a layer in our architecture to support different types of data access:
- Hot access is provided through the API layer (described later in this case study) for transactional and other situations that require near real-time access to data.
- Cold/warm access is used for archival data that is one hour old or older, such as for machine learning or analytics reports. This is a hybrid model: we can access data that is as close to live as possible without touching the transaction table, while also performing analytics on top of the most recent cold data.

By offering these different types of access, our new architectural model streamlines how people connect data sources from different places and for different use scenarios.
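The two tiers above could be routed by data age: the one-hour cutoff comes from the text, while the function and tier names below are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical tier router: requests for data newer than one hour go to the
# hot (API/transactional) path; older data is served from cold/warm storage.
HOT_WINDOW = timedelta(hours=1)

def access_tier(data_timestamp: datetime, now: datetime) -> str:
    """Return which access tier should serve data of the given age."""
    age = now - data_timestamp
    return "hot" if age < HOT_WINDOW else "cold_warm"

now = datetime(2020, 1, 1, 12, 0, tzinfo=timezone.utc)
print(access_tier(now - timedelta(minutes=5), now))  # hot
print(access_tier(now - timedelta(hours=3), now))    # cold_warm
```

A real router would also account for the caller's use case (transactional versus analytical), but age-based routing captures the core of the hybrid model described above.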
Designing enterprise services in an API economy

In the older on-premises apps, the tight coupling of UI and functionality forced users to go through each app's UI just to access the data. This design provided a poor, disjointed user experience because people had to navigate many different tools with different interfaces to complete their Procure-to-Pay tasks.
One of the most significant changes that we made to business functionality in our new architectural model was to completely decouple business functionality from the UI. As Figure 1 illustrates, our new architectural model has clearly defined layers that place all business functionality in a service's API layer. This core functionality is further broken down into very small services that perform specific and unique functions; we call these microservices.

With this approach, any microservice within one service can be called by other services as required. For example, a validation microservice can verify employee, partner, or supplier banking details. We also recognized the importance of making these microservices easily discoverable, so we took an open-source approach and published details about each microservice on Swagger. Internal developers can search for internal APIs to reuse, and external developers can search for public APIs.
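The pattern of small, independently callable, discoverable microservices can be sketched as functions behind a registry. Real implementations would sit behind the API layer and be published to Swagger; the registry, the service name, and the toy validation logic below are illustrative only.

```python
# Illustrative microservice registry: each entry is a small function with a
# single responsibility, callable from any other service. Names are hypothetical.

def validate_bank_details(details: dict) -> bool:
    """Toy validation: real checks would verify account format, routing codes, etc."""
    return bool(details.get("account_number")) and bool(details.get("bank_code"))

MICROSERVICES = {
    "validate-bank-details": validate_bank_details,
}

def call_service(name: str, payload: dict):
    """Any service can discover and invoke another microservice by name."""
    return MICROSERVICES[name](payload)

print(call_service("validate-bank-details",
                   {"account_number": "12345", "bank_code": "ABC"}))  # True
```

In practice the registry lookup would be an HTTP call to a published API endpoint, but the composition idea is the same: small units with one job, reused across services.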
As an example, the image below illustrates a usage scenario for buying a laptop, in which the requester works through the unified User Experience layer. Hidden from the user is how multiple services, including Catalog Management, Purchase Experience, and Purchase Order, interact as needed to pass data and hand the user off transparently from service to service to complete the Procure-to-Pay task.

[Figure caption: An example usage scenario for buying a laptop, illustrating how the person requesting a new computer works through the unified End-to-End User Experience layer while multiple services work transparently in the background to complete the end-to-end Procure-to-Pay task.]

When defining our modern architecture, we wanted to minimize the risk that an update to microservice code might impact end-to-end service functionality. To achieve this, we defined service contracts that map to each API and describe how data interfaces with that API. In other words, all business functionality within the service must conform to the contract's terms. This allows developers to stub a service with representative behaviors and payloads that other teams can consume while the service code is being updated. Provided the updates comply with the contract, changes to the code won't break the service.
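Contract-first stubbing, as described above, can be modeled with an interface that both the real service and its stub must satisfy, so consumers never notice which one they are talking to. The `PurchaseOrderService` contract and its payload shape below are hypothetical examples, not our published contracts.

```python
from typing import Protocol

# A hypothetical contract: any implementation (real or stub) must expose this
# operation with this payload shape, insulating consumers from code updates.
class PurchaseOrderService(Protocol):
    def create_order(self, item: str, quantity: int) -> dict: ...

class PurchaseOrderStub:
    """Stub with representative behavior and payloads, usable by other teams
    while the real service code is being updated."""
    def create_order(self, item: str, quantity: int) -> dict:
        return {"order_id": "PO-0001", "item": item,
                "quantity": quantity, "status": "created"}

def buy_laptop(service: PurchaseOrderService) -> dict:
    # Consumers code against the contract, so swapping the stub for the
    # real service (or vice versa) is safe as long as the contract holds.
    return service.create_order("laptop", 1)

print(buy_laptop(PurchaseOrderStub())["status"])  # created
```

The design payoff is that contract compliance, not implementation detail, is what consumers depend on: any update that keeps the contract intact cannot break callers.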
Finally, our new cloud-based modern architecture gave us an opportunity to improve the user experience by requiring only a single sign-on (SSO) event throughout the day, irrespective of how many services a user touches during that time. The key to supporting SSO was leveraging the authentication and authorization processes and protocols built into Microsoft Azure Active Directory.
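The single-daily-sign-on behavior can be approximated with a token cache: the first service call triggers authentication, and subsequent calls reuse the cached token until it expires. This sketch fakes the Azure AD token exchange with a local stand-in; real code would use a library such as MSAL, and the class and field names here are invented.

```python
import time

# Hypothetical SSO cache: authenticate once, then reuse the token across
# services until expiry. _authenticate stands in for the real Azure AD exchange.
class TokenCache:
    def __init__(self, lifetime_seconds: float = 8 * 3600):
        self.lifetime = lifetime_seconds
        self._token = None
        self._expires_at = 0.0
        self.sign_on_events = 0

    def _authenticate(self) -> str:
        self.sign_on_events += 1  # a real flow would prompt the user here
        return f"token-{self.sign_on_events}"

    def get_token(self) -> str:
        now = time.monotonic()
        if self._token is None or now >= self._expires_at:
            self._token = self._authenticate()
            self._expires_at = now + self.lifetime
        return self._token

cache = TokenCache()
for _ in range(5):   # five service calls across the day...
    cache.get_token()
print(cache.sign_on_events)  # ...but only 1 sign-on event
```

In the real architecture, Azure Active Directory performs this caching and renewal on the platform side; the sketch only shows why users see one sign-on regardless of how many services they touch.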
Benefits

Following are some of the key benefits that our Microsoft Digital team is experiencing by building our Procure-to-Pay service on our modern cloud-based architecture.
- Vastly improved user experience. The new Procure-to-Pay service has streamlined the procurement and payment process, providing a single, end-to-end user experience with a single sign-on event that replaces 36 legacy apps and automates many steps that used to require manual input. In internal surveys, employees report a significant improvement in satisfaction scores across the enterprise: users are happier working with the new service, engineers can more easily troubleshoot issues, and feature updates can be implemented in days instead of months.
- Better compliance. We now have full governance over how our data is accessed and distributed. The shift to a single Finance data lake, with single copies of certified master data and clear ownership of that data, ensures that all processes access the highest-quality data, and that the people accessing that data are authorized to do so.
- Better insights. Now that our KPIs are all based on the certified master data, we've improved our analytics accuracy by ensuring that all analysis uses the same master data sets. This in turn enables us to ask big questions of our collective data, gain insights, and help the business make appropriate data-driven decisions.
- On-demand scaling. The natural rhythm of Finance operations imposes high demand during quarterly and annual reporting periods while requiring fewer resources at other times. Because our architecture is based in the cloud, we use Microsoft Azure's native ability to dynamically scale up to support peaks in processing and throttle processing resources when demand is low.
- Significant cost and resource savings. Building our new Procure-to-Pay service on a modern, cloud-based architecture is producing cost and resource savings through the following mechanisms:
  - Decommissioned physical on-premises servers: We've decommissioned the expensive, high-end physical and virtual servers that used to run the 36 on-premises apps and replaced them with our cloud-based Procure-to-Pay service. This has reduced our on-premises virtual machine footprint by 80 percent.
  - Reduced code maintenance costs: In addition to decommissioning the on-premises apps' servers, we no longer need to spend significant development time maintaining the brittle custom code in the old siloed apps.
  - Drastic reduction of compute charges: Our cloud-based Procure-to-Pay service has several UIs that can be parked and stored very cost-effectively as BLOBs until they are needed. This completely avoids compute-based charges until a UI is required and launched on demand.
  - Reduction in support demand: Our bot-driven self-serve model automatically resolves many of our users' basic support issues, freeing our support engineers to focus on more critical issues. We estimate a 20 percent reduction in run cost from decommissioning our Level 3 support line, and a 40 percent reduction in overall Procure-to-Pay-related support tickets.
  - Better utilization of computing resources: Our old on-premises apps incurred huge capital expenditures for high-end hardware and licenses for servers such as Microsoft SQL Server. With a planning and implementation period that might take months, machines were typically overbuilt and underutilized because we would plan for roughly 10 times current capacity to account for growth. Later, when even that excess capacity was no longer sufficient, we would have to repeat the process and purchase newer hardware with still greater capacity. The new architecture has eliminated capital expenditures for Procure-to-Pay in favor of the more efficient, scalable, and cost-effective Microsoft Azure cloud environment. We're also using our data storage more efficiently: it is less costly to store data in the cloud, and storing a single master copy of data in our Finance data lake removes all the separate copies of the same data that each legacy app used to maintain.
  - Better allocation of personnel: Previously, our Engineering team had to review the back-end systems and build queries to cater to each team's needs. Consolidating all data into the Finance data lake enables people to create their own Microsoft Power BI reports on top of the data, modify their analyses to form new questions, and derive insights that might not have appeared otherwise. As a result, our engineering resources can be reallocated to more strategic functions.
- Simplified testing and maintenance. We use Microsoft Azure's out-of-the-box synthetics to test each function within our microservices programmatically, which is a much easier and more comprehensive approach than manually testing each monolithic app reactively to assess its health. Similarly, Azure's service clusters greatly streamline our maintenance efforts, because we can deploy many instances of different services to achieve higher density. Moreover, we now use a single cluster for all our preproduction environments; we no longer need to maintain separate development, system test, staging, and production environments.

We on the Microsoft Digital team learned some valuable best practices as we designed our modern cloud-based architecture:
- Achieving a modern architecture starts with asking the big questions. Making the shift from large, unwieldy standalone on-premises apps to a modern, cloud-based services architecture requires up-front planning. Assemble the appropriate group of stakeholders and gain consensus on the following questions: What type of architecture do we want? Where do we want global access to resources? What types of data should be stored locally, and under what circumstances? When and how do we programmatically access data that we don't own in order to mitigate, minimize, or entirely remove data duplication? How can we ensure that what we're building is the most efficient and cost-effective solution?
- Identify where your on-premises apps are in their lifecycle when deciding whether to "lift and shift." If you're dealing with an app or service that is nearing its sunset phase and you only need to run it in the cloud for a short period while you transition to something newer, consider the lift-and-shift approach, in which your primary objective is to run the exact same system in the cloud. For systems expected to have a longer lifecycle, you'll reap greater rewards by rethinking your service architecture with a platform as a service (PaaS) mindset from the start.
- Design your architecture for engineering rigor and agility. Look for longer-term value based on strategic planning to make the most of your transition to the cloud. At Microsoft, this was the key determination that guided our new architecture's development: reimagine how our core processes can run when they're built on a modern service architecture. For us, this included being mobile-first and cloud-first, and shifting from waterfall designs to agile practices. It also entailed making security a first thought in architectural design instead of an afterthought, and designing for a continuous integration/continuous deployment (CI/CD) pipeline.
- Keep cost efficiency in mind. From the very first line of code, everyone involved in developing your new services should strive to make each component as efficient and cost-effective as possible. At Microsoft, this development principle is why we mandated a serverless compute model with no static environments, which supported "parking" inactive code or UI inside BLOBs when they weren't needed. This efficiency is also a key reason we adopted Microsoft Azure resource groups, which minimize the effort required to switch between staging and production environments.
- Put everything into your data lake. Cloud-based storage is inexpensive. When organizations look to the cloud as their primary storage solution, they no longer need to expend effort collecting only the data they think everyone wants, especially because, in reality, everyone wants something different. At Microsoft, by creating the Finance data lake and shifting our mindset to store all master data there, irrespective of its anticipated use, we eliminated the effort we would traditionally spend analyzing each team's data requirements. Today, we focus on identifying data owners and certifying the data, and we can then address the data of interest when a customer makes a specific request.
- Incorporate telemetry into your architecture to derive better insights from your data. Your data-driven decisions are only as good as your data. In our old procurement and payment system at Microsoft, we didn't know who was using the old data and for what reasons, or even how much it was costing us. With the new Procure-to-Pay service based on our modern architecture, we build telemetry capabilities into everything we create. This helps with service health monitoring, and we also incorporate the information into our feature and service decision-making processes as we continually improve Procure-to-Pay.
- Promote your new architectural model to gain adoption. You can define a new architectural design, but if you don't promote it in a way that demonstrates its value, developers will hesitate to use it. At Microsoft, we published details about how developers could tap into the new architecture to create more intuitive, user-friendly, end-to-end experiences for their users. This internal open-source approach creates a collaborative environment that encourages developers to join in, access the data they need, and apply it within their own end-to-end user experience wrapper.

At Microsoft, rethinking our approach to services with this cloud-based modern architecture is helping us become a data-driven organization. By consolidating our data into a single data lake and providing an API layer that enables rapid development of end-to-end procurement and payment services, we've created a self-serve platform where anyone can consume the certified data and present it in a seamless, end-to-end manner to users, who can then derive insights and make data-driven decisions.
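One lightweight way to realize the "telemetry inside everything we build" practice above is to wrap each operation so it emits what was called and how long it took. The decorator, event fields, and in-memory sink below are hypothetical; a real service would forward events to a monitoring pipeline rather than a list.

```python
import functools
import time

# Hypothetical telemetry sink: real services would forward events to a
# monitoring pipeline instead of an in-process list.
TELEMETRY_EVENTS = []

def with_telemetry(operation_name: str):
    """Wrap a function so each call records an operation name and duration."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                # Emitted even on failure, so errors are still observable.
                TELEMETRY_EVENTS.append({
                    "operation": operation_name,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                })
        return wrapper
    return decorator

@with_telemetry("purchase.create_order")
def create_order(item: str) -> str:
    return f"ordered {item}"

create_order("laptop")
print(TELEMETRY_EVENTS[0]["operation"])  # purchase.create_order
```

Because the decorator is applied uniformly, every operation contributes to the same event stream, which is what makes usage and cost questions answerable after the fact.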
Our next steps

The Procure-to-Pay service is just one cloud-based service that we built on top of our modern architecture. We're continuing to mature this service, but we're also exploring additional end-to-end services that can benefit other Finance processes to the same extent that Procure-to-Pay has modernized procurement and payment.
This new model doesn't have to be restricted to Finance; our approach has the potential to benefit the entire company. The guiding principles we followed to define our Finance architecture align closely with our leadership's digital transformation vision. That is why we're also discussing how we might help other departments outside Finance adopt the same architectural model, build their own end-to-end user experiences, and reap similar rewards.
- Learn how DevOps is sending engineering practices up in smoke.
- Get more Microsoft Azure architecture guidance from us.
Designing a modern service architecture for the cloud - Inside Track Blog