Inside Track – retired stories: How Microsoft does IT

Creating a modern data governance strategy to accelerate digital transformation at Microsoft


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Data is the new currency of digital transformation. Whether it’s providing new insights, improving decision making, or driving better business outcomes, enthusiasm for unlocking the power of data has never been greater. Internally, our data governance practices are essential in helping ensure that data at Microsoft is optimized for any use—enabling deeper insights across our organizational and functional boundaries.

In the simplest terms, data governance is about managing data as a strategic asset. It involves ensuring that there are controls in place around data, its content, structure, use, and safety. To provide effective data governance, we need to know what data exists, whether it’s of good quality and usable, who’s accessing and using it, what they’re using it for, and whether those use cases are secure, compliant, and governed.

As modern business embraces advanced analytics, artificial intelligence, and machine learning, the amount, velocity, and variety of data are increasing. With all that data comes a wealth of new possibilities and a new set of challenges. Our ability to optimize the management and governance of ever-greater amounts of data is essential.

Different data types require different controls to ensure that systems handle, store, and use the data correctly. The traditional top-down method Microsoft Digital Employee Experience (MDEE) was using for data governance wasn’t scalable. It left us little time to do more than reactively address data issues as they occurred. We needed a scalable approach that could use automated controls, engineered into the process, to address the root causes of data issues during every stage of the data lifecycle.

Our approach to data governance

Rather than viewing data governance as a blocking function, or a gatekeeper in the enterprise, MDEE saw data governance modernization as a way to democratize data responsibly. Widely accessible, trusted, and connected enterprise data makes intelligent experiences possible, and powers the wider digital transformation at Microsoft.

We are transforming how we provide data governance, introducing scalable, automated controls that address data architecture, lifecycle health, and appropriate use. As illustrated below, modern data governance is the foundational pillar upon which Microsoft has built its overall Enterprise Data Strategy.

Data governance is the foundational pillar of the Microsoft Enterprise Data Strategy.

We created our overall Enterprise Data Strategy in response to an increasing demand for the right intelligence to power experiences at every touchpoint inside and outside Microsoft. At the same time, that increased demand amplified the pressure to better govern the data and manage regulatory requirements across an ever-expanding data landscape. Trying to address data issues as they arose—one at a time—was expensive and inefficient. Without a centralized, scalable, and automated way to address the root causes of these data issues, our analytics capabilities would continue to decline, as would our user satisfaction ratings for Microsoft’s data-centric apps.

We developed a more modern data governance strategy with five goals in mind:

  1. Reduce data duplication and sprawl by building a single Enterprise Data Lake (EDL) for high-quality, secure, and trusted data.
  2. Connect data from disparate silos in a way that creates opportunities to use that data in ways not possible in a siloed approach.
  3. Power responsible data democratization across Microsoft.
  4. Drive efficiency gains in the processes Microsoft employs to gather, manage, access, and use data.
  5. Meet or exceed compliance and regulatory requirements without compromising Microsoft’s ability to create exceptional products.

Our approach to modern data governance has two key components. First, we embed clear data standards and build them into our application development process. This move helps us automate and proactively manage data governance issues and data policy compliance. Second, we leverage the EDL platform to centralize the data and to systematically scan and monitor it.

The two-pronged approach that MDEE uses to modernize data governance.

Creating a clear set of data standards built into the engineering process

Much of our early effort focused on creating the formalized data standards that we wanted to build into the engineering process. It was natural for us to look to our core strength—engineering—when addressing business problems. We then drive every formalized data standard into our modern engineering process. Having clear data standards, and measuring compliance against those standards, is key to our change management approach for data governance.

Microsoft Azure DevOps helps auto-generate and manage the data governance backlog

After authoring data standards, we used Microsoft Azure DevOps (ADO) and Microsoft Visual Studio to automate the way our systems generate, assign, and track data governance work. For example, when an engineering project reaches a certain milestone, we have the application owner complete a data governance assessment. That assessment automatically generates work items in the project’s backlog.
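
A minimal sketch of what that auto-generation could look like, calling the Azure DevOps REST API from Python. The organization, project, personal access token, and work item fields are placeholders for illustration, not the actual MDEE pipeline:

```python
import base64
import requests

# Placeholder values: substitute your own organization, project, and PAT.
ORG = "contoso"
PROJECT = "data-platform"
PAT = "<personal-access-token>"

def create_governance_work_item(title: str, description: str) -> dict:
    """File a backlog Task for a data governance gap via the Azure DevOps REST API."""
    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}"
        "/_apis/wit/workitems/$Task?api-version=7.0"
    )
    # Work item creation uses the JSON Patch content type.
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
        {"op": "add", "path": "/fields/System.Tags", "value": "DataGovernance"},
    ]
    token = base64.b64encode(f":{PAT}".encode()).decode()
    response = requests.post(
        url,
        json=patch,
        headers={
            "Content-Type": "application/json-patch+json",
            "Authorization": f"Basic {token}",
        },
    )
    response.raise_for_status()
    return response.json()

# Example: one work item per failed assessment question.
# create_governance_work_item("Classify dataset X", "Assessment Q12 flagged missing labels.")
```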

Measuring our compliance against the data standards

To measure the progress of our data governance efforts, we are defining the metrics that matter and creating Microsoft Power BI-based scorecards that explicitly show alignment with our data standards. For each standard, the central data governance office actively monitors assessment exceptions so that application owners can complete their required data governance work.
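
For illustration, the sketch below computes the kind of per-standard compliance percentage such a scorecard would surface, from a hypothetical assessment export. The data and column names are invented; the real scorecards are built in Microsoft Power BI against internal data:

```python
import pandas as pd

# Hypothetical assessment export: one row per application per data standard.
assessments = pd.DataFrame(
    {
        "application": ["AppA", "AppA", "AppB", "AppB", "AppC", "AppC"],
        "standard": ["Classification", "Retention"] * 3,
        "compliant": [True, True, True, False, False, False],
    }
)

# Percentage of applications meeting each standard, the figure a scorecard surfaces.
scorecard = (
    assessments.groupby("standard")["compliant"]
    .mean()
    .mul(100)
    .round(1)
    .rename("pct_compliant")
)
print(scorecard)  # Classification: 66.7, Retention: 33.3
```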

Centralizing data in the Enterprise Data Lake

As part of Microsoft’s Enterprise Data Strategy, we have been making key investments in the modern data foundations that enable modern data governance’s role in ensuring the responsible democratization of data. Centralizing data assets is key to reducing the number of redundant and outdated copies, understanding who has access, and understanding how the assets are being used. Data governance optimizes our infrastructure resources and uses services and automation to proactively scan data for potential issues, rather than reacting to issues as they occur.

We have begun moving data from disparate sources across Microsoft into our Enterprise Data Lake (EDL). The EDL is built on Azure Data Lake Storage and leverages Azure Data Services. The EDL not only consolidates the data, it also creates a centralized source of truth where enterprise data can be collected, shaped into trusted forms, secured, made accessible, and managed by applicable governance controls. Moving everything to a single EDL enables scalable, systematic data scanning without having to individually scan thousands of databases across the enterprise.
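
Because the EDL sits on Azure Data Lake Storage, one scan can enumerate everything in a single place. Here is a minimal sketch using the azure-identity and azure-storage-file-datalake packages; the account and filesystem names are invented:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Invented account and filesystem names for the lake.
ACCOUNT_URL = "https://contosoedl.dfs.core.windows.net"
FILESYSTEM = "enterprise-data"

service = DataLakeServiceClient(account_url=ACCOUNT_URL, credential=DefaultAzureCredential())
fs = service.get_file_system_client(FILESYSTEM)

# Enumerate every path once, centrally, instead of scanning thousands of
# databases one by one.
for path in fs.get_paths(recursive=True):
    if not path.is_directory:
        print(path.name, path.last_modified)
```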

Scalable and automated engineering solutions help proactively manage data governance

Microsoft integrates automated and scalable services into the EDL. These services help proactively automate data management, data quality management, data security, data access management, and compliance. This integration means various teams that are onboarding to the EDL don’t have to invest in engineering solutions to benefit from the built-in services and automation—they are applied consistently across all data.

Scanning for data issues in the Enterprise Data Lake

Regular scanning in the EDL finds data issues so they can be fixed and then prevented at the systems of record and systems of engagement. We are building out proactive solutions through engineering checks and guardrails directly into our processes. These moves help prevent data governance issues by design. The EDL’s capabilities and services include built-in scanning for data security, access management, compliance, and a host of other defined data controls. Not only does the data foundations team get notifications of compliance violations, the data publishers receive them as well.
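
The exact controls we scan for are internal, but a simplified rule-based pass over scanned asset metadata conveys the idea. The Asset shape and the three rules below are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Asset:
    """Simplified, hypothetical record produced by a lake metadata scan."""
    path: str
    classification: str | None  # e.g. "Confidential", "Public"
    owner: str | None
    last_modified: datetime

def scan(assets: list[Asset]) -> list[str]:
    """Return human-readable findings for assets that violate basic controls."""
    findings = []
    stale_cutoff = datetime.now(timezone.utc) - timedelta(days=365)
    for a in assets:
        if a.classification is None:
            findings.append(f"{a.path}: missing data classification")
        if a.owner is None:
            findings.append(f"{a.path}: no registered owner")
        if a.last_modified < stale_cutoff:
            findings.append(f"{a.path}: stale copy, last modified {a.last_modified:%Y-%m-%d}")
    return findings
```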

The Enterprise Data Catalog improves discoverability

To provide effective data governance we need a full view of all data assets. We need to know where the assets exist, who is accessing them, and how users are interacting with the data. This visibility is needed for managing fragmentation, sprawl, and redundant or outdated copies of data assets that can exist across multiple platforms.

The Enterprise Data Catalog helps drive data governance by building controls into the catalog’s data-discovery process. These controls ensure that only people with the appropriate need and authority can access sensitive data stored in the EDL, which promotes compliance with government regulations through consistent processes, patterns, and tools for managing and governing data assets. The EDL metadata service sends metadata published to the EDL to the catalog for discovery. The service also registers broader data sources—transactional data systems, retention policies, and master data, for example—in the catalog.
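
The metadata service’s API is internal, but conceptually each published asset results in a small registration record. A hedged sketch against a made-up catalog endpoint and payload shape:

```python
import requests

# Entirely hypothetical endpoint and payload shape, for illustration only.
CATALOG_API = "https://catalog.contoso.example/api/assets"

def register_asset(path: str, source_system: str, retention_days: int) -> None:
    """Publish minimal metadata so an asset becomes discoverable in the catalog."""
    payload = {
        "path": path,
        "sourceSystem": source_system,
        "retentionDays": retention_days,
    }
    requests.post(CATALOG_API, json=payload, timeout=30).raise_for_status()

# register_asset("sales/orders/2021", "SAP", retention_days=2555)
```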

Modern governance with assessment-based models and evidence-based results

At Microsoft, we find evidence-based flagging is the most compelling way to motivate data producers and data owners to address the underlying gaps that cause data issues. Thus, “evidence at scale” is the fundamental reason we’ve modernized our data governance program around the two-pronged approach of embedded data standards coupled with a scannable EDL platform. Using this new approach, we can detect data issues before they metastasize, and we can drive data compliance with multiple organizations at once. We’re able to use scanners to show engineers where data compliance gaps exist before data products get published into production. And most importantly, we can sustain this model because it’s simply part of the everyday rhythm of the business.

Things to consider when planning your own data governance strategy

Though it’s early in our journey toward modern data governance, we do have a few best practices to share. Primarily, we recommend that you address your data governance strategy holistically. We designed our approach so that standards embedded into the engineering process and data centralization on the modern data foundation work together to ensure end-to-end modern data governance.

  • Build standards into your existing process and implement them as engineering solutions. By approaching data governance during the design phase of the larger Enterprise Data strategy, we have been able to institutionalize “governance by design” into the engineering DNA—and apply it to data at every touchpoint. We are building our data governance controls into the centralized analytics infrastructure and analytics processes.
  • Consider implementing a modern data foundation with integrated toolsets. The EDL, with its built-in governance services and capabilities, does more than scale data governance efforts—it enables enterprise analytics for the whole organization. You can plan for federated analytics upfront by using a shared data catalog and data lake platform as your centralized analytics infrastructure. By centralizing data and bringing compute to the data rather than the other way around, you can reduce the amount of duplicated or fragmented data.
  • People and processes are just as important as tools and infrastructure. We are embracing and promoting a data culture mindset. MDEE is encouraging business and data owners across the company to onboard their data into the EDL. It can be challenging to buy into using a new platform and new processes, particularly when business owners and data owners feel like what they have is working for them. We commonly use a variety of methods, including communication campaigns and gamification, to drive early adoption at Microsoft. Measuring and reviewing daily and monthly active usage is also helpful during mid and late-stage adoption. MDEE has been encouraging adoption by providing evidence-based results that demonstrate how adopting our modern data governance strategy can address the root causes of data issues.

Key Takeaways

Organizations have historically treated data governance as a set of processes, reactive measures, and guardrails that were applied to, yet separate from, the data itself. Creating data standards, engineering them into our processes, and moving data into the EDL with built-in services for data management has provided measurable benefits in scaling Microsoft’s approach to governance.

From an IT perspective, Microsoft’s Enterprise Data Strategy helps control data sprawl and reduces infrastructure cost. It does so by limiting data copies and by better managing the data estate. For data owners at Microsoft, MDEE makes data easier to connect to and consume, while increasing trust in the data and the systems that host it.

We are realizing our vision for providing world-class modern data governance and effectively improving our data compliance posture by moving away from the traditional reactionary processes. We can now engineer data compliance into every part of the process—from applying embedded standards to new projects before collecting or storing data, to proactively scanning for issues as changes occur in the EDL. We are automating compliance measurement and reporting. That automation enables MDEE to provide evidence-based results to business process owners, suppliers of data, and data owners across the company.

How Microsoft used change management best practices to launch a new business intelligence platform


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

It was time for a fresh approach to data analysis at Microsoft, one that would make it easier to track sales and operations activities across regions and roles. In addition to tool development, a dedicated change management effort was needed to inspire global adoption of a new, common methodology.

In 2020, the Microsoft Business Operations and Programs team in Microsoft Digital partnered with teams across the company, including Commercial Sales and Marketing, Worldwide Sales Engineering, Partner Seller Exchange, Microsoft Cloud Data Sciences, Microsoft Sales, Microsoft Finance, and others to deliver and drive adoption of a modern business intelligence reporting solution called MSX Insights (MSXi).

Today, MSXi serves as a single version of the truth for salespeople, managers, company leaders, operations, and finance teams across Microsoft. The change management process needed to reach this milestone started with a thorough assessment of the current state.

In the old system, despite an abundance of data, it was often a challenge for Microsoft’s leaders, the sales organization, and the Microsoft Finance team to align on key metrics. Each team’s reports used different data sources, making it hard to discuss risks and issues or have effective coaching conversations.

This duplication of effort and lack of automation made data analysis at Microsoft more costly than it had to be. As users routinely made multiple copies of data sets, compliance with data privacy and handling regulations and standards was also at risk.

“Wouldn’t it be great if we could have a common and standard way of looking at data, of drilling down into the business insights?” says Andrew Osten, senior director in Microsoft Digital’s Employee Experience organization.

The MSXi project v-team agreed and began creating a solution that would meet the requirements of a range of users across the Microsoft ecosystem.

“What a leader might need is different from what a sales manager might need for their insights, or a front-line seller,” Osten says. “We asked, ‘what do they need to see and what behavior are we trying to drive?’”

The engineering challenges were significant. To solve them, the team chose a technical architecture based on Microsoft Azure technologies, including Azure Data Lake, Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, and Microsoft Power BI. MSXi was built on top of the Enterprise Data Lake, a shared data lake foundation central to the broader data strategy in Microsoft Digital. In addition to providing the needed data in a single secured, governed, and scalable location, this enabled the MSXi team to focus exclusively on building the needed analytics solution to help transform sales operations.

Convincing the users to switch to a single solution like MSXi wasn’t going to be easy.

“The primary obstacles were not technical,” says Rudy Neirynck, senior business program manager in Employee Experience. “Even at Microsoft, people do not just accept new business processes and go work the way you tell them to. Decision-making around the BI strategy and tactics is quite distributed at Microsoft, and many employees are great at creating reports with our company’s products.”

[Learn how Microsoft is powering its digital transformation with Modern Data Foundations. Learn how Microsoft improved data handling with a revamped business intelligence platform. See how moving Microsoft’s financial reporting processes to Microsoft Azure unlocked data value. Find out how Microsoft transformed sales with AI-infused recommendations and customer insights.]

Change management and telemetry

As the design of the MSXi solution took shape, the project v-team also had to address change management. This workstream refers to the cycle of communications, training, and reinforcement of a new framework, process, or structure.

“We implemented change management best practices from Prosci, including the valuable ADKAR model. ADKAR describes five important aspects of change: Awareness, Desire, Knowledge, Ability, and Reinforcement,” Neirynck says.

Microsoft teams used the Prosci ADKAR model (Awareness, Desire, Knowledge, Ability, and Reinforcement) to identify potential blocks to adoption of the new business intelligence platform.

This model provides insight into where to focus resources to encourage the desired shift in behavior.

During development, Microsoft Digital business program managers ran pilot projects in all geographies, including the Latin America region and Australia.

“We tied telemetry into Microsoft listening systems to understand exactly what was happening,” says Juan Sarmiento, senior business manager in Microsoft Digital’s Employee Experience organization. “This helped to provide a data-driven view into adoption by people in the various sales roles, from leaders to managers and sellers.”
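
One way to picture that data-driven view: aggregate session telemetry into average monthly usage hours per user, by role. Everything below, including data, column names, and numbers, is invented for illustration:

```python
import pandas as pd

# Invented telemetry extract: one row per user session.
sessions = pd.DataFrame(
    {
        "user": ["u1", "u1", "u2", "u3", "u3", "u4"],
        "role": ["leader", "leader", "manager", "seller", "seller", "manager"],
        "month": ["2021-03"] * 6,
        "hours": [42.0, 40.5, 55.0, 3.0, 4.5, 52.0],
    }
)

# Total hours per user per month, then the average across users in each role.
per_user = sessions.groupby(["month", "role", "user"])["hours"].sum()
adoption = per_user.groupby(["month", "role"]).mean().rename("avg_hours_per_user")
print(adoption)  # leaders ~82.5 h, managers ~53.5 h, sellers ~7.5 h
```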

We had to drive the desire to find out what this solution has and what value it can bring.

—Andrew Osten, senior director, Microsoft Digital, Employee Experience

There were structured feedback channels in place for every audience.

“We wanted to keep the contact with the end-user as close as possible,” Neirynck says. “We asked the areas to triage issues first, then we had weekly or bi-weekly sessions to prioritize corporate and field needs.”

With this level of attention to detail and documented knowledge of the most common potential change blockers, the team was able to zero in to resolve the primary issues.

“We worked daily to identify the key blockers,” Osten says, “including lack of technical training and missing executive sponsorship. We had to drive the desire to find out what this solution has and what value it can bring.”

The team found the people driving reporting in each area and asked them to help champion the change. A key strategy was to ask about their challenges with metrics and listen carefully to the answers.

“Here’s a solution we’re developing, does it meet your needs?” Osten says. “What are your top three priorities that you can’t live without? They may already be covered. If not, let’s build a business case to get them in there.”

MSXi adoption is growing across the company

Today there are more than 30,000 users of this business intelligence platform, in 95 countries, across 14 regions. The monthly average usage rate for leaders is more than 80 hours, reflecting the team’s focus on gaining sponsorship among executives.

“It was near-zero when we started,” Neirynck says. “The manager number is up to about 50-60 hours a month. It starts with winning the hearts and minds of corporate and field leaders.”

Reporting on its own has limited value if it is not supported by a process. “If the leader is not integrating these instruments and insights into their day-to-day management of the business, it doesn’t fly,” Sarmiento says. A popular way to use the system reports is to support quarterly business reviews with standard key performance indicators (KPIs).

“We are transforming the company with data,” says Michael Toomey, senior director in the Microsoft Customer and Partner Solutions organization. “We are showcasing our solutions and inspiring the sales force at the same time. It is an amazing use of Microsoft Dynamics CRM, Azure Synapse, and Power BI.”

The demand for increasingly near real-time data from executives and managers has increased since the project started. “We never dreamed they would want the data refreshed so often,” Neirynck says. “At first it was daily, then every 12 hours, and now in some cases, it’s 10 times per day.”

What was most important for effective change management?

The close partnership between Microsoft Digital, the landing and adoption team, and engineering helped make this project successful. “Our people are well-trained, technically capable, and know the message of what’s in it for the field, for each role,” Neirynck says.

Another piece of the puzzle was to map the groups of stakeholders and provide each with a tailored, consistent message while being prepared for questions. The team tried to anticipate likely obstacles, documented its learnings, and deployed action plans in areas like delivery, adoption, and business readiness.

“We encouraged user dialogue around the KPIs, how they could gain business insights more consistently, and what metrics they need to see in each scenario,” Osten says.

It wasn’t just about raising awareness of the new platform. At first, the desire for the solution was low, partly because local teams could not verify the accuracy of the data. Some areas had created their own dashboards and didn’t want to make process changes.

“How and when do you start trusting this data in MSXi vs. what teams created offline?” Sarmiento says. “We took the time to define the data quality standards and process steps to overcome the perception that the data was unreliable.”

It’s a constantly changing environment. Have we continued to deliver the right thing at the right time for the right people?

—Juan Sarmiento, senior business program manager, Microsoft Digital, Employee Experience

At first, the team put more focus on executives and specialists. More roles were added over time.

“We identified issues with certain audiences who were not getting what they needed,” Neirynck says. “It was important to do a lot of monitoring of what people look at to make the right choices for our users.”

Improving platform functionality, accessibility, and process integration

Six months ago, the team went through the ADKAR process again to reassess stakeholder needs and address advances in Microsoft’s business model.

“It’s a constantly changing environment,” Sarmiento says. “It never stays still, we are never there, it is a continuous process. Have we continued to deliver the right thing at the right time for the right people?”

Business analytics are part of an end-to-end process. Today’s tools are more and more interconnected, but people become overwhelmed when they have to use several systems to answer their questions.

“We created the Business Performance Management (BPM) framework to help align Microsoft teams with centrally defined BPM metrics and to simplify our current reports and insights,” Sarmiento says. “The BPM dashboard serves as the core, one source of truth for business performance across our leadership teams and all up for the Microsoft Customer and Partner Solutions organization.”

The change management strategy focused on incorporating the BPM dashboard into company processes: reducing the preparation time for rhythm-of-business meetings, bringing together performance results against key outcomes and execution activities, and providing prescriptive scenarios that drive actions and impact the bottom line, while leveraging machine learning algorithms to expose risks and propensity flags.

“The demand is growing to have the insights integrated with the user’s day-to-day environment,” Neirynck says. “How do you make sure the whole thing works in a fluid, connected way? This is the next dimension that we’re dealing with.”

Key Takeaways

  • Understand the needs of your user groups (e.g., leaders, managers, sellers), and think clearly about which behaviors you need to drive; incorporate those behaviors into your change management strategy.
  • Consider how to integrate your analysis and reports into your business processes as a critical success factor for your business intelligence/BPM change management strategy.
  • Establish a robust listening mechanism and monitor the adoption of your analysis and reports continuously at the role level, to identify the groups that need more assistance, and where awareness, desire, knowledge, ability, or reinforcement actions are required.
  • Carefully check to make sure your solution meets your business needs and be sure to identify and remove your key blockers.
  • Be patient, as Business Intelligence/BPM standardization is not easy and requires continuous work and well-established connections between corporate and local teams.

Bringing Microsoft’s commerce platform to Microsoft Azure


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

For almost 20 years, Microsoft’s Commerce Transaction Platform (CTP) processed online payments through an on-premises environment, verifying that all transactions had been processed, sales had been finalized, and revenue reported. Commerce & Ecosystems (C&E), the team that manages the CTP, had an important question to answer—should they continue refreshing and building out the on-premises infrastructure or take the big step toward digital transformation and migrate the platform to Microsoft Azure?

The decision was made to bring CTP into the cloud, a change that meant we at Microsoft would see better performance, improved reliability, new monitoring capabilities, and an ability to scale in a cost-optimized way.

[Read more about boosting Microsoft’s transaction platform by migrating to Microsoft Azure. Explore moving Microsoft’s financial reporting processes to Microsoft Azure. Discover modernizing enterprise integration services using Azure.]

Microsoft’s eCommerce runs on our CTP

In order for our online business to grow, the CTP has to be available and reliably processing requests.

Most purchases of Microsoft Azure, Microsoft Office 365, Microsoft Dynamics 365, and several other consumer and commercial services, are powered by the CTP. If the platform goes offline, revenue loss is measured in thousands of dollars per second. In addition to recording online orders, the system is responsible for billing subscriptions.

Like most on-premises environments, our CTP follows a traditional refresh cycle, typically driven by warranty lifecycles. As machines and hardware fall out of warranty, the C&E team evaluates its infrastructure, projects future needs, and researches replacement options before systematically changing out the machines. This refresh cycle takes a minimum of six months, with the C&E team being careful not to disturb or disrupt the commerce platform.

To keep up with the processing and storage needs of the CTP, C&E ends up purchasing bigger, faster, and more expensive hardware with each refresh cycle. The CTP runs on over 700 machines and stores over six petabytes of data in over 100 databases, relying heavily on the Microsoft Distributed Transaction Coordinator (MSDTC), which coordinates transactions across multiple resources. This makes replacement a major task. However, each refresh is also an opportunity to identify a better path forward.

Our CTP includes a network of storage devices to record and verify transactions. This improves response time and availability of data, but also made it difficult to move away from an on-premises environment.

Time to move to the cloud

When C&E was considering Microsoft Azure, it was already very popular at an enterprise level. Microsoft Azure is highly robust, offers more flexibility and more computing options, and would have a lower maintenance cost for C&E. The team also had a vocal cadre of engineers throwing their support behind the cloud platform, all eager to work on the latest technology.

Scaling was also on the table. In the on-premises environment, C&E had been required to procure enough machines to handle high volume surges, even though this capacity was an intermittent need. This meant a large number of physical machines would need to be procured to accommodate occasional spikes, only to remain dormant during low-traffic periods. Unlike an on-premises environment, Microsoft Azure can spin machines up and down as needed. This cost-efficient method for balancing out high and low system volume also meant C&E could procure and decommission virtual machines (VMs) in a matter of minutes, not months.

Considering these factors, the cost-benefit of renewing the on-premises machines versus moving to Microsoft Azure reached a tipping point, and Microsoft Azure was coming out on top.

Finding the right infrastructure for our CTP

For the migration to Microsoft Azure to be successful, C&E would need the cloud service to match or exceed the growing performance and storage needs of the transaction platform. This meant carefully examining the options available to the team, trying to identify the right approach while still being cost-aware.

Because of the need to scale up performance, the demand on Microsoft Azure machines would be high. Several brand-new virtual machine series had just been released, and they met those performance requirements, but C&E was reluctant to be one of the first customers. Time was not on their side, however: the warranties for CTP’s on-premises machines would be expiring soon. In the end, moving to Microsoft Azure was more important and C&E decided to act.

PaaS or IaaS?

Before C&E could migrate the CTP to Microsoft Azure, they had an important tech decision to make: would they use Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS)?

PaaS was the preferred option, especially after doing a feature analysis. In PaaS, our CTP would have more flexibility and an easy environment to operate in. Additionally, PaaS would require less maintenance, making it an improvement over the on-premises infrastructure.

But some of the legacy services needed for CTP to process transactions didn’t easily fit into PaaS. The team had to think through how specific our CTP needs would be addressed.

  • The CTP uses availability groups for providing high availability services
  • Transactional replication separates the front-end load from the back-end load
  • MSDTC provides consistency for transactions spanning across multiple databases
  • Some database instances are bigger than 30 terabytes

This pushed C&E towards IaaS, which was closer to the on-premises environment. With IaaS, the team could have more direct control over operating systems and utilize native SQL features to support our CTP. This also meant there would be more moving pieces to manage.

Having settled on IaaS, C&E began evaluating the various performance needs across CTP’s different functions. With this information in hand, the team could begin work on finding the right service tier for their needs. Several options existed, but the primary candidates were:

  • Microsoft Azure SQL – Managed Instance
  • Microsoft Azure SQL – Virtual Machines
  • Microsoft Azure SQL – Hyperscale

In the end, C&E decided on Microsoft Azure SQL – Virtual Machines. With huge processing needs and a major requirement for scaling, Microsoft Azure SQL – Virtual Machines proved to be the best fit.

Selecting the right machine

C&E would need a high input/output operations per second (IOPS) and a fair amount of network throughput to support the CTP’s more demanding components.

The team immediately began testing different virtual machines against the on-premises environment, evaluating how each option performed compared to physical machines. Three tests were conducted, with each scenario representing a different process demand our CTP might require.

The tests sent different workloads through the virtual machines, starting with small block sizes, moving towards progressively larger requests. This benchmark helped C&E determine that the largest virtual machine, the M-series, would be needed to sustain some of their performance needs. However, the M-series was over-capacity for some of our CTP’s processes, making it an irresponsible choice.
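
For a feel of what such a benchmark measures, here is a simplified sequential-write test that sweeps block sizes and reports throughput. The team’s actual qualification used controlled tooling and also measured IOPS and network throughput; this sketch only conveys the idea:

```python
import os
import time

def write_throughput(path: str, block_size: int, total_bytes: int = 256 * 1024 * 1024) -> float:
    """Sequentially write `total_bytes` in `block_size` chunks; return MB/s."""
    block = os.urandom(block_size)
    blocks = total_bytes // block_size
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(blocks):
            f.write(block)
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return (blocks * block_size) / (1024 * 1024) / elapsed

# Small requests stress IOPS; large requests stress raw throughput.
for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{size // 1024} KiB blocks: {write_throughput('bench.tmp', size):.1f} MB/s")
```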

Fortunately, being in IaaS gave them flexibility, allowing C&E to assign different processes to the appropriate virtual machine. The M-series would be used for anything that required high IOPS and throughput, the rest of CTP could function on the E-series.

Microsoft Azure presented several different options for C&E. In order to meet all of our CTP’s performance requirements, the team performed several tests against usage scenarios.

Storing data in Microsoft Azure

C&E had been using a storage area network (SAN) infrastructure for storage. This hardware network is expensive to purchase and replace, but the benefit is high performance specifications—such as less than a millisecond response time—and improved availability. The team needed an equivalent in Microsoft Azure and narrowed it down to two candidates: ultra disks or premium SSDs. The ultra disks were the fastest option, and closely resembled a SAN, but were far more expensive. After testing, however, the premium SSDs matched the patterns of C&E’s existing SAN.

Dedicated to the cloud

Before migration could begin, C&E had to determine if CTP would be a dedicated host or an isolated virtual machine. In the end, they used both.

The M-series machines were needed for meeting a few core processing and throughput functions. However, they are only available as isolated virtual machines, which limited the types of machines available. Since the M-series were over-capacity for most of our CTP’s needs, C&E had to come up with a blended approach.

By running their virtual machines on a single-tenant server, Microsoft Azure Dedicated Host (ADH), C&E could mix and match the size of their virtual machines, a necessity for the custom virtual machine arrangement.

Being on ADH also posed some challenges. The C&E team would need to patch their own systems to align with their storage management approach. It also meant the team would have to select which regions and availability zones they would configure to. Fortunately, C&E understood how to configure the CTP in ADH without giving up availability or performance.

Splitting the platform between isolated and ADH, C&E could easily set up and manage the environment correctly, using the M-series to handle some of the high processing functions and a mix of machines on ADH for CTP’s other operating needs.

Managing a hybrid migration

With a new Microsoft Azure-based infrastructure in mind, C&E was able to begin work on moving our CTP over to the cloud.

C&E would take a hybrid approach to migration, relying on SQL to create a seamless transition between on-premises and Microsoft Azure. This side-by-side approach meant C&E could gradually shift our CTP away from a legacy environment without disturbing the growing business. It also enabled the team to compare customer experiences between the two environments and verify that Microsoft Azure was giving users the same results. This careful approach allowed C&E to strategically shut down on-premises machines and let the cloud take ownership of transaction processes.

Divide and conquer

Four primary components make up the CTP:

  • Online services. A layer exists between the UI/UX and the rest of the CTP. When a user clicks through their purchase, our CTP’s online systems interpret this signal as an input to verify the transaction.
  • Backend processing. The system responsible for handling subscription renewals in batches. This backend system triggers on a billing date, not a user input, to begin verification.
  • Data storage. Every transaction needs to be recorded somewhere. C&E relied on a powerful SAN with a very fast response time to reliably record transactions.
  • Revenue recognition. Without a way to recognize a transaction as reported revenue, our entire CTP fails. The revenue recognition system supports that crucial process.

Segmenting our CTP into four operational components enabled C&E to develop their migration strategy around systematically moving the platform into Microsoft Azure one component at a time. It also allowed the team to evaluate different performance needs, configuring their new environment to meet or exceed requirements. Each on-premises component was mapped to a suitable corresponding service in Microsoft Azure.

Making the move

Migrating our CTP’s four components required careful coordination, with the difficulty varying based on visibility and the ease at which the feature could be tested. A few of the services had shared infrastructure and overlapping components, which helped ease the transfer from on-premises to the cloud. Online services and revenue recognition, for example, were straightforward lift-and-shifts that were easy to test, as the team had immediate feedback if something wasn’t working.

Before shutting down the on-premises components related to backend processing, C&E had to fully mimic what was happening in a pre-production environment. A verification path was built between the two, which allowed C&E to slowly move backend processing jobs from on-premises to Microsoft Azure. The process itself was simple, but the team was rigorous with testing and examination to ensure the move introduced no hidden side effects. This ultimately led to revamping the validation infrastructure to be suitable for Microsoft Azure.

Over six petabytes of data needed to be moved from on-premises SQL servers to the cloud. This was achieved by adding Microsoft Azure IaaS machines as secondary to existing SQL AlwaysOn Availability Group clusters, then migrating over components one by one. Initially, Microsoft Azure served as the primary during normal online transaction processing (OLTP) traffic, but once C&E was confident in the migration, Microsoft Azure became primary for backend job processing tasks requiring higher CPU, disk IOPS, and throughput as well.
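
During this phase, the standard AlwaysOn dynamic management views can confirm that each replica is healthy before shifting the primary role. A sketch using pyodbc; the listener address and authentication mode are placeholders:

```python
import pyodbc

# Placeholder listener address and authentication; run against the AG primary.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:ag-listener.contoso.example,1433;"
    "Database=master;Encrypt=yes;Trusted_Connection=yes;"
)

# Standard AlwaysOn DMVs: confirm every replica is healthy and synchronized
# before shifting the primary role to the Azure secondaries.
query = """
SELECT ar.replica_server_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
  ON ars.replica_id = ar.replica_id;
"""
for server, role, health in conn.execute(query):
    print(f"{server}: {role}, {health}")
```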

Microsoft Azure Resource Manager (ARM) templates were used to carefully control how objects moved from on-premises to the cloud. ARM enabled the team to easily provision, modify, and delete resources. We also copied backups from on-premises to Microsoft Azure, restoring them within virtual machines to establish a data sync. This enabled a seamless failover approach. When it was time to turn off the on-premises systems, C&E was confident that they had made the right decision.

Life in the cloud

For almost 20 years, C&E had relied on a variety of on-premises machines to run our CTP. If something went wrong, there was someone in their organization dedicated to solving the problem. By moving to Microsoft Azure, C&E doesn’t need to dedicate time and resources to troubleshooting—the cloud team does that for them. Still, the paradigm shift took some time to get used to.

Pressure testing in Microsoft Azure demonstrated that there was no data loss or inaccurate figures when users engaged the new cloud-based infrastructure. The Microsoft Azure team was responsive, engaging closely with C&E, carefully scrutinizing details to make sure the migration was a success.

The complexity of a hybrid environment was ultimately the biggest challenge, but it was a requirement of the migration. Now that our CTP is in Microsoft Azure, those issues are a thing of the past.

Cost-efficiencies for CTP

In addition to offloading infrastructure management costs to the Microsoft Azure team, C&E has realized savings through system optimization and the elimination of hardware maintenance. On-premises servers and hardware continue to increase in price, but that burden has been offloaded.

Additionally, C&E is seeing savings in operational costs. While the team initially opted out of some upfront savings to get a better system, they’re finding ways to optimize processes to introduce cost savings.

It’s also important to note that services available through the native platform have reduced C&E’s dependence on third-party platforms.

Better performance and reliability

With the on-premises environment, C&E would experience a few issues each month. Automation and self-healing functions inside Microsoft Azure have reduced the frequency of disruptions significantly.

Microsoft Azure’s strong SLAs have better guarantees than C&E’s on-premises equivalent, giving our CTP a reliable foundation to operate on. The platform also benefits from improved monitoring capabilities made available through Microsoft Azure, giving the team greater visibility into what’s happening inside our CTP.

New features are on their way

Thanks to the native service features available to Microsoft Azure, C&E now has access to new features that can be quickly deployed, working right out of the box.

A path to PaaS

While the initial migration required C&E to utilize IaaS, the seamlessness of SQL means that our CTP can eventually be moved to a PaaS environment, as the team initially envisioned. This will introduce more flexibility, giving the team an even easier service to manage.

An easy way to scale

C&E can now double or reduce the number of virtual machines in a matter of minutes. Not only does this speed their response to high volume loads, it does so in a cost-optimized way.

Importantly, the C&E team didn’t need to downscale their needs. Microsoft Azure matched them.

Key Takeaways

In the end, C&E was able to secure better performance, improved reliability, scalability, and much more for our CTP by migrating to Microsoft Azure. A refresh cycle used to take six months or more; now it’s a matter of weeks. Decisions can be made quickly and with confidence, as the native environment allows features to work out of the box. As our CTP becomes more optimized within Microsoft Azure, new savings will be uncovered along with more performance opportunities.

Streamlining vendor assessment with ServiceNow VRM at Microsoft


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

We’ve adopted ServiceNow Vendor Risk Management (VRM) to manage our risk assessment during the procurement process for Internet of Things (IoT) devices across Microsoft.

ServiceNow VRM provides a centralized, managed solution for assessing security risks for IoT devices and the vendors that supply them for us. With this solution, our vendor risk management processes at Microsoft are more automated and efficient, better monitored, and easier for our employees and vendors to use.

Introduction

At Microsoft, our business necessitates an extensive supply chain that depends on trusted non-Microsoft vendors. These vendors provide much of the hardware and software upon which we run our business. Our Microsoft security team ensures that our vendors and the hardware and software they provide adhere to our compliance and security requirements.

As part of our broader governance, risk, and compliance processes, the vendors and partners that supply these products and services must undergo an assessment of their operations and the products or services they supply. The security team provides technical expertise to confirm that software and hardware adhere to modern security practices. We have multiple business groups that work with the security team to assess vendors. Each business group has nuances that affect the way the security team creates and processes vendor assessments.

One such example is the IoT Security Assessment program. This program focuses on IoT devices procured and deployed throughout Microsoft. Each vendor and the product they supply must be vetted to maintain our security standards.

Improving the vendor assessment process

Globally, we at Microsoft manage thousands of IoT devices supplied by many different vendors. These devices include card readers, cameras, kiosks, and HVAC systems equipment. Each of these devices and the software that supports them must undergo the security assessment processes established by our security team. The basic assessment process includes the following three high-level steps:

  • Vendor questionnaire. This questionnaire provides business and technical data about each vendor and IoT device. The Microsoft employee responsible for procuring the device sends an assessment request to the security team, which then triages the request and sends the appropriate risk questionnaire to the vendor. The vendor completes the questionnaire and returns it to the security team.
  • Pre-assessment questionnaire. We use an initial pre-assessment questionnaire to determine the depth of review required for the solution. Based on that analysis, an in-depth questionnaire is then sent to the vendor to gather detailed business and technical data about the device or solution.
  • Device-security test. After the vendor returns the questionnaire, the security team then performs security testing on the IoT device hardware and if applicable, software. Any issues are reported back to the vendor for correction.

In response to IoT Security Assessment process changes, including increased vendor data requirements, our security team had previously adopted a simple solution for tracking the assessment process. However, the volume of incoming requests and the detailed nature of IoT device assessments quickly surpassed the original solution’s capabilities, which were centered around file-based assessments exchanged through email and stored in a shared folder.

Setting goals for vendor assessment

The original solution was largely a manual process that involved potential for human error, lost data, and an untracked workflow. We realized that the IoT Security Assessment program needed a more robust and automated process for managing vendors and devices. To begin the workflow reinvention process, we established specific goals for the new solution:

  • Facilitate more secure IoT device data. The primary IoT Security Assessment program mandate is to ensure that IoT devices at Microsoft are secure. This high-level goal informed the research for the new solution and how we achieved more specific goals within the solution.
  • Minimize manual effort required for assessments. We wanted integrated automation wherever possible to reuse assessment components and reduce both manual effort and potential for error. We needed our security team focused on technical assessments and device vetting, not tracking emails and location of assessment forms.
  • Improve the vendor and Microsoft employee experience. In the original solution, both our vendors and our employees procuring IoT devices dealt with a complex set of workflow steps. Our goal for the new solution was an easy-to-use, simplified environment in which all assessment process steps could be more easily performed, tracked, and managed.
  • Enable self-service assessment creation and management. Each vendor and device assessment is unique, even if only slightly. We wanted to direct assessment creation and editing tasks to the employees who knew the vendor best and simplify tasks such as updating assessment details or adding questions.
  • Manage and track workflow communication. Our original solution contained too many untracked email messages that weren’t a traceable part of the assessment workflow. We wanted our new solution to better manage and track communication between the security team, our employees, and vendors.

Based on these goals, we researched available solutions. Ultimately, we decided on a solution from one of our trusted partners: ServiceNow Vendor Risk Management (VRM).

Simplifying vendor risk management with ServiceNow VRM

The ServiceNow VRM platform provides centralized management across the entire vendor assessment lifecycle process. It has built-in capability for:

  • Vendor portfolio management
  • Assessment management
  • Issue remediation
  • Risk scoring
  • Integrated monitoring and auditing of vendor risk management processes

We adopted ServiceNow VRM for the IoT Security Assessment program as a single tool to help us more securely engage vendors, assess supply chain risk, and follow IoT device security assessment through to completion.

With ServiceNow VRM, our entire vendor assessment process is hosted online in the ServiceNow VRM portal. Through this centralized portal, employees can create, manage, and assign assessments. Vendors can also use the portal to review incoming assessment requests and complete assessments. All parties involved can review the progress of assessments, receive notification when action is required, and perform necessary actions without switching tools. Improving visibility for the entire process means that both employees and vendors can check the status of assessments, issues, and tasks, and more quickly identify emerging risks.

Automated workflows in ServiceNow VRM improve collaboration. They also help us establish consistent processes and enable employees and vendors to reuse assessment components across products and devices.

ServiceNow integrates directly with our Microsoft Azure Active Directory (Azure AD) tenant to supply single sign-on (SSO) and multifactor authentication to the ServiceNow VRM portal. This capability complies with our security standards while providing a seamless sign-on process for our employees and our vendors.

Onboarding to ServiceNow VRM

In less than three months the IoT Security Assessment program transitioned from our original, manual solution to ServiceNow VRM. Our process started with defining our future requirements and ended with going live with ServiceNow VRM for all IoT security assessments. A quick migration reduced duplicate vendor management tasks in both the original solution and ServiceNow VRM, and it simplified the transition for employees and vendors.

Defining the schema for the vendor database records

Establishing a schema for storing data about vendors and devices helped us better understand assessment requirements. ServiceNow VRM integrates with ServiceNow IT Service Management (ITSM) to track and resolve vendor assessment issues and tasks. It also supplies the schema for vendor records, which directly affects the simplicity and accuracy of the integration and of future IoT security assessments.

Configuring and testing vendor assessment forms

We use forms in ServiceNow VRM to create reusable assessment templates. All individual assessments are created using a form, which ensures consistency, reduces potential for human error, and reduces manual effort for assessment creation and management. We also perform all form and assessment tasks in the ServiceNow VRM portal, which creates experience continuity for our employees and security team members. Vendors simply complete individual assessments, which are then reviewed for validity. Assessment answers that require further attention or correction generate a prioritized list of issue records for the vendor to review and take action against.
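
For illustration, assessment records can also be created programmatically through ServiceNow’s standard Table API. In this sketch the instance URL, credentials, and table name are assumptions; actual VRM table names depend on the installed plugin:

```python
import requests

# Placeholder instance, credentials, and table name; real VRM table names
# depend on the installed plugin version.
INSTANCE = "https://contoso.service-now.com"
TABLE = "sn_vdr_risk_asmt_assessment"  # hypothetical VRM assessment table

def create_assessment(vendor_sys_id: str, template: str) -> str:
    """Create an assessment record via the ServiceNow Table API; return its sys_id."""
    response = requests.post(
        f"{INSTANCE}/api/now/table/{TABLE}",
        json={"vendor": vendor_sys_id, "template": template},
        auth=("integration.user", "<password>"),
        headers={"Accept": "application/json"},
    )
    response.raise_for_status()
    return response.json()["result"]["sys_id"]
```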

Documenting and configuring notifications and reminders

We manage all assessment workflow communication within the ServiceNow VRM portal. We’ve customized communications for each of the Microsoft business groups using ServiceNow VRM, including the different assessment types used. All communication and handoff data are tracked, including which assessment is being performed, why it’s being performed, and who is responsible for the process.

End-to-end testing and pilot

Before deploying ServiceNow VRM to the larger group of IoT vendors, we ran a pilot of the onboarding processes with a single vendor. We used this pilot to confirm our processes, test end-to-end functionality, and make any necessary adjustments.

Benefits

Centralizing and automating our IoT vendor risk assessment process using ServiceNow VRM has vastly improved the end-to-end experience for our employees, vendors, and the IoT security team. Some of the most significant benefits include:

  • Manual effort reduced by more than 50 percent. The combination of issue generation rules, risk score calculation, and email templates has greatly reduced the manual effort required across our vendor assessment process. Our employees and vendors enjoy a more streamlined experience while our security team can focus on the technical aspects of the assessment rather than on process logistics.
  • Simplified communication. Access through the ServiceNow VRM portal means that all parties involved review and take part in the assessment process in real time and from a single interface. The number of messages sent between employees and vendors is greatly reduced while overall communication and visibility into the assessment process increases.
  • Better understanding of IoT security assessment health. Increased monitoring capabilities, accurate metrics, and complete auditing capability in ServiceNow VRM make it easier for us to understand exactly what’s happening in the assessment environment. We can instantly obtain important insights including ongoing assessments, completed assessments, repeated assessments, issues generated, and end-to-end assessment timelines.

Key Takeaways

Our IoT Security Assessment program is only the beginning of our process evolution. Here are the next steps that we will take on our journey:

  • Extend our ServiceNow VRM capabilities to include a fully automated, no-touch assessment process for low-priority assessments and vendor tiering to calculate vendor risk levels.
  • Add automated IoT security risk data uploads to our ServiceNow VRM instance.
  • Bring the benefits captured by the IoT Security Assessment program to the rest of Microsoft, which will unify our vendor management processes.


The post Streamlining vendor assessment with ServiceNow VRM at Microsoft appeared first on Inside Track Blog.

Building the future of retail with Adobe and Dynamics 365 http://approjects.co.za/?big=insidetrack/blog/building-the-future-of-retail-with-adobe-and-dynamics-365/ Wed, 07 Dec 2022 20:28:35 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=9171 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. Microsoft Store is reimagining its online storefront on microsoft.com with Adobe Experience Cloud, Microsoft Dynamics 365, and Microsoft Azure. We’re creating a more efficient content-management experience for our […]

This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Microsoft Store is reimagining its online storefront on microsoft.com with Adobe Experience Cloud, Microsoft Dynamics 365, and Microsoft Azure. We're creating a more efficient content-management experience for our developers and a more effective, accessible, and intuitive interface for our customers. We've replaced an internally developed, custom solution with a suite of Adobe products that integrate with Microsoft Dynamics 365 Commerce, hosted on Microsoft Azure. This strategy positions us to adapt and grow Microsoft Store's digital storefronts to meet the changing needs of our customers and business.

Online stores and our digital transformation

As Microsoft has grown as an organization, and as sales and marketing technology has evolved, our retail business has expanded too, especially at microsoft.com. Microsoft Store on microsoft.com drives a large amount of our consumer and retail business and is the most common—and often the first—Microsoft retail experience for our customers. Our strategic shift toward online sales began several years ago, but it was accelerated by the COVID-19 pandemic.

As we plan for the future of our retail operations, our vision involves meeting customers where they want to shop and developing greater efficiencies and agility so our business can respond to the rapidly changing retail landscape. However, this requires having the skills and technology to deliver optimal experiences at enterprise scale.

As part of our own digital transformation at Microsoft, we've established a marketing-technology strategy that emphasizes using our partners' best-in-class tools and minimizing the custom tool development we perform internally. This means replacing legacy and bespoke marketing technology with industry-leading capabilities supplied by our strategic partnership with Adobe.

Integrating Adobe Experience Cloud and Microsoft Dynamics, running on Azure, provides us with the functionality and enterprise scale necessary to unify our business and its operations with a common set of marketing-technology solutions across Microsoft.

Improving digital storefronts

To further the digital transformation of Microsoft Store’s digital storefronts, we examined our in-place solutions to decide how we could best meet the needs of our business and our customers. We established several objectives for a reimagined solution, including that we needed to:

  • Accelerate digital transformation across our marketing technology stack. Technology solutions should grow and change with our business. Last year, in support of shifting customer needs, we announced a strategic change to our retail operations, shifting our sales and marketing approach to focus on online sales.
  • Increase platform agility. Our engineers and developers created our legacy platform, providing a custom design and comprehensive support. While this enabled custom-built functionality across many areas, it meant we had to develop, test, and build new features for the platform with internal development resources. This consumed considerable developer and engineering time, increasing development and maintenance costs.
  • Unify storefronts with a consistent experience. Our goal is for customer experiences across all Microsoft Store digital storefronts to be consistent in branding, imagery, product descriptions, and navigation. However, we had built and grown our legacy system over the years, often under the direction of multiple lines of business, each operating against a different set of rules, compliance risks, and accessibility standards. This led to varied functionality and data standards across our entire solution. We wanted our digital assets centralized, catalogued, and managed under a single interface and a single set of standards.
  • Create a better onboarding experience. Efficient and connected workflows and toolsets are imperative for our employees. Our legacy platform required deep proprietary knowledge that only our existing team members held, which isn't ideal. New employees had to be trained from the ground up on a platform that was completely new to them before they could start working on it. We wanted new employees to be able to use our solutions quickly after onboarding, with minimal custom training or proprietary knowledge requirements.

Microsoft Store digital storefronts with Adobe Experience Cloud and Dynamics 365

To achieve our goals for a reimagined microsoft.com storefront, we’re using two powerful digital-business products. Adobe Experience Cloud provides our Microsoft Store digital storefront experience and melds seamlessly with the back-office capabilities of Dynamics 365 Commerce. Using Adobe Experience Cloud enables us to create scalable, personalized, content-led experiences, including:

  • Core content management. Adobe Experience Manager Sites is our core content-management system. It’s scalable and agile, and our content developers can leverage the same content to quickly produce and publish experiences across all channels. We can supply personalized experiences and use AI to manage and deliver relevant content and suggestions to our customers.
  • Digital-asset management. We use Adobe Experience Manager Assets to classify, catalog, and manage digital assets used across our microsoft.com storefront environment. Adobe Experience Manager’s complete digital asset-management functionality ensures we can automate asset tasks such as tagging and cropping, supply dynamic streaming media, and accurately build and curate our asset library.
  • Customer interaction and targeting. Adobe Target is our primary tool for AI-informed experimentation and personalization based on the entire spectrum of our customers' interactions and preferences. This means we can improve and bolster customer experiences regularly and seamlessly.
  • Marketing campaign management. We use Adobe Campaign to deliver relevant information to our digital storefront customers, using the email channels and frequency that they indicate best suits them.
  • Analytics and reporting. Adobe Analytics enables us to capture important metrics and telemetry across the entire Microsoft Store digital-storefront environment, turning incoming data from our storefronts into actionable insights.

Adobe Experience Cloud's extensibility was critical during our implementation. Many of our business needs and Microsoft Store digital storefront experiences required specific platform parameters and functionality. For those areas in which we require custom functionality to suit specific use cases, our developers can add integration and customization, and we still benefit from a publicly used and commercially available platform that Adobe continually updates and supports. For example, we added custom integrations to support localization with multiple translation services. We're also building automated workflows to reduce errors during the publication process and ensure adherence to organizational standards.

We’re using Dynamics 365 Commerce to unify our in-store, microsoft.com storefront, and back-office capabilities. Commerce provides a complete omnichannel solution that allows us to integrate existing in-store purchasing and shipping with our online stores. Commerce manages the bulk of our customer order-processing capabilities, including:

  • Product and service catalog management. We use Dynamics 365 Commerce to manage our entire product and services catalog for each of our Microsoft Store digital storefronts. Commerce enables us to use shared product definitions, documentation, and attributes across supply chains.
  • Order processing and fulfillment. Dynamics 365 Commerce gives our customers extensive choices for order processing and fulfillment, including delivery and in-store pickup. This integration simplifies processing and fulfillment for our retail business and creates a unified perspective for all order management.
  • Financials and payment services. We use Dynamics 365 Commerce to unify financial-reporting systems across both online and in-store retail transactions. Commerce also allows our customers to track and reuse payment methods across all retail interactions.

Launching Microsoft Store’s new digital storefronts

In April 2021, at our Singapore retail store, we launched Microsoft Store's first digital storefront powered by Adobe Experience Cloud and Dynamics 365. We chose the Singapore location as our pilot project for several reasons. It had a wide range of features found across our stores, so it was a good representation of the situations and considerations we might encounter at our other global locations. Singapore also had lower traffic than many of our other stores, so we could implement the pilot on a smaller scale before expanding to larger customer bases.

An aggressive timeline dictated the Singapore migration, which we needed to complete quickly and cleanly. We used the Adobe Experience Cloud content template and deployment functionality to perform the migration efficiently on our live microsoft.com site for the Singapore store.

Building on the successful launch and positive results from our Singapore store, we shifted our focus to our online retail operation in the United States, which has a much larger traffic footprint and more complex retail scenarios. During this more complex migration, we leveraged the extensibility of Adobe Experience Cloud to streamline the migration and content-creation process, including:

  • Building a page-authoring automation tool that saved more than 6,500 hours of manual content creation needed for the migration.
  • Creating a content parser that crawled our product catalog to find missing content related to accessibility. We then fixed accessibility issues across our entire product catalog, preventing incidents in our production instance and saving hours of engineering effort.

Key Takeaways

We successfully launched the new US microsoft.com storefront in August 2021 and are currently monitoring results. As we plan migration of our remaining stores, we’ll continue to refine our implementation processes and solution framework, powered by the flexible and vital partnership between Adobe Experience Cloud and Dynamics 365. We’re encouraged by our progress and the benefits we’ve experienced so far and anticipate creating new and compelling experiences for more of our customers soon.


The post Building the future of retail with Adobe and Dynamics 365 appeared first on Inside Track Blog.

Understanding our business with app telemetry in Microsoft Azure http://approjects.co.za/?big=insidetrack/blog/understanding-our-business-with-app-telemetry/ Thu, 03 Nov 2022 17:44:53 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=8792 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. Microsoft Digital Employee Experience (MDEE), the organization that is powering, protecting and transforming Microsoft supports a wide variety of apps and services across the organization that are used […]

This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Microsoft Digital Employee Experience (MDEE), the organization that powers, protects, and transforms Microsoft, supports a wide variety of apps and services across the organization that are used to engage with customers, track leads, fulfill goals and objectives, and deliver products and services. The apps we support come from several different sources: Microsoft product-group development teams, third-party vendors, and our development teams within MDEE. While Microsoft Azure has native tools to monitor most application components, our app portfolio's distributed nature makes it difficult to trace a customer's experience along business processes, regardless of the underlying infrastructure. As the reach and responsibility of our app portfolio in Azure increases, so does the need for a more complete, concise picture of our end-to-end business processes.

Striving to serve the customer’s needs

Like most large enterprises, MDEE supports a large and diverse app and service portfolio. Most apps and services have built-in logging and reporting for application, platform, and infrastructure components. However, many of our business processes require application data and business process-specific telemetry to complete the end-to-end user perspective. To effectively track business process dataflow in this environment, we need a standardized method for collecting data. After monitoring solutions collect data in a central location, we aggregate, combine, and manipulate that data to gain insights into user behavior and the end-to-end business process. The figure below presents an example of how application and business process telemetry are collected alongside application, platform, and infrastructure data to capture a complete business process workflow.


An end-to-end business process telemetry example for package tracking, showing a package going through several stages of the process
Capturing end-to-end business process telemetry.

We use telemetry-based data solutions to address several issues and needs that Microsoft employees and business groups report:

  • It can be difficult to get a complete picture of the information that's available without duplicating data or work. Employees use a wide variety of apps, which can lead to overlapping functionality, depending on the customer and product they work with. In some cases, it's difficult for an employee to get 100 percent of the information about a business process without moving in and out of different apps and reporting systems.
  • Our app portfolio is diverse in design. These apps gather information by using different standards and taxonomies, making it difficult to compare and combine data from them in a meaningful way.
  • We want to extend monitoring and data-collection methods to include Azure-specific information and design methods that are pervasive in our app portfolio.
  • We need to track our app portfolio in a holistic manner rather than on an app-by-app basis. We want to examine and understand any business process’s health as it moves from app to app. For example, when a salesperson reviews an open deal opportunity in Microsoft Dynamics 365 for CRM, they must be connected to the details in the Customer Planning app to determine how the opportunity might affect the overall team’s goals or to take actions connected to the opportunity.

Creating a framework for custom telemetry

Telemetry is the first step in the journey to know our customer better. We understand that one of the most important factors in bringing telemetry data together from multiple sources is developing a common taxonomy to identify and label data across multiple systems. Our telemetry taxonomy is composed of a three-part schema, illustrated in the sketch after this list:

  • Part A: System. These fields are defined by and automatically populated by the Logging Library on the local system where events are produced. There might be some fields that the Logging Library would need to get from the caller, but most of the time, the values populate automatically. Examples include Users, ClientDeviceIp, and EventDateTime.
  • Part B: Domain Specific. The different Part B schemas and the fields that they contain are defined by centralized groups. The event fields are populated by code written by the Event Author. The Event Author has no control over the field naming or data types. Examples include PageView and IncomingServiceRequest. These are reviewed and published by the Microsoft data and analytics team.
  • Part C: Custom Schema. These fields are defined by the Event Author, who has complete control over the fields’ naming and data type. We use a NuGet package that customizes this part of the schema. The package provides a telemetry base and the methods required to send data to the Azure App Insights telemetry client in the context of the custom schema.
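
To make the three parts concrete, the sketch below models the schema in TypeScript (mirroring the JavaScript extensions described below). The Part A/B/C split follows the taxonomy above, but any field name not called out in the text is a hypothetical example rather than our production taxonomy.

    // Illustrative three-part telemetry event envelope.
    interface SystemFields {            // Part A: populated by the Logging Library
      eventDateTime: string;
      clientDeviceIp: string;
      user: string;
    }

    interface DomainFields {            // Part B: defined by centralized groups
      eventType: "PageView" | "IncomingServiceRequest";
    }

    interface TelemetryEvent<TCustom> { // Part C: defined by the Event Author
      system: SystemFields;
      domain: DomainFields;
      custom: TCustom;
    }

    // Example event from a hypothetical package-tracking app.
    const event: TelemetryEvent<{ packageId: string; stage: string }> = {
      system: {
        eventDateTime: new Date().toISOString(),
        clientDeviceIp: "10.0.0.1",
        user: "employee-alias",
      },
      domain: { eventType: "IncomingServiceRequest" },
      custom: { packageId: "PKG-12345", stage: "Shipped" },
    };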

The most important aspect of extending the telemetry environment across apps is establishing a common identifier for customers and transactions that exists in all apps. We developed a set of extensions that enabled us to create and maintain a common identifier we refer to as a correlation ID. The correlation ID allows us to pass information between apps for an employee or process so that data pulled from applications can be organized and displayed by that ID.
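
Here is a minimal TypeScript sketch of that propagation, assuming the correlation ID travels in a custom HTTP header; the header name is a placeholder, not the one our extensions actually use.

    // Propagating a correlation ID across service boundaries via an HTTP
    // header. The header name "x-correlation-id" is a hypothetical example.
    import { randomUUID } from "crypto";

    function withCorrelation(headers: Record<string, string> = {}): Record<string, string> {
      // Reuse an incoming ID if one exists; otherwise start a new trace.
      if (!headers["x-correlation-id"]) {
        headers["x-correlation-id"] = randomUUID();
      }
      return headers;
    }

    async function callDownstreamService(url: string, incoming: Record<string, string>) {
      // Every downstream call carries the same correlation ID, so telemetry
      // emitted by each app can later be joined on that ID.
      return fetch(url, { headers: withCorrelation({ ...incoming }) });
    }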

Extending apps for telemetry

To capture data effectively using the prescribed taxonomy, we created extensions to integrate into apps that were being developed. These extensions allowed us to set standards for telemetry data across multiple apps, which made it easier to query data and present it from the perspective of the customer experience. The extensions were created to be business-group agnostic. We’ve used them for sales apps, but they can be integrated into any app.

  • App Insight Extensions. Provides a standard way of propagating the correlation ID across different services to trace business processes that span different service boundaries.
  • Web Extensions. Provides templates to trace business process events and feature-usage events in a standardized way for web applications.
  • JavaScript Extensions. Provides templates to trace business-process events and feature-usage events in a standardized way for JavaScript.

For apps that can’t be extended, logging and telemetry data (log files, database info, and other data sources) are ingested from the application source location. This process is managed and executed by using Azure Data Factory to automate the ingestion process on an app-by-app basis.

Building telemetry in Microsoft Azure

Most of our app and process infrastructure is hosted in Microsoft Azure, so we knew that we wanted a solution that was also Azure based. Azure gives us the advantage of being instantly resilient, scalable, and globally available, and it has several components that we were able to use immediately in our telemetry solution.

Application Insights

Application Insights gives us the ability to monitor sales apps to help us detect and diagnose performance issues and retrieve the most important telemetry data from the sales environment. With Application Insights, we can analyze usage patterns and detect and diagnose performance issues within the sales app stack.

Microsoft Azure Data Factory

Microsoft Azure Data Factory is used to move and transform data. Azure Data Factory (ADF) makes it simple to move data between different sources of telemetry data and the central telemetry repository in Azure Data Lake Storage. We use ADF to transform and analyze incoming telemetry data (from Application Insights blob storage and custom SQL logs, for example) to prepare that data for Data Lake Storage processing and reporting consumption.

Creating the telemetry dataflow

While native telemetry constructs in Azure like Log Analytics and Application Insights can perform telemetry data management, we needed a custom data analytics solution with specialized telemetry and reporting to include our end-to-end business process environment. This solution’s telemetry architecture, represented in the figure below, includes the following components and steps that help to collect and present telemetry data for reporting:

  1. The telemetry extensions are built into apps or used to mine data from apps that don’t support the extension.
  2. The data from apps is pulled into Data Factory as raw data. Data Factory passes the data into Data Lake Storage.
  3. In Data Lake Storage, raw data is converted and transformed by using U-SQL, the native query language for Data Lake Storage, and put into common schema outputs and aggregation outputs.
  4. The data from these outputs is exposed by using U-SQL for consumption by reporting and visualization tools like Microsoft Power Query or Microsoft Power BI (see the U-SQL sketch after this list).
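
To make step 3 concrete, here is a hedged U-SQL sketch of the kind of transformation involved. The file paths and column names are illustrative assumptions, not our actual pipeline.

    // Illustrative U-SQL: convert raw telemetry into a common-schema
    // aggregation output. Paths and columns are hypothetical.
    @raw =
        EXTRACT EventDateTime DateTime,
                CorrelationId string,
                EventName     string
        FROM "/telemetry/raw/events.csv"
        USING Extractors.Csv(skipFirstNRows: 1);

    @aggregated =
        SELECT CorrelationId,
               EventName,
               COUNT(*) AS EventCount
        FROM @raw
        GROUP BY CorrelationId, EventName;

    OUTPUT @aggregated
    TO "/telemetry/curated/event_summary.csv"
    USING Outputters.Csv(outputHeader: true);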


Diagram depicting the telemetry architecture in Microsoft Azure, including Data Ingestion, Data Transformation, and Data Visualization
Telemetry architecture.

Providing accessible and meaningful data

We used several solutions to provide meaningful results to different teams across Microsoft. Microsoft Azure App Insights telemetry aggregation provides the core data for our dashboarding. We use Microsoft Azure Data Explorer dashboards for near real-time reporting and Microsoft Power BI dashboards to help our employees to gain deeper insight into their environment. With Power BI, we can create visualizations to represent data and trends in ways previously unavailable. For example, the graphic below depicts data flowing between several apps. The data flow extends beyond a simple app-to-app relationship to encompass the larger business environment and the 13 apps that the chart represents. Visualizations like this help our teams better understand some of the underlying behaviors and trends that affect their business.

A Microsoft Power BI dashboard chart.
An example of a telemetry dashboard in Microsoft Power BI.

Microsoft Azure Data Explorer

Microsoft Azure Data Explorer is a critical tool for near real-time analysis of our data. By using Data Explorer, our engineers can interactively explore and analyze data to troubleshoot issues, monitor infrastructure, and improve app components and customer experience. We use Azure Data Explorer to examine data in place for the Azure environment by using Kusto query language and Azure Data Explorer dashboards. Kusto query language allows our engineers to quickly access views and insights on live data in whatever format they need, while dashboards enable us to instantly save and visualize query results across engineering and user-experience teams.
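
For example, a short Kusto query like the following sketch, run against the standard Application Insights customEvents table, surfaces hourly event volume. The CorrelationId custom dimension is an assumption based on the taxonomy described earlier.

    // Hourly event volume by event name over the last day, with the
    // correlation ID surfaced from custom dimensions (illustrative).
    customEvents
    | where timestamp > ago(1d)
    | extend CorrelationId = tostring(customDimensions["CorrelationId"])
    | summarize EventCount = count() by name, bin(timestamp, 1h)
    | order by timestamp asc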

Establishing a customer-focused culture

Traditional IT has typically focused on making technology work, while business teams do their best to work with the available tools and provide as much value as possible to their customers. With Azure, the extra time and effort we’ve saved by not having to deploy infrastructure and manage a traditional datacenter allows us to dedicate more resources to innovating and improving app development. Business process-focused telemetry can supply customer-focused insights, and we can access those insights by using built-in Azure tools and dashboards instead of building them first.

These development-process changes have taught us that our engineers can and should approach app development from the customer’s perspective. Our tools help employees know their customers and business better—and we learned more about the business and how to make customer-focused decisions during the development process.

Key Takeaways

We established several best practices while we developed the telemetry solution, including:

  • Scalable data storage is key. With telemetry, we’re collecting massive amounts of data. Some data is queried immediately, and some is queried less often or not at all. Regardless of how we use the data, we need a scalable data storage solution to accommodate the large influx of data.
  • A common schema is important. Consistent taxonomy makes it much easier to correlate data between apps and establish a consistent telemetry environment that provides a complete picture of business data. However, developing the common schema shouldn’t supersede data collection. It’s much easier to establish the schema and clean up data as it’s ingested, but it’s also possible to begin ingesting whatever telemetry or logging data you have, even if you don’t have a common schema. If the data is there, you can always transform it later.
  • Identify the insights that you want to get, then build the visualization. Practical business application is important. Don’t let the format and organization of your data dictate the insights you gain from it. Decide which business insights you want to expose and transform your data and telemetry collection accordingly.

Our telemetry solution for our app portfolio has provided new insights into how we run our business. Using a common schema and telemetry extensions has allowed us to bridge the data gap between our apps and gain a better perspective on our end-to-end business processes. Our employees are better informed and equipped to do their jobs, and we’ve developed a reusable telemetry solution that we can extend to other parts of our business.

The post Understanding our business with app telemetry in Microsoft Azure appeared first on Inside Track Blog.

Adopting Microsoft Azure Resource Manager internally at Microsoft http://approjects.co.za/?big=insidetrack/blog/adopting-azure-resource-manager-for-efficient-cloud-infrastructure-management/ Wed, 02 Nov 2022 22:29:52 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=8735 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. The work we do here at Microsoft is powered by the cloud, something that wouldn’t be possible without adopting Microsoft Azure Resource Manager. We’re the Microsoft Digital Employee […]

This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

The work we do here at Microsoft is powered by the cloud, something that wouldn't be possible without adopting Microsoft Azure Resource Manager.

We're the Microsoft Digital Employee Experience (MDEE) team, and our efforts to power, protect, and transform the company run on Microsoft Azure. As such, we are continually looking for ways to deploy and manage our Azure resources in the most efficient way possible. One of the ways we're doing that is with Microsoft Azure Resource Manager, which we're using to enable agile cloud implementation and management in the midst of our digital transformation. It's also helping us establish standard practices throughout our Azure environment.

Making Microsoft Azure the first choice for IT infrastructure

The cloud-first, mobile-first culture at Microsoft is designed to give our business groups the most effective IT environment possible. For our IT teams, this means being able to create that environment quickly, to the required scale, and in a cost-effective manner. Microsoft has championed a move to the cloud because it gives us the infrastructure we need to power the next generation of business applications. It also elevates collaboration and productivity, enabling our employees to be more successful.

Microsoft Azure is at the core of our cloud infrastructure. We are continually moving applications to the cloud, and Azure is the first choice for new IT solutions that we implement. We currently support the largest public, cloud-based corporate IT infrastructure in the world using Azure. Our Azure environment includes:

  • More than 1,700 active Microsoft Azure subscriptions.
  • More than 1,100 cloud-based applications.
  • More than 15,000 Microsoft Azure virtual machines.
  • More than 18 billion Microsoft Azure Active Directory authentications per week.
  • More than 30 trillion objects stored on Microsoft Azure.

Our business runs on Microsoft Azure, and we are dedicated to increasing our footprint in the cloud, migrating our on-premises infrastructure to Azure. The sheer volume of infrastructure in Azure requires a management solution and approach that allows our IT teams to deploy and manage Azure resources in an efficient and timely manner.

Understanding Microsoft Azure Resource Manager

Microsoft Azure Resource Manager provides the framework for the resources used to create solutions in Microsoft Azure. It gives you a way to deploy and manage Azure resources as a single solution. For example, for an application that tracks sales records, your solution might consist of several Azure virtual machines connected by Microsoft Azure Virtual Networks, along with a variety of other Azure resources.

The Microsoft Azure Resource Manager model involves several important components:

  • Resource groups. A resource group is a container that holds related resources for an application. A resource group can include all of your resources for an application or only those resources that are logically grouped together. You can decide how you want to allocate resources to resource groups based on what makes the most sense for your organization.
  • Role-based access control. You can add users to predefined platform and resource-specific roles and apply those roles to a subscription, resource group, or resource to limit access. For example, you can use the predefined role known as SQL DB Contributor that lets people manage databases, but not database servers or security policies. You add people in your organization that need this type of access to the SQL DB Contributor role and apply the role to the subscription, resource group, or resource.
  • Tags. You can use tags to categorize resources according to your managing or billing requirements. You might want to use tags when you have a complex collection of resource groups and resources that you need to visualize in the way that makes the most sense to you. For example, you can tag resources that serve a similar role in your organization or that belong to the same department. Without tags, people in your organization can create multiple resources that may be very difficult to identify and manage later. For example, you may want to delete all of the resources for a particular project, but if those resources were not tagged for the project, you will have to manually find them.
  • Templates. Templates define the deployment and configuration of your application. Templates provide a declarative way to define deployment. By using a template, you can repeatedly deploy your application throughout the app lifecycle and have confidence that your resources are deployed in a consistent state.
  • Policy. You create policy definitions that describe the actions or resources that are specifically denied in Azure. You assign those policy definitions at the desired scope, such as the subscription, resource group, or an individual resource. For example, you can use policy to control where resources are allowed to be created, or you can control access by allowing only certain types of resources to be provisioned (see the sketch after this list).
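
For instance, a policy definition that restricts where resources can be created looks roughly like the following JSON sketch; the allowed regions are placeholders, not our actual policy.

    {
      "properties": {
        "displayName": "Allowed locations for resource creation",
        "description": "Deny resource creation outside an approved set of regions.",
        "policyRule": {
          "if": {
            "not": {
              "field": "location",
              "in": [ "westus2", "eastus" ]
            }
          },
          "then": {
            "effect": "deny"
          }
        }
      }
    }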

Using these components, the Azure Resource Manager model provides several advantages:

  • You can deploy, manage, and monitor all of the resources for your solution as a group, rather than individually handling these resources.
  • You can repeatedly deploy your solution throughout the development lifecycle and have confidence that your resources are deployed in a consistent state.
  • You can use declarative templates to define your deployment.
  • You can define the dependencies between resources so that they are deployed in the correct order.
  • You can apply access control to all services in your resource group because role-based access control (RBAC) is natively integrated into the management platform.
  • You can apply tags to resources to logically organize all of the resources in your subscription.
  • You can clarify billing for your organization by viewing the rolled-up costs for the entire group or for a group of resources that share the same tag.

Enabling agile cloud infrastructure

Our implementation of Microsoft Azure Resource Manager started with new Microsoft Azure solutions, and we adopted Azure Resource Manager as the default model for all of them. New solutions transitioned smoothly to Azure Resource Manager, while most of our existing Azure infrastructure remained in the Microsoft Azure Service Management model. However, there were cases where we needed a solution to access resources from both models, and we also had to reconcile the way we monitored and managed resources from each model to provide a monitoring and management environment that is as unified as possible.

Using templates to simplify deployment and management

Microsoft Azure Resource Manager templates give you the opportunity to create standardized deployment and management processes that are reusable within the Azure Resource Manager environment. We used templates heavily throughout the Azure Resource Manager deployment process.
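
As a minimal sketch, an Azure Resource Manager template declares resources in JSON. The example below deploys a single tagged storage account; the parameter, tag names, and tag values are illustrative, not our internal standards.

    {
      "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "storageAccountName": {
          "type": "string",
          "metadata": { "description": "Globally unique storage account name." }
        }
      },
      "resources": [
        {
          "type": "Microsoft.Storage/storageAccounts",
          "apiVersion": "2021-09-01",
          "name": "[parameters('storageAccountName')]",
          "location": "[resourceGroup().location]",
          "sku": { "name": "Standard_LRS" },
          "kind": "StorageV2",
          "tags": {
            "service": "MSFT Example Service 1",
            "environment": "test"
          }
        }
      ]
    }

Deploying the same template repeatedly converges the resource group to the declared state, which is what makes template-driven deployment consistent across the app lifecycle.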

To store and deploy the templates, we used a Git repository that publishes automatically to a GitHub repository, which is available externally. We used nested or linked templates that require access to a publicly accessible URI—in this case, GitHub—specifically, raw.githubusercontent.com. These repositories gave us the ability to store our templates centrally, to establish standards, and to provide a location where people could start their own deployments. By using these repositories, people were able to clone their own instance of the repository to their local machine for customization, modification, and testing before publishing changes. GitHub also gave us the ability to take advantage of our internal community contributions to the Azure Resource Manager template ecosystem. The two repositories that we used were:

  • Internal Microsoft Azure Resource Manager Repository. This is where we stored our standard, compliant, and parallel Azure Resource Manager templates that we support. These templates allowed us to specialize deployment for scenarios such as domain-joined virtual machines, standardize configuration of drives and storage, and create and implement common Azure tags on deployed resources.
  • Community Microsoft Azure Resource Manager Repository. This repository was open for internal Microsoft use, but we did not validate or check compliance. This repository allowed people to create their own templates and share them with the rest of the organization so that innovation and Azure Resource Manager development could be shared. This repository does not publish to GitHub.

Key Takeaways

Here are the best practices we learned while implementing Microsoft Azure Resource Manager internally at Microsoft:

  • We established several design and implementation practices that help us to create a consistent and uniform Microsoft Azure Resource Manager environment. With these best practices, we realized significant efficiencies in the deployment and management of Azure Resource Manager–based resources.
  • We established that Microsoft Azure subscriptions would be created for each service and named accordingly. This created a very detailed level of control over Azure resources. At Microsoft, a service is defined as a primary business function. There are currently more than 200 services, which often contain multiple applications and resource groups.
  • We established a services naming convention to provide consistency and continuity. In addition to the name of the service, our naming convention included using the company prefix of “MSFT” and a numeric suffix to allow multiple subscriptions for the same service if it ever becomes necessary. For example:
    • MSFT Customer Relationship Management 1
    • MSFT International Tax 1
    • MSFT Commercial Business Reporting 2
    • MSFT Network and Infrastructure Mgmt IT Solutions 3
  • We recommended that applications have their own dedicated resource group. Resource groups are created within each subscription, and each resource group contains objects or applications that share a similar lifecycle. Resource groups can have specific RBACs assigned—each resource group will have different users assigned different permissions. Virtual machines in a solution must be members of a single resource group, and there will often be multiple virtual machines in each resource group.
  • Use RBAC to establish permissions for groups. In our Microsoft Azure environment, RBAC is designed to establish role permissions for resource groups or subscriptions. We used our Active Directory environment and synchronized with groups in RBAC on Azure. We used the built-in roles to define permission levels and then used those roles to establish permissions for our internal RBAC groups.
    • Reader: Can view all Microsoft Azure components except secrets, but can’t make changes.
    • Contributor: Can manage everything except user access.
    • User Access Administrator: Can manage user access to Azure resources.
  • Using tags provides easy management across many scenarios. Tags were helpful in grouping resources for scenarios outside of the scope of typical application maintenance. We reserved a certain number of tags so that resources could be easily identified by change management database users. We also used tags extensively for tracking compliance and billing, which was useful when compliance or billing-related resources spanned multiple applications or resource groups.
  • Templates were an immense change to the way we created and managed Microsoft Azure resources. By using the JavaScript Object Notation (JSON) templates in Azure Resource Manager, we deployed and managed entire solutions—including virtual machines, virtual networks, and associated resources. By using templates, we defined how and when Azure resources were created or changed. We were also able to create dependencies and implementation requirements for complex applications with large, complex sets of Azure resources. Templates are hosted on GitHub, so people can innovate and develop their own.


The post Adopting Microsoft Azure Resource Manager internally at Microsoft appeared first on Inside Track Blog.

Strengthening communications and improving employee experiences at Microsoft with Yammer http://approjects.co.za/?big=insidetrack/blog/yammer-strengthens-communications-and-improves-employee-experiences/ Mon, 24 Oct 2022 16:23:11 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=8698 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. With more than 150,000 employees and external staff, it’s vital that everyone here at Microsoft can effectively communicate with each other, regardless of time zone or location. Yammer, […]

This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

With more than 150,000 employees and external staff, it's vital that everyone here at Microsoft can effectively communicate with each other, regardless of time zone or location. Yammer, our internal social network, is one of the Microsoft 365 applications that our employees rely on to connect at work.

Teams and individuals at Microsoft have been using Yammer since 2012 and appreciate the collaborative online environment in which colleagues can quickly learn from each other’s ideas and experiences. Leaders use it to provide clarity around goals and strategy, model company values, and build trust with their teams.

Yammer community members can participate in open and asynchronous discussions from anywhere, at times that are best for their schedules and priorities.

Yammer enables cross-company communication

Hundreds of active Yammer communities across Microsoft facilitate a daily flow of conversation that spans social, professional, and technical topics. Depending on a particular community’s purpose, its membership may be open to all Microsoft employees or limited to a specific group or role.

Communications and service managers in Microsoft Digital Employee Experience (MDEE) work together to roll out employee-facing applications, new product features, and changes in IT Systems. MDEE was formerly known as Microsoft Core Services Engineering and Operations (CSEO).

Because conversations are discoverable and persistent, information is easy to share and locate at any time. MDEE communications managers and service managers have found that Yammer helps them better understand global employee sentiment and trends. This provides timely input to product groups on Microsoft products and services before they’re released to customers. It also drives higher adoption, more effective usage of IT services and apps, and increased satisfaction around updates and changes.

One highly active Microsoft Yammer community is Ask MDEE, administered by MDEE communications managers. This community has a broad membership base that includes all Microsoft employees. In this community, anyone can ask questions, get support, connect to expertise and best practices, and provide feedback on any MDEE service.

We include a link to the Ask MDEE Yammer community in all employee emails so it’s easy to access and ask questions. Sometimes MDEE communications managers answer questions themselves, but subject matter experts or employees with experience in the topic often respond, which saves countless tech support hours. Ask MDEE has also decreased the number of per-capita support requests that come in as more people discover common questions and share information.

Community learnings have helped reduce resolution time on many issues. We closely monitor the Ask MDEE Yammer community to ensure that all posts are addressed within 24 hours.

Screen shot of a conversation where an employee is looking for assistance and a community member tagged an expert to assist.
A conversation in the Ask MDEE Yammer community.

Using Yammer to successfully land change

Yammer is a core part of MDEE’s internal change management strategy and helps monitor employee attitudes and engagement. As stewards for change, MDEE communications managers enthusiastically participate in a variety of Yammer communities besides the technology-focused groups they own.

With automated Microsoft Power BI reports based on Yammer comments, MDEE communications managers can quickly spot and analyze emerging trends. When it's time to roll out updates that are likely to have a significant impact on users or operations, we can craft messaging to the audience's needs and adapt it when we find that further explanation is required.

Yammer also helps to close the feedback loop. It lets service managers see exactly what adjustments to make to the service being updated. In contrast to anonymous feedback tools, Yammer enables direct responses among individuals within company usage guidelines.

We have found that advance sentiment measuring helps us proactively address issues that could disrupt a project. While many employees post their own Yammer messages and comment on others’ posts, even more community members simply observe information shared on Yammer, thereby learning more than they would otherwise.

Recently, MDEE communications and service managers rolled out a Microsoft OneDrive for Business feature called Known Folder Move. This changed the way people save files at Microsoft so that they’re always backed up and protected. In addition to communicating the change to employees through targeted email messages, internal websites, and digital signage, we shared the news in Yammer communities.

Employees wanted to save documents and pictures to their file folder system the same way they always have, so the product group and MDEE enabled that functionality. Now, when people save their content via their local drive file path, the content is also saved to Microsoft OneDrive for Business automatically. Deployment proceeded with minimal workflow and employee disruption with the help of consistent messaging about this change across multiple channels.

A screen shot of an Ask MDEE conversation where a product expert is answering an employee's question about Known Folder Move.
Using Yammer feedback, Microsoft adapted the Known Folder Move feature and clarified messaging for employees.

Best practices for Yammer administration

With over seven years of experience using Yammer, we have learned the importance of having a plan for the setup and ongoing management of Yammer communities. MDEE communications managers follow and recommend these seven guidelines.

A picture containing a graphic that illustrates the seven best practices for setting up and managing Yammer communities.
Seven factors to consider for a thriving Yammer community.

Understand business objectives

First, identify the purpose of the Yammer community. This could be a single goal or a combination of several. Success metrics might include the number of active users, how many users posted about a topic, or the number of issue escalations.

At Microsoft, MDEE managers and others find Yammer communities helpful in modeling company values, bettering our understanding of employee sentiment, and building trust and engagement among teams and leadership. It also helps us drive the adoption of available apps and provides an opportunity to share valuable input with product development teams. Yammer has helped accelerate the time-to-resolution on support requests and improve overall user satisfaction around IT services and updates. Some best practices include:

  • Defining and documenting clear goals for the community—start with a broad objective and detail the milestones.
  • Developing consistent messaging and storing common answers in a FAQ.
  • Identifying quantitative and qualitative success metrics and tracking key performance indicators (KPIs).

Identify roles and responsibilities

As part of an overall project plan’s standard operating procedures (SOP) or responsible, accountable, consulted, informed (RACI) framework, project managers should first define the roles, responsibilities, and expectations for community administrators.

Take time to document how each role should engage with employees, including frequency and messaging. Determine which community channels to monitor and assign specific resources to manage them. Employees use the communications channels that work best for them, so be flexible and observe their actions and preferences. Community owners can close or rename channels as needed. Other best practices include:

  • Preparing the community administrators by explaining the escalation plan details in advance.
  • Creating a commitment to respond to posts within a certain timeframe.
  • Getting to know the community by introducing the administrators early and regularly responding to posts.
  • Using hashtags to mark posts for follow-up.

Set expectations with community members

Microsoft provides Yammer participants with specific usage guidelines to help maintain a welcoming, inclusive, and positive space for interaction. Let support staff and leadership know who is on point to respond to user posts, and how often.

Establishing desired response times and posting them in the Yammer group description is often helpful. Employees know when to escalate an issue to other support groups if they need a faster response. It also shows dedication to the customer experience. Consider creating a rotation schedule between team members to prevent confusion when urgent issues arise. Other best practices include:

  • Creating a usage policy that spells out the guidelines and acceptable behavior for community participants.
  • Using Yammer mobile to receive push notifications if a quick response or active listening is required.
  • Setting notification alerts for new posts on topics of interest.
  • Tracking response times against commitments.

We have also identified the following best practices for releasing services changes:

  • Breaking up the audience for program rollouts and measuring escalations by segment.
  • Rolling out changes in waves, depending on the number of users affected.
  • Creating ”pause criteria” to address issues each wave experienced before rolling out the next one.

Determine frequency of engagement

When posts contain misleading or inaccurate information, or there are posts about support questions or process confusion, community owners should direct users to the facts, the Community FAQ, or other support options as quickly as possible.

Listening tools, like Microsoft Power BI monitoring, help community owners act quickly if posts go viral. Responding early is best to quickly clarify issues that can otherwise escalate. After addressing the initial posts, check for any further direct requests for information. Remember, some people like public debate and don’t want or need a response to their post from the community administrators. Additional best practices include:

  • Calling out and correcting posts that become contentious and violate usage guidelines—it’s important to address those behaviors immediately.
  • Posting responses for all to see to help share knowledge and understanding among the community members.
  • Thanking each user for their post and always remaining factual and pleasant.

Have an escalation plan

When forming the project team, include key stakeholders such as the product group, support, or leadership. If a post requires an authoritative response, it can be escalated to one of these experts. Document the escalation paths and identify the subject matter experts to call for help when needed.

If you expect an increased volume of Yammer conversations, contact your corporate communications team early. It’s helpful to monitor topics that generate both negative and positive sentiment—even projects that elicit complaints can be valuable if the response is proactive and benefits the user in the long run. Other best practices include:

  • Checking with subject matter experts to confirm they’re willing to engage with users on Yammer.
  • Pulling experts into threads by @-tagging their alias or sharing the link to a post in need of response.
  • Identifying the tone of posts manually or with sentiment monitoring tools to decide if they require escalation.
  • Engaging in chats about company decisions rather than posting multiple paragraphs in a single response.

Extend reach through strategic connections

Yammer is just one of the communications platforms that can encourage employee engagement and readiness. By collaborating with communications leads in other parts of the organization, community owners can use Yammer to increase awareness of related services and initiatives.

Other key strategic partners might include people in the legal, privacy, or human resources departments. Engage with these stakeholders and corporate communications as early as possible to ensure that messaging aligns with other initiatives. More best practices include:

  • Connecting with other communications leads.
  • Building a network of partnerships to support communication.
  • Checking often to make sure these professional connections are still active and in the same role.

Systematize reporting process and measure KPIs

Every community is different and has unique success metrics. You can quickly analyze traffic and posts by selecting the usage report on the community page. For richer analytics, use Microsoft Power BI.

Tracking and documenting common themes can simplify reporting and inform live discussions. Sharing verbatim posts with leadership can help communicate sentiment and message tone so they’re prepared to answer any questions. Additional best practices include:

  • Being consistent in reporting, keeping focus areas clear, and using tools to show trends over time.
  • Calling out any watch list items in advance of expected Yammer activity.
  • Documenting common themes to help inform programming and employee/leadership Q&A sessions.

Key Takeaways

Yammer has already provided an incredible level of visibility into the everyday experiences of Microsoft employees and external staff in all regions. It’s fostering more compassion, trust, and patience in the company. Yammer has made it easier to bring more voices into decision-making and constructively expose incorrect information and assumptions.

MDEE communications managers continually update employees and community managers on new Yammer experiences as they become available. Recently, MDEE used Yammer to deliver these messages about feature enhancements and additional Yammer functionality:

  • Custom images. Yammer community owners can now personalize their groups with a logo or icon and cover photo.
  • Published posts. In response to feedback from Microsoft employees, the Yammer product team has created several plug-ins, apps, and web parts. You can now publish Yammer posts in other apps such as Microsoft SharePoint, Microsoft Teams, Microsoft Outlook, and Microsoft Stream, so they’re visible within multiple work processes.
  • Yammer mobile app. Yammer is the first external application developed with Microsoft’s new Fluent Design System, and offers a fast, modern interface on the web and on mobile devices.

MDEE plans to keep using Yammer to have meaningful conversations inside the operations team and with people across Microsoft. The service continues to help Microsoft bridge gaps in understanding, address misinformation, and cultivate an atmosphere of open dialogue.

The post Strengthening communications and improving employee experiences at Microsoft with Yammer appeared first on Inside Track Blog.

Sweetening the first day for new Microsoft hires http://approjects.co.za/?big=insidetrack/blog/sweetening-the-first-day-for-new-microsoft-hires/ Wed, 28 Sep 2022 16:30:26 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=4103 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. Productivity is the name of the game at Microsoft. Not surprisingly, Microsoft strives for efficiency in everything it does, including the approach for an employee’s first day. Called […]


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Productivity is the name of the game at Microsoft.

Not surprisingly, Microsoft strives for efficiency in everything it does, including the approach for an employee’s first day.

Called New Employee Onboarding (NEO), the Microsoft onboarding experience is primed for disruption, says Robert Koester, a senior IT service operations manager in Microsoft Digital.

“We’re looking at how much friction there is between showing up for NEO and actually being productive,” Koester says. “We’ve been doing a lot of work to transform this experience.”

When new employees show up for their first day, there are several—OK, more than several—services and items they need to sign up for to get started.

It was too much.

“I noticed that people had to constantly look away, switching from one tool or application to the next,” Koester says. “Navigating through multiple tools and interfaces was counterintuitive, and having to click about everywhere was only adding to the confusion and aggravation.”

Making it worse, new employees were often asked to provide the same information multiple times.

Seizing the opportunity to enhance the new hire experience, Koester led the development of StepBar, a homegrown app that brings all the applications an employee needs to prepare for their first day into one place. “The idea is that procurement could have a laptop waiting, and with StepBar, a new hire could access all of the different tasks they need to accomplish through one simple interface,” Koester says.

With StepBar, signing up for passwords, PINs, emails, and Microsoft Teams is more easily managed. “This allows new employees to collaborate with their colleagues straight away,” he says. “This is something they want to do as quickly as possible.”

First day flavor

It’s a typical Monday morning in Building 92 at the Microsoft main campus in Redmond, Washington. The scents of handcrafted lattes and pastries sweeten the air, as does an air of excited nervousness. Guides ease this week’s new hires through a series of welcoming stations, including an inspiring keynote, laptop setup, and employee registration via StepBar. A Microsoft Teams channel is created for each set of new hires, a kind of gathering place where they can get their questions answered and talk with each other.

Like a lifeguard on the beachside, Koester keeps an eye on those gathered for the day’s session and the Teams channel. For the most part, the NEO village doesn’t need him and is self-sufficient. To him, this points to the success of the new Teams channel.

The work of the Microsoft Digital team and the way it has transformed the onboarding process hasn’t gone unnoticed.

“What Robert and I are trying to do is keep the negative emotions down and keep the experience as positive as possible—and really obsess over and celebrate the new hires,” says Alexis Apostol, a learning and development consultant in Microsoft Global Learning and Development. She manages the company’s onboarding experience from a learning perspective. “The hope is to make new hires feel they can do their best work here.”

Along with wanting to make new employees feel safe and supported in their new workspace, a key goal of onboarding is to show new hires what Microsoft is all about.

“We really are an amazing launchpad for employees to—as we say—empower every person, and every organization on the planet to achieve more,” she says. “We want to show new employees we really live our mission.”

Empowering new hires has tangible, measurable effects too, Apostol says. Data shows that an employee’s onboarding experience can directly affect whether they leave right away or stay long-term. Employees who stay for the long haul benefit the company, while those who depart quickly leave gaps on their teams and typically cost the company money.

Apostol says Microsoft has been striving to make joining the company easier and more intuitive. “Sometimes being new is hard, but at Microsoft new hires can be assured that we’re working tirelessly to transform their onboarding experience,” Apostol says. “We’re streamlining the onboarding experience and we’re also trying to make it fun and exciting.”

For more information on working at Microsoft, visit the Microsoft careers site.

The post Sweetening the first day for new Microsoft hires appeared first on Inside Track Blog.

Powering IoT experiences at Microsoft http://approjects.co.za/?big=insidetrack/blog/powering-iot-experiences-at-microsoft/ Tue, 23 Aug 2022 18:34:04 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=11090 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. Microsoft’s Smart buildings use IoT-driven experiences to make life easier for users. To unify the thousands of IoT devices needed to power these experiences, Microsoft Digital created the […]


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Microsoft’s Smart buildings use IoT-driven experiences to make life easier for employees and guests. To unify the thousands of IoT devices needed to power these experiences, Microsoft Digital created the Digital Integration Platform.

Microsoft is always looking for ways to make employees’ lives more productive and enjoyable. By leveraging IoT sensors, the company can convert real-world data into user experiences, like wayfinding, hotdesking, and room occupancy. But these outcomes rely on thousands of sensors originating from different IoT devices and designed by different suppliers. And no two buildings have the exact same IoT services. Consequently, there is no easy way to power fast and consistent IoT-supported benefits for employees and visitors.

Fortunately, a new approach to integrated IoT device management allows Microsoft to standardize the process, introducing new efficiencies, seamless innovation, and positive outcomes for employees. The Digital Integration Platform (DIP) collects, processes, and shares inputs from all IoT devices and exposes necessary data in a uniform way, making it possible to support IoT-driven employee experiences at scale.

What IoT means for employee experience

As you enter a conference room, an occupancy sensor triggers, signaling a series of services and real-world events.

Based on this activity, a light turns on. Elsewhere, a kiosk marks the room as “in use.” Simultaneously, a colleague looking to reserve a workspace checks for vacancies from a mobile app and sees which conference rooms are currently available.

This ecosystem of sensors, services, and systems working in unison showcases some of the ways Microsoft uses IoT devices to create world-class environments. Microsoft Digital, the organization that powers, protects, and transforms Microsoft, aims to evolve more of its physical spaces into smart buildings: workplaces that combine IoT devices, automation, and the latest in cloud technology to enable modernization. Each connected innovation enhances the various user experiences, giving beneficiaries optimal conditions to stay productive.

Finding the right experiences for IoT

IoT-driven employee experience, like finding the nearest open workspace, is built on a wide array of sensors, devices, and services, all tied to a specific benefit or outcome. Wayfinding is one of many IoT-driven experiences Microsoft relies on to solve user problems or to simply make life easier for users. Productivity, movement, wellness, access—all these pillars are supported by IoT solutions. Microsoft’s Global Workplace Services team (GWS) and Microsoft Digital have a tight partnership to determine which IoT devices are needed to improve building operations and user productivity. By encouraging this dialogue, the company is able to design outcomes that can be supported by IoT. To get there, we conduct surveys, interviews, and research to match pain points to a solution.

Once the experience is mapped out, Microsoft can work with suppliers to identify the devices and sensors that make up the IoT ecosystem. But these IoT devices all come from different suppliers, express insights in different ways, and don’t always work well together.

Unifying the devices that power employee experiences

If there were a single global IoT supplier for devices, everything would be easier.

Due to a variety of circumstances, including building and device age, region, availability, and use, it’s impossible to procure IoT devices from a single source. Since each supplier provides slightly different sensors and devices for tracking similar real-world events, Microsoft Digital needed to create solutions and standards for integrating IoT systems. Thus, the DIP was born.

Consolidating a fragmented system

Before Microsoft had a centralized approach to exposing data with the DIP, hotdesking, occupancy density, and other experiences were a lot more difficult to create. IoT devices are provisioned from a variety of suppliers, which complicates the ecosystem of devices across Microsoft’s global smart buildings.

Each supplier’s device functions in a different way, creating variations in how data is generated and shared with Microsoft. Some suppliers push data to the cloud, and others use APIs that require Microsoft to request the data it needs. GWS can’t simply focus on one complication or challenge; instead, it must regularly adjust based on the IoT system it’s working with.

In order to be vendor agnostic, Microsoft Digital developed the DIP. By abstracting the data coming from the vendor, Microsoft Digital can later expose that data downstream, creating a centralized interface for employee experiences to engage with.
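To make the vendor-agnostic idea concrete, here is a minimal sketch of the adapter pattern it implies: each supplier, whether it pushes events or must be polled, is wrapped so that downstream consumers see one normalized reading shape. The class names, field names, and the polled API client are assumptions for illustration, not the DIP’s actual interfaces.

```python
from abc import ABC, abstractmethod
from typing import Iterable


class SensorAdapter(ABC):
    """Every supplier integration yields the same normalized reading shape."""

    @abstractmethod
    def readings(self) -> Iterable[dict]:
        """Yield dicts like {"sensor_id", "kind", "value", "ts"}."""


class PushSupplierAdapter(SensorAdapter):
    """Wraps a supplier that pushes events to a queue we subscribe to."""

    def __init__(self, queue):
        self.queue = queue

    def readings(self):
        for event in self.queue:  # events drained from a message queue
            yield {"sensor_id": event["deviceId"], "kind": event["type"],
                   "value": event["payload"], "ts": event["enqueuedTime"]}


class PullSupplierAdapter(SensorAdapter):
    """Wraps a supplier whose REST API must be polled for current state."""

    def __init__(self, api_client):
        self.api = api_client  # hypothetical per-supplier client

    def readings(self):
        for device in self.api.list_devices():
            yield {"sensor_id": device["id"], "kind": device["sensorType"],
                   "value": device["lastValue"], "ts": device["lastSeen"]}


def ingest(adapters: list[SensorAdapter]) -> Iterable[dict]:
    """Downstream experiences consume one uniform stream, whatever the source."""
    for adapter in adapters:
        yield from adapter.readings()
```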

Building an integration platform

Instead of managing thousands of different sensors and devices that facilitate wayfinding, hotdesking, and occupancy throughout Microsoft smart buildings and campuses, Microsoft Digital relies on the DIP as an abstraction layer. The DIP gathers data and device telemetry into one place. By building a gateway with components and patterns, Microsoft Digital can enable productivity through common and familiar services—whether they be kiosks, mobile apps, or websites.

The DIP is the glue that ties the physical infrastructure with the digital world, and more specifically, that ties our buildings and IoT devices to our employee experiences. It also brings together the Microsoft services that power the platform, including Azure Digital Twins (ADT), Microsoft 365, Azure Maps, Time Series Insights, and Azure Data Lake.

With ADT, Microsoft Digital is able to create a digital model of the global enterprise, from the largest campus down to the individual occupancy sensor. This model sits at the heart of the DIP and is kept live and up to date through sensor telemetry flowing into IoT Hub.

The DIP supports multiple gateway models to integrate data from disparate subsystems. An IoT Edge gateway hosts Edge Modules bringing in data from previously siloed on-premises devices. Another IoT Edge module is used to integrate HVAC and other data from the Building Management System. Finally, a B2B gateway integrates telemetry from vendors with cloud-hosted infrastructure.
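As a rough illustration of the on-premises gateway pattern, here is a minimal IoT Edge module sketch using the azure-iot-device Python SDK. The output route name, message shape, and normalization logic are assumptions for this sketch, not the actual DIP modules.

```python
import json
import time

from azure.iot.device import IoTHubModuleClient, Message


def main():
    # Reads connection details from the standard IoT Edge environment.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()

    def on_message(message):
        # Re-shape a siloed on-premises reading into a common format
        # before forwarding it upstream toward IoT Hub.
        reading = json.loads(message.data)
        normalized = {"sensorId": reading.get("id"), "value": reading.get("value")}
        # "upstream" is a hypothetical output declared in the deployment manifest.
        client.send_message_to_output(Message(json.dumps(normalized)), "upstream")

    client.on_message_received = on_message

    try:
        while True:  # keep the module alive to receive routed messages
            time.sleep(60)
    finally:
        client.shutdown()


if __name__ == "__main__":
    main()
```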

Once ADT is updated with new data, the platform ensures that the Azure Maps state is updated in real-time and reflected in the employee experiences. Updates also flow into Time Series Insights for real-time visualization and analytics.
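In the azure-digitaltwins-core Python SDK, keeping a twin live as telemetry arrives amounts to applying a JSON Patch to the twin’s properties. A minimal sketch, assuming a hypothetical instance URL, twin ID, and an "occupied" property:

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# Placeholder instance URL; not a real Microsoft endpoint.
client = DigitalTwinsClient(
    "https://contoso-twins.api.wus2.digitaltwins.azure.net",
    DefaultAzureCredential(),
)


def on_occupancy_reading(twin_id: str, occupied: bool):
    # JSON Patch "replace" assumes the property already exists on the twin;
    # use "add" for the first write.
    patch = [{"op": "replace", "path": "/occupied", "value": occupied}]
    client.update_digital_twin(twin_id, patch)


on_occupancy_reading("room-34-1021", True)  # hypothetical twin ID
```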

Figure 1. The Digital Integration Platform serves as a gateway between IoT devices, buildings, and employee experiences.

Managing security across the ecosystem

At Microsoft, security is always the top priority. This extends to IoT devices, where physical and digital security is a crucial selection criterion for GWS. Prior to installation, everything is network and hardware tested at a separate lab, verifying that devices meet Microsoft’s strict standards.

Once introduced into a building, IoT devices are kept isolated on their own network segment as part of Microsoft’s Zero Trust Networking strategy. Network segmentation contains potentially vulnerable IoT devices, keeping the rest of Microsoft safe.

Rethinking how devices are onboarded

Today, onboarding devices across campuses can be time-consuming due to manual testing and supplier variability. In the future, Microsoft Digital will be able to automate device onboarding to the DIP.

Mass adoption of Azure IoT standards, including industry guidelines from RealEstateCore, a consortium focused on technology features in real estate, will further transform IoT onboarding. Once standardized integration is introduced, GWS will be able to install, scan, then onboard IoT devices with less effort. Telemetry will then flow seamlessly from the devices, and engineers will no longer have to spend time building interim gateways to light up employee experiences.

Maintaining system health

In addition to onboarding new devices and sensors, there will be times when IoT systems go out of commission and are no longer supported by the IoT manufacturer. This is true for both existing and new buildings. To keep the IoT environment modern, GWS regularly replaces and upgrades devices.

Installed devices must be checked for compliance, device health, security, and continuing support.

Seamless integration of telemetry will eventually allow Microsoft to analyze IoT system health with automation, testing, and alerting at scale. When an error is detected in a device, a ticket can be created, pointing technicians in the right direction. Since one device can trigger multiple system health alerts, machine learning can be used to help separate and prioritize real issues from noise.
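The article points to machine learning for this triage; as a simple stand-in, here is a minimal heuristic sketch of the same idea: escalate to a ticket only when a device raises repeated alerts within a short window. The thresholds and alert shape are assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)  # assumed look-back window
THRESHOLD = 3                   # assumed alert count before ticketing

recent_alerts: dict[str, list[datetime]] = defaultdict(list)


def should_escalate(device_id: str, at: datetime) -> bool:
    """Return True when a device's alerts look like a real issue, not noise."""
    history = [t for t in recent_alerts[device_id] if at - t <= WINDOW]
    history.append(at)
    recent_alerts[device_id] = history
    return len(history) >= THRESHOLD
```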

By ensuring that devices are maintained, renewed, and that incorrect data does not populate in employee experiences, Microsoft Digital is able to build confidence with users.

Transforming IoT data into employee experiences

With the DIP consolidating inputs through an abstraction layer, Microsoft Digital can now quickly and easily create employee experiences at scale. Within the DIP, the Microsoft Azure Digital Twins and Azure Maps components enable Microsoft Digital to create a consistent look, feel, and environment for users to interface with. This uniformity empowers productivity.

Bringing everything to life with Microsoft Azure Digital Twins and Azure Maps

Microsoft Azure Digital Twins allows Microsoft Digital to represent physical spaces and devices in a digital environment. Everything from temperature sensors and air quality sensors to cameras and occupancy sensors can be represented in Azure Digital Twins. The first stage of developing an employee experience is to build the digital twin. Digital Twin Definition Language (DTDL), part of the Azure Digital Twins modeling platform, helps describe the physical spaces where users will need a wayfinding application to help navigate a campus.
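Concretely, a DTDL interface is a JSON document. The sketch below defines a hypothetical conference-room model and registers it with the azure-digitaltwins-core Python SDK; the model ID and properties are placeholders, not Microsoft’s production models, which build on ontologies like RealEstateCore.

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

# A minimal DTDL v2 interface for a room with an occupancy property
# and a temperature telemetry stream. Illustrative IDs only.
room_model = {
    "@id": "dtmi:example:ConferenceRoom;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Conference room",
    "contents": [
        {"@type": "Property", "name": "occupied", "schema": "boolean"},
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
    ],
}

client = DigitalTwinsClient(
    "https://contoso-twins.api.wus2.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)
client.create_models([room_model])
```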

Integration between Azure Digital Twins and Azure Maps allows Microsoft Digital to visualize data. Once a physical space is represented, it’s possible to funnel information in from Azure IoT Hub.

After data from the DIP is connected, Microsoft Digital can enable employee experiences. This ecosystem of IoT devices and platforms also supports monitoring efforts, empowering Microsoft Digital to easily maintain system health at scale.
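An experience like finding a vacant conference room then reduces to a query against the twin graph. A minimal sketch, reusing the hypothetical model and instance URL from the sketch above:

```python
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

client = DigitalTwinsClient(
    "https://contoso-twins.api.wus2.digitaltwins.azure.net",  # placeholder
    DefaultAzureCredential(),
)

# Azure Digital Twins uses a SQL-like query language over the twin graph.
query = (
    "SELECT * FROM digitaltwins "
    "WHERE IS_OF_MODEL('dtmi:example:ConferenceRoom;1') "
    "AND occupied = false"
)
for twin in client.query_twins(query):
    print(twin["$dtId"], "is available")
```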

Controlling access

Does a user controlling the temperature in a room with an app have permission to do this? And how are they making this change? Through valid code? That’s one of the challenges Microsoft Digital has to address as more and more experiences roll out across campuses.

The objective is for employee experiences to be available to the right users at the right time, and controlling access keeps Microsoft and its users safe.

The future is a modern campus

No longer restricted by a complex IoT ecosystem, Microsoft Digital can now service employee experiences at scale, irrespective of the devices and sensors that facilitate those user benefits. Deploying experiences can be done through a standardized framework where devices can easily be monitored and maintained for security updates and device health.

All of this investment creates stronger employee experiences, but there’s a secondary benefit as well: it improves products like Azure Digital Twins and Azure Maps.

More smart buildings to come

Microsoft continues to develop and expand its real estate properties, including new smart buildings and upgrading existing infrastructure to meet the expectations of employees and visitors. While the scope and turnaround time of keeping pace is a challenge, Microsoft Digital sees an opportunity to install devices, solutions, and experiences that will support modern technology and innovation for the long run.

Different buildings will have different capabilities based on age, operating scenarios, and leased versus owned properties, but Microsoft Digital and GWS want users to have positive outcomes. These efforts translate to engagement and increased productivity, with devices and experiences being launched and updated at scale.

Encouraging a standardized way to create employee experiences

The size and scope of Microsoft’s smart campuses, specifically how they integrate IoT devices to make lives more productive and enjoyable, is informing the way suppliers develop new technologies. Partnerships with industry groups, including RealEstateCore, and innovation throughout Microsoft buildings help support industry standards, reducing the burden on other organizations that rely on an abstraction layer for integration. RealEstateCore has launched DTDL-based models, establishing common ground for IoT suppliers and smart buildings. As more companies develop IoT systems that expose data in a uniform way, vendors will be able to better integrate with each other, allowing enterprises to build even greater employee experiences.

Becoming less reliant on the Digital Integration Platform

The DIP gives Microsoft Digital a way to expose data and work with disparate devices to create wayfinding, occupancy, hotdesking, and other employee experiences.

The post Powering IoT experiences at Microsoft appeared first on Inside Track Blog.

How ‘born in the cloud’ thinking is fueling Microsoft’s transformation http://approjects.co.za/?big=insidetrack/blog/born-in-the-cloud-thinking-is-fueling-microsofts-transformation/ Wed, 22 Jun 2022 21:05:03 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=8196 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. Microsoft wasn’t born in the cloud, but soon you won’t be able to tell. Now that it has finished “lifting and shifting” its massive internal workload to Microsoft […]


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Microsoft wasn’t born in the cloud, but soon you won’t be able to tell.

Now that it has finished “lifting and shifting” its massive internal workload to Microsoft Azure, the company is rethinking everything.

“We’re rearchitecting all of our applications so that they work natively on Azure,” says Pete Apple, principal service engineer on the Microsoft Digital Employee Experience team. “We’re retooling to take advantage of all that the cloud has to offer.”

Microsoft spent the last eight years moving the internal workload of its 60,000 on-premises servers to Azure. Thanks to early efforts to modernize some of that workload while migrating it, and to ruthlessly removing everything that wasn’t being used, the company is now running about 8,500 virtual machines in Microsoft Azure. This number dynamically scales up to around 10,000 virtual machines when the company is processing extra work at the end of months, quarters, and years. Fewer than 300 virtual machines remain on premises, most of them intentionally kept there to support physical labs. The company is now 99 percent in the cloud.

Now that the company’s cloud migration is done and dusted, it’s Apple’s job to craft a framework for transforming Microsoft into a born-in-the-cloud company. Microsoft Digital will then use that framework to retool all the applications and services that the organization uses to provide IT and operations services to the larger company.

The job is bigger than building a guide for how the company will rebuild applications that support Human Resources, Finance, and so on. Apple’s team has created a roadmap for rearchitecting those applications in a consistent, connected way that focuses on the end-user experience. It’s also figuring out how to get the more than 3,000 engineers in Microsoft Digital Employee Experience who will rebuild those applications to embrace the modern engineering–fueled cultural shift this transformation requires.

[Take a deep dive into the learnings, pitfalls, and compromises of Microsoft’s expedition to the cloud. Discover implementing Azure cost optimization for the enterprise. Explore how Microsoft is modernizing enterprise integration services using Azure.]

Move to the cloud creates transformation opportunity

Despite good work by good people, Microsoft Digital’s engineering model wasn’t ready to scale to the demands of Microsoft’s growth and how fast its internal businesses were evolving. Moving to the cloud created the perfect opportunity to fix it.

“In the past, every project we worked on was delivered pretty much in isolation,” Apple says. “We operated very much as a transaction team that worked directly for internal customers like Finance and HR.”

Microsoft Digital engineering was done externally through vendors who were not connected or incentivized to talk to each other. They would take their orders from the business group they were supporting, build what was asked for, get paid, and move on to the next project.

“We would spin up a new vendor team and just get the project done—even if it was a duplication or a slight iteration on top of another project that already had been delivered,” he says. “That’s how we ended up with a couple of invoicing systems, a few financial reporting systems, and so on and so forth.”

Lack of a larger strategy prevented Microsoft Digital from building applications that made sense for Microsoft employees.

This made for a rough user experience.

“Each application had a different look and feel,” Apple says. “Each one had its own underlying structure and data system. Nothing was connected and data was replicated multiple times, all of which would create challenges around privacy, security, data freshness, etc.”

The problem was simple—the team wasn’t working against a strategy that let it push back at the right moments.

“The word that the previous IT organization never really used was ‘no,’” Apple says. “They felt like they had no choice in the matter.”

When moving to the cloud opens the door to transformation

The story is different today. Now Microsoft Digital has its own funding and is choosing which projects to build based on a strategic vision that outlines where it wants to take the company.

“The conversation has completely shifted, not only because we have moved things to the cloud, but because we have taken a single, unified data strategy,” Apple says. “It has altered how we engage with our internal customers in ways that were not possible when everything was on premises and one-off.”

Now Microsoft Digital engineers are working in much smarter ways.

“We now have agility around operating our internal systems that we could never have fathomed achieving on prem,” he says. “Agility from the point of view of elasticity, from the point of view of releases, of understanding how our workloads are being used and deriving insights from these workloads, but also agility from the point of view of reacting and adapting to the changing needs of our internal business partners in an extremely rapid manner because we have un-frictioned access to the data, to the signals, and to the metrics that tell us whether we are meeting the needs of our internal customers.”

And those business groups who unknowingly came and asked for something Microsoft Digital had already built?

“We now have an end-to-end view of all the work we’re doing across the company,” Apple says. “We can correlate, we can match the patterns of issues and problems that our other internal customers have had, we can show them what could happen if they don’t change their approach, and best of all, we can give them tips for improving in ways they never considered.”

Microsoft Digital’s approach may have been flawed in the past, but there were lots of good reasons for that, Apple says. He won’t minimize the work that Microsoft Digital engineers did to get Microsoft to the threshold of digitally transforming and moving to the cloud.

“The skills and all of the things that made us successful as an IT organization before we started on a cloud journey are great,” he says. “They’re what contributed to building the company and operating the company the way we have today.”

But now it’s time for new approaches and new thinking.

“The skills that are required to run our internal systems and services today in the cloud, those are completely different,” he says.

As a result, the way the team operates, the way it interacts, and the way it engages with its internal customers have had to evolve.

“The cultural journey that Microsoft Digital has been on is happening in parallel with our technical transformation,” Apple continues. “The technical transformation and the cultural transformation could not have happened in isolation. They had to happen in concert, and to a large extent, they fueled each other as we arrived at what we can now articulate as our cloud-centric architecture.”

And about that word that people in Microsoft Digital were afraid to say? They’re saying it now.

“The word ‘no’ is now a very powerful word,” Apple says. “When a customer request comes in, the answer is ‘yes, we’ll prioritize it,’ or ‘no, this isn’t the most important thing we can build for the company from a ROI standpoint, but here’s what we can do instead.’”

The change has been empowering to all of Microsoft Digital.

“The quality and the shape of the conversation has changed,” he says. “Now we in Microsoft Digital are uniquely positioned to take a step back and say, ‘for the company, the most important thing for us to prioritize is this, let’s go deliver on it.’”


The post How ‘born in the cloud’ thinking is fueling Microsoft’s transformation appeared first on Inside Track Blog.

Shaping Microsoft’s new campus of the future with user-centric design http://approjects.co.za/?big=insidetrack/blog/shaping-microsofts-new-campus-of-the-future-with-user-centric-design/ Thu, 16 Jun 2022 22:39:35 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=8179 This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft. Those planning a visit to Microsoft’s transformed campus in Redmond, Washington, will witness the early stages of a seamless employee and guest experience, one made possible thanks to […]


This content has been archived, and while it was correct at time of publication, it may no longer be accurate or reflect the current situation at Microsoft.

Those planning a visit to Microsoft’s transformed campus in Redmond, Washington, will witness the early stages of a seamless employee and guest experience, one made possible thanks to user-centric design.

This design philosophy puts the user—an employee or a guest—at the heart of every decision and aligns all the facility’s services—physical and digital—to the needs of people. With this approach, seemingly mundane events and tasks that might normally cause friction in a person’s day are smoothed out as much as possible.

How are we doing this?

By turning the broad and diverse array of microtasks—the numerous operations of varying effort that allow you to accomplish the larger and more impactful parts of your job—into a consistent and logical end-to-end experience.

With us moving to a new campus, there’s a way to make it more connected, consistent, and accessible. By being user-centric, we allow campus to work for the user rather than the user having to work around the campus. People can be efficient and do the things they need to do.

—Dave Crawford, director of product design, Microsoft Digital Employee Experience

This new experience is taking shape at Microsoft’s transformed 72-acre east campus, which, when it opens in late calendar year 2023, will feature 17 new buildings, a 2.8 million square foot underground parking garage, and three athletic fields. At this major new part of our headquarters, we will employ user-centric design to smartly connect the normally disconnected services a person uses to get through a typical day. By doing this, we’re transforming mundane tasks into experiences that will make it worth a trip to the new campus.

[Learn about Microsoft’s upgraded transportation experience in Puget Sound. Find out how Microsoft’s dining transformation is easing its employees transition back working in the office. Explore how Microsoft is reinventing employee experience for a hybrid world. Discover how Microsoft’s campus enables navigation with new IoT technology and indoor mapping.]

Making our day seamless

Our daily lives are filled with microtasks, including getting to work, finding a parking spot, tracking down a meeting across campus, and ordering lunch. All of these tasks seem menial, but they’re still a necessary part of your day.

And it all adds up, which is why we took a different approach for our latest project.

“With us moving to a new campus, there’s a way to make it more connected, consistent, and accessible,” says Dave Crawford, director of product design with Microsoft Digital Employee Experience, the organization that powers, protects, and transforms the company. “By being user-centric, we allow campus to work for the user rather than the user having to work around the campus. People can be efficient and do the things they need to do.”

Microtasks are often filled with pain points, little headaches that require a tad more brain power than we’d like to expend. For example, you may have one system to buy lunch, a different one to locate a meeting room, and a third for reserving a Microsoft Connector bus that gets you from home to the office and back.

This myriad of services can become a bit much, especially for new employees or visitors.

“It used to be ‘Take a building, fill it with nice things, and people will figure it out,’” Crawford says. “That’s not how it works anymore. Mobile consumer apps have made our personal lives much easier—be that finding a ride, navigating the world, or ordering food. We expect that kind of convenience and simplicity at work now as well.”

Rather than leverage disconnected systems, user-centric design tries to reduce the burden by introducing consistent and logical flow between services. This makes it easier for people to access services, learn how to use them, and then actually put them to good use.

We’re not focused on one tool; we’re looking at the overall experience. What should we adjust to make the whole day better overall? Not just one part. We found problems at all the different tasks that lead to an effective day.

—Greg Saul, UX designer, Microsoft Digital Employee Experience

It’s the same reason why Microsoft embraces coherent design across its products, where a similar look and feel, along with familiar usage patterns, empowers quick adoption. User-centric design enabled Microsoft to look at the journey a person takes as they travel to and across a Microsoft campus, then apply the same coherent design principles. The end result was consistency among different systems, including interfaces with enough in common to make campus services easy to learn and use.

But user-centric design also means building out new backend experiences to support these services.

“There are so many scenarios and surfaces to account for; we have to maintain a realistic vision,” Crawford says. “It’s not as simple as an app with a list of dishes, we need to consider the entire end-to-end experience, from the backend to enter the menu data, to browsing and showcasing, all the way to payment. We have the technology, but we also have to make it approachable and usable.”

What an end-to-end experience looks like

Elevating a visit to Microsoft’s new campus means giving people fast, efficient, and seamless experiences. This includes apps for mobile, kiosks decked out with core experiences, and websites, all with the same functionality, interface, and services.

We will have a 6,500-stall underground parking garage with 17 buildings above it. The space is so big and there’s no line of sight, so you’ll need to use digital tools to find your car or to make sure you come up to the surface at the right location.

—Paul Egger, regional digital transformation lead, Microsoft Real Estate and Facilities

“We’re not focused on one tool; we’re looking at the overall experience,” says Greg Saul, a UX designer with Microsoft Digital Employee Experience. “What should we adjust to make the whole day better overall? Not just one part. We found problems at all the different tasks that lead to an effective day.”

The team initially conducted extensive global research on the journeys employees take throughout their workday, which revealed areas of the employee experience within our physical buildings that could be greatly improved and enhanced by overlaying digital elements.

“We conducted a series of studies leveraging many, many research methodologies—these ranged from focus groups, diary studies, surveys, and interviews with hundreds of employees across the globe,” says Ashley Graham, director of user research with Microsoft Digital Employee Experience. “Our research showcased numerous ways we could improve our employees’ daily journeys in the workplace through digital tools that could be integrated in services and buildings that our employees and visitors use.”

This research then enabled the team to assemble a roadmap to ensure employees and visitors can achieve their goals when they visit our campus. That roadmap is a prioritized list of everything that an employee or guest is going to engage with, in order to take care of the things they came to the campus to do.

And the roadmap starts with a user’s trip to campus.

“We will have a 6,500-stall underground parking garage with 17 buildings above it,” says Paul Egger, a regional digital transformation lead with Microsoft Real Estate and Facilities, the organization responsible for managing and operating the buildings and services across Microsoft. “The space is so big and there’s no line of sight, so you’ll need to use digital tools to find your car or to make sure you come up to the surface at the right location.”

We’re all used to using apps for mobility and food. We want that for Microsoft employees as well. We want a shared experience that can be leveraged across different campuses without having to re-learn the UI.

—Suma Uppuluri, principal group engineering manager, Microsoft Digital Employee Experience

Sensors will keep track of stall availability on each floor, directing users to open spots. Once you’re parked, an app on your phone will know where your car is. In the future, the mobile app will integrate with your Outlook calendar to locate the best area for parking relative to where you’re trying to go.

For those who take Connectors and shuttles into the office or to move around our campuses, new systems for booking trips to and around campus have been deployed. This new ride reservation system will have the consistent look and feel of other services and can be accessed from a variety of endpoints, including kiosks in every lobby. Digital signage alerts riders of the next arriving vehicle, which takes the confusion and stress out of getting around campus.

“We’re all used to using apps for mobility and food,” says Suma Uppuluri, a principal group engineering manager responsible for movement and wellness with Microsoft Digital Employee Experience. “We want that for Microsoft employees as well. We want a shared experience that can be leveraged across different campuses without having to re-learn the UI.”

And once you’re on campus, how do you know where to go?

In the past, the campus might have felt like a maze of numbered buildings that must be navigated by signage alone. It was a common source of stress and a wasted use of energy.

Pathfinding across Microsoft’s new campus will also be part of this seamless user-centric design experience, where mobile apps and kiosks can direct you where to go, inform you if the person you’re meeting with has arrived, and help guests check in on their own.

“If they’re late, you can’t start the meeting,” Saul says. “If every meeting is delayed ten minutes, it’s harder for anyone to get anything done.”

And, as the team is designing these apps and kiosks, it collects feedback from employees along the way. Design and product management partner closely with user research to test and evaluate early-stage prototypes and concepts with employees to further refine them and help ensure a usable and useful campus experience is delivered.

User-centric design touches upon every aspect of a person’s day. Meals can be ordered ahead so that they’re ready when you are, workspaces can be easily booked, and when you’re done for the day, the same systems that got you to campus will help you get home.

A seamless future at Microsoft

Transforming a variety of services into a seamless end-to-end experience meant bringing together stakeholders from across Microsoft to align on a single mission.

“It’s about all the teams building a vision together and creating a connected experience,” Saul says. “Approaching this as one team is helping us solve the user’s problem.”

These new implementations will also make life better for the people who own and operate the services. The reduced occupancy on campus due to the pandemic and flexible hybrid work environment gave Microsoft an opportunity to rethink the technology and bring in new efficiencies.

“We’re returning to the office from a fully digital remote experience,” Uppuluri says. “Everyone is looking to see how our new hybrid work experience will match up too.”

Early on, the results are good. “What we’re working toward is a best-in-class workplace that takes the best elements of working remotely and working in the office,” she says.

Every day, new experiences that improve campus life are being deployed across Microsoft, empowering employees to do more while reducing the burden of the microtasks they have to complete.

This will be one of the reasons people choose hybrid over fully remote.

“You should like being at Microsoft,” Egger says. “We should take the pressure and thought of being here away so that you can focus on the things you want to.”

As Microsoft moves forward, user-centric design choices made on the new campus will be deployed at other campuses, allowing Microsoft to scale great experiences for employees and visitors everywhere.

Key Takeaways

  • Before you get started, use user research to understand the pain points and obstacles that prevent your people from being productive and happy in their daily work. This will help you understand employees and not make assumptions.
  • User-centric design puts people and their journey at the center of your decisions. Map out what a person’s day might look like, tell the story of which microtasks they encounter on their way to impactful efforts, and determine how to minimize the burden.
  • It also means soliciting feedback from users. Don’t assume the experiences you create solve the burden without first engaging with the people who will be interacting with them. Once you understand what their needs are, iterate before making final decisions on the user experience you build.
  • A project of this size requires several stakeholders across a variety of services. Spend time aligning on the vision; one team being in disagreement will disrupt the entire approach. Using an established roadmap can help here.
  • Accessibility is paramount to user-centric design. Make sure you’re building your services with everyone in mind.


The post Shaping Microsoft’s new campus of the future with user-centric design appeared first on Inside Track Blog.
