security Archives | Inside Track Blog | How Microsoft does IT

Getting the most out of generative AI at Microsoft with good governance
http://approjects.co.za/?big=insidetrack/blog/getting-the-most-out-of-generative-ai-at-microsoft-with-good-governance/
Fri, 01 Nov 2024


Since generative AI exploded onto the scene, it’s been unleashing our employees’ creativity, unlocking their productivity, and up-leveling their skills.

But we can fly into risky territory if we’re not careful. The key to protecting the company and our employees from the risks associated with AI is adopting proper governance measures based on rigorous data hygiene.

Technical professionals working within Microsoft Digital, our internal IT organization, have taken up this challenge. They include the AI Center of Excellence (AI CoE) team and the Microsoft Tenant Trust team that governs our Microsoft 365 tenant.

Since the widespread emergence of generative AI technologies over the last year, our governance experts have been busy ensuring our employees are set up for success. Their collaboration helps us ensure we’re governing AI through both guidance from our AI CoE and a governance model for our Microsoft 365 tenant itself.

[Learn how Microsoft is responding to the AI revolution with a Center of Excellence. | Discover transforming data governance at Microsoft with Purview and Fabric. | Explore how we use Microsoft 365 to bolster our teamwork.]

Generative AI presents limitless opportunities—and some tough challenges

Next-generation AI’s benefits are becoming more evident by the day. Employees are finding ways to simplify and offload mundane tasks and focus on productive, creative, collaborative efforts. They’re also using AI to produce deeper and more insightful analytical work.

“The endgame here is acceleration,” says David Johnson, a tenant and compliance architect with Microsoft Digital. “AI accelerates employees’ ability to get questions answered, create things based on dispersed information, summarize key learnings, and make connections that otherwise wouldn’t be there.”

There’s a real urgency for organizations to empower their employees with advanced AI tools—but they need to do so safely. Johnson and others in our organization are balancing the desire to move quickly against the need for caution with technology that hasn’t yet revealed all the potential risks it creates.

“With all innovations—even the most important ones—it’s our journey and our responsibility to make sure we’re doing things in the most ethical way,” says Faisal Nasir, an engineering leader on the AI CoE team. “If we get it right, AI gives us the power to provide the most high-quality data to the right people.”

But in a world where AI copilots can comb through enormous masses of enterprise data in the blink of an eye, security through obscurity doesn’t cut it. We need to ensure we maintain control over where data flows throughout our tenant. It’s about providing information to the people and apps that have proper access and insulating it against ones that don’t.

To this end, our AI CoE team is introducing guardrails that ensure our data stays safe.

Tackling good AI governance

The AI CoE brings together experts from all over Microsoft who work across several disciplines, from data science and machine learning to product development and experience design. They use an AI 4 ALL (Accelerate, Learn, Land) model to guide our adoption of generative AI through enablement initiatives, employee education, and a healthy dose of rationality.

“We’re going to be one of the first organizations to really get our hands on the whole breadth of AI capabilities,” says Matt Hempey, a program manager lead on the AI CoE team. “It will be our job to ensure we have good, sensible policies for eliminating unnecessary risks and compliance issues.”

As Customer Zero for these technologies, we have a responsibility for caution—but not at the expense of enablement.

“We’re not the most risk-averse customer,” Johnson says. “We’re simply the most risk-aware customer.”

The AI CoE has four pillars of AI adoption: strategy, architecture, roadmap, and culture. As an issue of AI governance, establishing compliance guardrails falls under architecture. This pillar focuses on the readiness and design of infrastructure and services supporting AI at Microsoft, as well as interoperability and reusability for enterprise assets in the context of generative AI.

Operational pillars of the AI Center of Excellence

We’ve created four pillars to guide our internal implementation of generative AI across Microsoft: strategy, architecture, roadmap, and culture. Our AI certifications program falls under culture.

Building a secure and compliant data foundation

Fortunately, Microsoft’s existing data hygiene practices provide an excellent baseline for AI governance.

There are three key pieces of internal data hygiene at Microsoft:

  1. Employees can create new workspaces like Sites, Teams, Groups, Communities, and more. Each workspace features accountability mechanisms for its owner, policies, and lifecycle management.
  2. Workspaces and data get delineated based on labeling.
  3. That labeling enforces policies and provides user awareness of how to handle the object in question.

With AI, the primary concern is ensuring that we properly label the enterprise data contained in places like SharePoint sites and OneDrive files. AI will then leverage the label, respect policies, and ensure any downstream content-surfacing will drive user awareness of the item’s sensitivity.
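To make the label-driven model above concrete, here’s a minimal sketch of how a sensitivity label on a content item can gate whether an AI tool surfaces it. The label names, ranking, and `may_surface` function are illustrative assumptions for this sketch, not the actual Microsoft Purview implementation:

```python
from dataclasses import dataclass

# Hypothetical label taxonomy ordered by sensitivity; real tenants define
# their own labels and policies in Microsoft Purview.
LABEL_RANK = {"public": 0, "general": 1, "confidential": 2, "highly confidential": 3}

@dataclass
class ContentItem:
    path: str
    label: str  # sensitivity label applied at the file or container level

def may_surface(item: ContentItem, user_clearance: str) -> bool:
    """Return True if AI-surfaced results may include this item for the user."""
    return LABEL_RANK[item.label] <= LABEL_RANK[user_clearance]

doc = ContentItem(path="/sites/finance/forecast.xlsx", label="confidential")
print(may_surface(doc, "highly confidential"))  # True
print(may_surface(doc, "general"))              # False
```

The point of the sketch is that once labels exist and are trustworthy, downstream enforcement reduces to a simple comparison; the hard work is the labeling itself.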

AI will always respect user permissions to content, but that assumes source content isn’t overshared. Several different mechanisms help us limit oversharing within the Microsoft tenant:

  1. Using site labeling where the default is private and controlled.
  2. Ensuring every site with a “confidential” or “highly confidential” label sets the default library label to derive from its container. For example, a highly confidential site will mean all new and changed files will also be highly confidential.
  3. Enabling company shareable links (CSLs) like “Share with People in <name of organization>” on every label other than those marked highly confidential. That means content shared through default links will only surface for its direct recipients in search and in the results employees get from using Copilot.
  4. All Teams and sites have lifecycle management in place where the owner attests that the contents are properly labeled and protected. This also removes stale data from AI.
  5. Watching and addressing oversharing based on site and file reports from Microsoft Graph Data Connect.
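The oversharing checks above can be approximated in code. This sketch flags sites whose default sharing scope is too broad for their label, loosely in the spirit of the site and file reports the article mentions pulling from Microsoft Graph Data Connect; the report rows, link types, and policy table are all hypothetical:

```python
# Hypothetical site-report rows; real data would come from tenant reports.
site_report = [
    {"site": "hr-benefits", "label": "confidential", "default_link": "anyone"},
    {"site": "team-social", "label": "general", "default_link": "organization"},
    {"site": "ma-deals", "label": "highly confidential", "default_link": "specific-people"},
]

# Policy sketch: confidential or higher must not default to org-wide or
# anonymous links, matching the CSL guidance described above.
ALLOWED_LINKS = {
    "general": {"anyone", "organization", "specific-people"},
    "confidential": {"specific-people"},
    "highly confidential": {"specific-people"},
}

def flag_oversharing(rows):
    """Return site names whose default link scope violates their label's policy."""
    return [r["site"] for r in rows
            if r["default_link"] not in ALLOWED_LINKS[r["label"]]]

print(flag_oversharing(site_report))  # ['hr-benefits']
```

A periodic sweep like this turns oversharing from something discovered after an incident into something surfaced on a schedule.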

Microsoft 365 Copilot respects labels and displays them to keep users informed of the sensitivity of the response. It also respects any rights management service (RMS) protections that block content extraction on file labels.

If the steps above are in place, search disablement becomes unnecessary, and overall security improves. “It isn’t just about AI,” Johnson says. “It’s about understanding where your information sits and where it’s flowing.”

From there, Copilot and other AI tools in question can then safely build a composite label and attach it to its results based on the foundational labels it used to create them. That provides the context it needs to decide whether to share its results with a user or extend them to a third-party app.
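One way to think about the composite label described above is as the most restrictive label among the grounding documents. This is a simplified sketch under that assumption; the label names and ranking are illustrative, not the product’s actual logic:

```python
# Hypothetical ordering of labels by restrictiveness.
LABEL_RANK = {"general": 0, "confidential": 1, "highly confidential": 2}

def composite_label(source_labels):
    """Derive a response label from the labels of the documents used to build it.

    Assumes the composite should be at least as restrictive as the most
    sensitive source, so a single confidential input marks the whole result.
    """
    return max(source_labels, key=lambda label: LABEL_RANK[label])

sources = ["general", "confidential", "general"]
print(composite_label(sources))  # 'confidential'
```

Attaching that composite label to the generated result gives the downstream decision (show the user, extend to a third-party app, or withhold) a single value to evaluate.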

From left to right, David Johnson, Faisal Nasir, Matt Hempey, and Keith Bunge are among those working together here at Microsoft to ensure our data estate stays protected as we adopt next-generation AI tools.

“To make the copilot platform as successful and securely extensible as possible, we need to ensure we can control data egress from the tenant,” says Keith Bunge, a software engineering architect for employee productivity solutions within Microsoft Digital.

We can also use composite labels to trigger confidential information warnings to users. That transparency provides our people with both agency and accountability, further cementing responsible AI use within our culture of trust.

Ultimately, AI governance is similar to guardrails for other tools and features that have come online within our tenant. As an organization, we know the areas we need to review because we already have a robust set of criteria for managing data.

But since this is a new technology with new functionality, the AI CoE is spending time conducting research and partnering with stakeholders across Microsoft to identify potential concerns. As time goes on, we’ll inevitably adjust our AI governance practices to ensure we’re meeting our commitment to responsible AI.

“Process, people, and technology are all part of this effort,” Nasir says. “The framework our team is developing helps us look at data standards from a technical perspective, as well as overall architecture for AI applications as extensions on top of cloud and hybrid application architecture.”

As part of getting generative AI governance right, we’re conducting extensive user experience and accessibility research. That helps us understand how these tools land throughout our enterprise and keep abreast of new scenarios as they emerge—along with the extensibilities they need and any data implications. We’re also investing time and resources to catch and rectify any mislabeled data, ensuring we seal off any existing vulnerabilities within our AI ecosystem.

Not only does this customer zero engagement model support our AI governance work, but it also helps build trust among employees through transparency. That trust is a key component of the employee empowerment that drives adoption.

Realizing generative AI’s potential

As our teams navigate AI governance and drive adoption among employees, it’s important to keep in mind that these guardrails aren’t there to hinder progress. They’re in place to protect and ultimately inspire confidence in new tools.

“In its best form, governance is a way to educate and inform our organization to move forward as quickly as possible,” Hempey says. “We see safeguards as accelerators.”

We know our customers also want to empower their employees with generative AI. As a result, we’re discovering ways to leverage or extend these services in exciting new ways for the organizations using our products.

“As we’re on this journey, we’re learning alongside our industry peers,” Nasir says. “By working through these important questions and challenges, we’re positioned to empower progress for our customers in this space.”

Key Takeaways

Consider these tips as you think about governing the deployment of generative AI at your company:

  • Understand that IT organizations have inherently cautious habits.
  • Leverage what industry leaders like the Responsible AI Initiative are sharing.
  • Recognize that employees will adopt these tools on their own, so it’s best to prepare the way beforehand.
  • Consider your existing data hygiene and how it needs to extend to accommodate AI.
  • Make sure you have an enterprise plan for labeling and security, because AI tools will surface everything a user can access by default.
Try it out

Get started on your own next-generation AI revolution—try Microsoft 365 Copilot today.

Modernizing Microsoft’s internal Help Desk experience with ServiceNow
http://approjects.co.za/?big=insidetrack/blog/modernizing-the-support-experience-with-servicenow-and-microsoft/
Fri, 18 Oct 2024

Microsoft is transforming the experience of our internal IT helpdesk agents and, using ServiceNow IT Service Management (ITSM), we’re improving the experience our employees have when they request IT help.

We’ve transitioned the traditional and custom IT tools and features of the Microsoft service desk into ServiceNow ITSM. This has led to innovation in many areas of our IT help-desk management, including accessibility, incident management, IT workflows and processes, service level agreements (SLAs), use of AI and machine learning, virtual agents, automation, and knowledge management, as well as data visualization, monitoring, and reporting across the IT help-desk organization.

In short, our strategic partnership with ServiceNow is helping us improve the efficacy of our internal IT help-desk environment, and it benefits our mutual customers as well.

Working together to accelerate digital transformation

Our Microsoft Global Helpdesk team supports more than 170,000 employees and partners in more than 150 countries and regions. We deploy this new ITSM environment at enterprise scale, supporting more than 3,000 incoming user requests each day.

We collaborate with ServiceNow as a partner to accelerate our digital IT transformation and continually increase the effectiveness of our IT service management. Our Global IT Helpdesk recognizes potential improvements, provides feedback to ServiceNow, and tests new features. We receive accelerated responses to our ITSM solution requirements, while ServiceNow gets a valuable, large-scale feedback mechanism to continuously improve their platform.

[Explore how we’re streamlining vendor assessment with ServiceNow VRM at Microsoft. | Discover how we’re instrumenting ServiceNow with Microsoft Azure Monitor. | Unpack how we’re using Microsoft Teams and ServiceNow to enhance end-user support.]

Modernizing the internal support experience

In the past, when our internal support scale, business processes, or other factors demanded functionality that existing platforms and systems couldn’t support, our engineers would develop tools and applications to supply the required functionality. Many ITSM features at Microsoft were developed in this manner. With ServiceNow now providing the core ITSM functionality we need, we are working together to integrate our tools’ functionality into their platform, which provides a unified, scalable IT help-desk experience with enhanced productivity and an accelerated digital IT transformation.

ServiceNow enables Microsoft to integrate its digital environment with ServiceNow ITSM, and Microsoft uses out-of-the-box ServiceNow functionality whenever suitable. ServiceNow adds and improves capabilities, often based on Microsoft feedback and development, and Microsoft then uses those improved capabilities to replace internally developed tools and processes. This collaborative relationship on ITSM benefits both organizations and our mutual customers.

Microsoft’s innovative ITSM experience with ServiceNow.

Collaborating to create rapid innovation

In some cases, Microsoft-developed tools are the starting point for new ServiceNow functionality, such as the recent implementation of ServiceNow ITSM Predictive Intelligence.

We initially built an experimental machine learning-based solution in our environment that automatically routed a limited number of helpdesk incidents in ServiceNow by using machine learning and AI. This reduced the amount of manual triage that our support agents had to perform and helped us learn about incident routing with predictive intelligence and identify innovation opportunities.

We then took those learnings and shared them with ServiceNow to help them make improvements to the ServiceNow ITSM Predictive Intelligence out-of-the-box platform tool. By progressing from our experimental solution to ServiceNow ITSM Predictive Intelligence, we benefitted from the out-of-the-box flexibility and scalability we needed to drive adoption of predictive intelligence within our helpdesk services landscape. We’ll use our work with ServiceNow ITSM Predictive Intelligence throughout this case study to highlight the core steps in our journey toward an improved internal support experience.
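To give a flavor of what automated incident routing looks like, here’s a deliberately tiny, dependency-free sketch that scores incident text against per-group keyword profiles. The groups, keywords, and fallback behavior are invented for illustration; the actual experimental solution and ServiceNow ITSM Predictive Intelligence train real classifiers on historical incident data rather than using keyword lists:

```python
from collections import Counter

# Toy keyword profiles per assignment group (illustrative only).
GROUP_KEYWORDS = {
    "network": {"vpn", "wifi", "connectivity", "dns"},
    "identity": {"password", "mfa", "login", "account"},
    "devices": {"laptop", "battery", "docking", "monitor"},
}

def route(incident_text: str) -> str:
    """Pick the assignment group whose keywords best match the incident text."""
    tokens = [t.strip(".,") for t in incident_text.lower().split()]
    scores = Counter()
    for group, keywords in GROUP_KEYWORDS.items():
        scores[group] = sum(t in keywords for t in tokens)
    group, score = scores.most_common(1)[0]
    # No signal at all: fall back to manual triage by a support agent.
    return group if score > 0 else "triage"

print(route("Cannot login after MFA password reset"))  # identity
print(route("My ergonomic chair squeaks"))             # triage
```

Even this toy version shows the shape of the win: incidents with clear signal skip the manual triage queue, and only ambiguous ones still need a human router.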

Establishing practical goals for modernized support

Predictive intelligence is one example among dozens of ITSM modernization efforts that are ongoing between ServiceNow and Microsoft. Other examples include virtual-agent integration, sentiment analysis of user interaction, anomaly detection, troubleshooting workflows, playbooks, and integrated natural-language processing. Enhancing our helpdesk support agent experience by using ServiceNow ITSM involves three key areas of focus: automation, monitoring, and self-help.

Automation

We’re automating processes, including mundane and time-consuming tasks, such as triaging incidents. Automation gives time back to our helpdesk agents and helps them focus on tasks that are best suited to their skill sets. Feature implementation examples include orchestration, virtual agents, and machine learning.

We’re using ServiceNow Playbooks for step-by-step guidance to resolve service incidents. Playbooks allow our agents to follow guided workflows for common support problems. Many playbooks, such as the password-reset process, include automated steps that reduce the likelihood of human error and decrease mean time to resolution (MTTR).
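Conceptually, a playbook is an ordered list of steps in which some steps run automatically and the rest are left for the agent. This sketch models that idea with an invented password-reset playbook; the step names and data shape are assumptions for illustration, not ServiceNow’s actual Playbooks data model:

```python
# Hypothetical playbook: ordered steps, some flagged for automation.
password_reset_playbook = [
    {"step": "Verify requester identity", "automated": False},
    {"step": "Check account lockout status", "automated": True},
    {"step": "Issue temporary credential", "automated": True},
    {"step": "Confirm user can sign in", "automated": False},
]

def run_playbook(playbook):
    """Execute automated steps; return the manual steps left for the agent."""
    for step in playbook:
        if step["automated"]:
            print(f"[auto] {step['step']}")
    return [step["step"] for step in playbook if not step["automated"]]

remaining = run_playbook(password_reset_playbook)
print(remaining)  # ['Verify requester identity', 'Confirm user can sign in']
```

Automating the middle steps is where the MTTR reduction comes from: the error-prone mechanical work runs the same way every time, and the agent handles only the judgment calls.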

Monitoring

We use monitoring to derive better context and provide proactive responses to ServiceNow activity. Enhanced monitoring capabilities increase service-desk responsiveness and helpdesk agent productivity. Feature implementation examples include trigger-based automated remediation, improved knowledge cataloging, and trend identification.

Microsoft Endpoint Manager supplies mobile-device and application management for our end-user devices, and we’ve worked with ServiceNow to connect Endpoint Manager data and functionality into the ITSM environment. This data and functionality supplies device context, alerts, and performance data to ServiceNow, giving device-related details to support agents directly within a ServiceNow incident.
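The device-context integration amounts to joining device-management data onto the incident record so the agent sees it in one place. This sketch shows that join with invented inventory and incident shapes; the field names are assumptions, not the actual Endpoint Manager or ServiceNow schemas:

```python
# Hypothetical device inventory, standing in for Endpoint Manager data.
device_inventory = {
    "LAPTOP-4821": {"os": "Windows 11", "compliance": "compliant", "last_sync": "2024-10-01"},
}

incident = {"id": "INC0012345", "user": "alice", "device_id": "LAPTOP-4821"}

def enrich_incident(incident, inventory):
    """Attach the requester's device details to the incident record."""
    device = inventory.get(incident["device_id"], {})
    return {**incident, "device_context": device}

enriched = enrich_incident(incident, device_inventory)
print(enriched["device_context"]["compliance"])  # compliant
```

With the join done at ingestion time, the agent never has to pivot into a separate device-management console to answer "is this machine healthy?"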

Self-help functionality

Self-service capabilities help the people who request support incidents help themselves by supplying simplified access to resources that guide them toward remediation. Self-help frees IT helpdesk agents from performing tasks that end users can do on their own and lowers the total cost of ownership, as support-team resources can focus on more impactful initiatives. Feature implementation examples include natural language understanding, context-based interaction, bot-to-bot interactions, and incident deflection.

For example, the ServiceNow Virtual Agent integrates with Microsoft Teams for bot-to-bot interactions. Bot integration and bot-to-bot handoff enable us to continue using the considerable number of bots already in use across the organization, presenting self-help options for our users that best meet their needs. We have also collaborated with ServiceNow to create integration with knowledge and AI capabilities from Microsoft 365 support. Microsoft 365 service-health information, recommended solutions for Microsoft 365 issues, and incident-related data are available in ServiceNow to both end users and agents.

Examining the modern support experience in context

We take a holistic approach to unifying our internal service-desk platform under ServiceNow. The functionality and health of our Global Helpdesk organization drives the experience for our support agents and the people they assist. To identify opportunities for improvement, we examined all aspects of our support environment, making observations about tool usage, the overall experience of support agents, and potential gaps in the toolset our support agents use. When thinking about new capabilities, such as AI and automation, we needed to understand how our people work. Why and how we perform certain tasks or processes can lose relevance over time, and a deviation from the original way of doing something can lead to inefficiencies that we must regularly evaluate and address. We placed these observations into the following categories:

  • Comprehensive best practices. We’re encouraging our Global Helpdesk team to be a strategic partner in business, design, and transition of support at Microsoft, rather than simply focusing on tactical ticketing and related metrics. Our internal support experience improvements in ServiceNow ITSM go beyond ticketing processes and require a holistic view of all aspects of the support-agent environment. Additionally, implementing new technologies is only one part of the bigger solution in which it’s critical to verify and keep people accountable for adhering to best practices. We’re transforming our Global Helpdesk operations to provide strategic value and achieve business goals alongside the more tactical elements of service-desk operation, such as incident management and resolution.
  • Interaction management. We examine how our helpdesk agents and the people they support use ServiceNow ITSM and its associated functionality, which drives interface improvements and helps us identify new modalities to connect our support agents to the issues our users are experiencing. Our goals include increasing virtual-agent usage and reducing the use of less efficient interaction modalities, such as fielding IT support requests over the phone.
  • Incident management. Incident management is the core of ServiceNow ITSM functionality and forms the basis for our largest set of considerations and observations. We examine how we create and manage support incidents, triage and distribute them, and then move them toward the final goal of resolution. In all of this, we assess how Global Helpdesk performs incident management and where it can improve. It’s important to understand the use of data to aid incident resolution, and how to better automate and streamline incident processes and consolidate other elements of service-desk functionality into the incident-management workflow. There are many incident-management factors that we evaluate including identifying incident origin, integrating virtual-agent interactions, increasing contextual data in incidents, automating incident routing, deflection and resolution, and improving incident search functionality.
  • Knowledge management. We’re improving how our helpdesk agents and users access knowledge for IT support. Consolidating external knowledge sources into ServiceNow centralizes our knowledge management effort and makes the knowledge they contain available across other service-desk processes, such as incident management. Among the factors we’re focusing on are standardizing knowledge article content, supporting proactive knowledge creation, improving knowledge self-service capabilities, and including related knowledge content for incidents.
  • Governance and platform management. The overall management of the ServiceNow ITSM platform and how it interacts with our environment and integrates into outside data sources and tools helps Microsoft use ServiceNow data to improve other business processes. We’re focusing on improving formal business processes and integrating with other processes and technology while aligning with Microsoft’s broader business strategies and standards.

Creating value within the helpdesk support experience

Microsoft and ServiceNow are intentionally and thoughtfully continuing to improve the ServiceNow environment, both from the organizational perspective here at Microsoft and from the product perspective at ServiceNow. For each feature and business need that we evaluate, we examine the feature from all applicable perspectives. Our general feature evaluation and migration process includes:

  1. Evaluating business needs for applications and features. For each identified feature, we assess the associated business need. This helps us prioritize feature implementation and understand what we could accomplish based on available resources. ServiceNow Predictive Intelligence, our example in this case study, reduced mean time to resolution (MTTR) for incidents and freed up support-agent resources. These factors both positively influenced support agent efficiency and satisfaction. We’d already been using machine learning-based functionality, so the business need was clear.
  2. Determining product roadmaps, organizational goals, and support requirements. In this step, we examine a feature’s practical implementation. Understanding how we need to address a feature or feature gap often depends on product roadmaps and feature development in flight within ServiceNow. Early access to ServiceNow roadmaps and the ServiceNow Design Partnership Program helps guide our decision making as we determine the evolution of features and how they align with our future vision for digital transformation. If ServiceNow is already developing a specific feature in the ITSM space, we don’t worry about integrating or recommending our internally developed tools or functionality. However, we often contribute to the improvement of ServiceNow features based on our internally developed tools, as we did with ServiceNow Predictive Intelligence.
    It can be complex to understand the state of ServiceNow with respect to a specific feature and its requirements. We must examine where we’ve customized ServiceNow ITSM to accommodate an internally developed solution and how we can roll back those changes when we retire the internally developed solution in favor of out-of-the-box functionality.
  3. Identifying risks, benefits, and effects of migration. Establishing required resource allocation and determining necessary skill sets for the migration process is critical to understanding how each feature migration might affect our service-desk environment and overall ServiceNow functionality. Specific factors we consider include licensing requirements and quality control checks, both of which greatly influence the speed and order of feature migration. We also assess the effects of retiring legacy/custom tools on the Global Helpdesk and other Microsoft teams. Many tools we use were widely adopted and instrumental to daily operations, so we must consider training and transition processes on a feature-by-feature basis. In some cases, a feature or tool’s addition or removal could cause a shift in business processes, so it’s critical that we understand the potential impact. We do this by examining feature migration in the context of organizational goals, standards, and best practices.
  4. Obtaining organizational support. One of the most crucial steps is to garner organizational buy-in. Although Microsoft and ServiceNow are strategic partners, it’s critical to get support from key stakeholders here at Microsoft, including our Global Helpdesk and Microsoft Digital process owners. Communication is critical. When we involve all stakeholders, we ensure that we account for all business and technical considerations.
    Rather than getting approval at a high level for the entire ServiceNow support-improvement project, we instead obtain approval for small pilots that focus on fast delivery and high value. This demonstrates the potential for a feature’s broader adoption at the Global Helpdesk. In our predictive-intelligence example, we started by engaging the Global Helpdesk team that was using the experimental machine learning-based incident-routing tool. The existing experimental tool was only routing some incidents, so we proposed a pilot to route the remaining tickets using ServiceNow ITSM Predictive Intelligence. We worked very closely with our internal support team to ensure that the solution met their needs. The pilot demonstrated the tool’s effectiveness in daily operations and proved the tool’s capabilities in production use. This built confidence and trust in the tool and helped drive broader adoption across the organization.
  5. Establishing plans for transition, deallocation, and retirement of legacy tools and systems. We had critical decisions to make about retirement and deallocation of existing tools. Many feature transitions involved identifying how we would move or transform data. Addressing data access and security is a common challenge.
    Additionally, with Predictive Intelligence, our team needs real incidents to train the Predictive Intelligence algorithms. This involves moving production data into a development environment, which has security implications. The feature team must proactively engage our Microsoft security team to provide appropriate information. ServiceNow supplies detailed platform-security documentation, which helps us obtain security-related approval. Also, transition often requires retraining. We must arrange training for users of legacy systems so they can use the new features in ServiceNow and understand how the transition might affect their daily activities and overall service-desk operations.
  6. Engaging in feature implementation. We implement features following the specific plans, processes, and best practices we’ve established. Implementation scope and effort vary depending on the feature; in the case of Predictive Intelligence, the Microsoft development team began by creating a pilot. This enabled the team to confirm that ServiceNow ITSM Predictive Intelligence could achieve the required level of routing accuracy. It also provided a proof of concept that enabled us to quickly find gaps in functionality.
    Starting with a prototype means we then have a functional example that’s up and running quickly so we get early feedback on the out-of-the-box capabilities. We were able to start fast, iterate, and deliver a better solution more quickly. However, we also had to examine and account for scalability within the ServiceNow platform to ensure that the solution would work well when widely adopted.
    Predictive Intelligence went live with a small number of incident-routing destinations, which helped build the confidence of the service-desk team. We then expanded the number of assignment groups as we received positive feedback. The rollout required minimal organizational change management because Predictive Intelligence was automating an existing process and the service-desk team was already using an experimental AI tool for automated routing.
  7. Measuring progress and reviewing results. We measure all aspects of the feature-implementation progress. Identifying and enabling key metrics and reports helps build confidence and trust in each feature’s effectiveness. As we iterate on changes and work through a pilot process for any given feature, we keep stakeholders involved and use our results to contribute to the broader digital transformation. Measurement is also critical for adoption and is an effective way to illustrate benefits and bring other teams onboard.

Integrating ServiceNow ITSM and Microsoft products

In addition to feature enhancement and growth of ServiceNow functionality, Microsoft and ServiceNow are working together to integrate our products with ServiceNow. This enables us to capitalize on their capabilities and make it easier for our customers at Microsoft to integrate ServiceNow into their environment. For example, device-management capability and reporting data from Microsoft Intune, Microsoft’s mobile device management platform, can integrate directly with ServiceNow. This integration improves contextual data within ServiceNow and extends ServiceNow’s capabilities by using Intune and Microsoft Endpoint Manager functionality on managed devices.

Key Takeaways

Our Microsoft Global Helpdesk team has seen significant benefits from the continued ServiceNow ITSM feature implementations, and we're still working with ServiceNow on an extensive list of features that we want to implement. The most notable benefits include:

  • Increased business value. We've been able to retire custom solutions and the infrastructure that supports them, reducing total cost of ownership and management effort for legacy solutions. Consolidating our service-desk functionality in ServiceNow ITSM makes licensing and maintenance simpler and more cost-effective.
  • Reduced service-desk management effort. The various automation features we’ve implemented have reduced the effort our IT helpdesk agents exert, particularly with respect to mundane or repetitive tasks. AI and machine-learning capabilities have improved built-in decision making, reduced the potential for human error, and given time back to our helpdesk agents so they can focus on the work that demands their expertise. For example, ServiceNow ITSM Predictive Intelligence is routing incidents with 80 percent accuracy, saving considerable time and effort.
  • Improved helpdesk agent experience. Unifying our tools and features within ServiceNow ITSM enabled us to create a simpler, easier-to-navigate toolset for our support agents. They can move between tasks and tools more effectively, which increases overall support responsiveness and makes our service desk more efficient.
  • Reduced mean time to resolution. We're experiencing a continual reduction in incident resolution time as we integrate features and modernize the agent support experience. For example, ServiceNow ITSM Predictive Intelligence reduced MTTR by more than 10 percent on average in our pilot project. Based on these numbers, we're deploying Predictive Intelligence at a broader scale for Global Helpdesk.
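
The MTTR comparison above reduces to simple arithmetic; the resolution hours below are illustrative, not real Global Helpdesk data:

```python
# Sketch: comparing mean time to resolution (MTTR) before and after a
# pilot. The hours are illustrative, not real Global Helpdesk data.

def mttr(resolution_hours):
    """Mean time to resolution across a set of incidents, in hours."""
    return sum(resolution_hours) / len(resolution_hours)

baseline = [10.0, 8.0, 12.0, 10.0]   # pre-pilot resolution times (hours)
piloted  = [9.0, 7.0, 10.5, 9.5]     # with automated incident routing

reduction = (mttr(baseline) - mttr(piloted)) / mttr(baseline)
print(f"MTTR reduced by {reduction:.0%}")  # → 10%
```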

While we’ve successfully migrated many internally developed capabilities into out-of-the-box ServiceNow ITSM features and tools, it is an ongoing process and we’re continuing to learn lessons about the migration process and successfully transforming the IT help-desk environment for greater efficiency and a more productive IT-agent experience. Some key lessons we’ve learned and continue to explore include:

  • Start small and expand scope as a feature matures. We typically start feature implementation small with a single team or use-case scenario. We use pilot projects to validate a solution, prove feature completeness, and gather proof of concept to gain support from stakeholders. Each pilot project contributes to a broader improvement to ServiceNow functionality.
  • Get buy-in from stakeholders early. Establishing organizational support is critical to the overall success of every feature implementation. We work hard to understand who our stakeholders are within Microsoft and make them aware of how a feature implementation might affect them—and ultimately improve our organization.
  • Test scalability and establish monitoring early. Starting small results in many quick wins and rapid feature implementation. However, we must ensure that any capabilities we implement can scale to meet enterprise-level requirements, both in functionality and usability. Tracking metrics and maintaining accurate reporting using ServiceNow’s reporting capabilities provides concrete assessment of feature effectiveness as it increases in usage and scale.
  • Don’t accept feature requirements at face value. Specific features are easy to quantify and qualify, but we always consider the bigger picture. We ask what business questions or challenges the requirements are addressing and then ensure our perspective always includes holistic business goals. We don’t simply want a granular implementation of a specific feature.

We’re working on a thorough list of feature integrations that include extensive use of AI and machine learning. This will simplify and strengthen predictive and automation capabilities in ServiceNow. We’re also investigating deeper integration between ServiceNow ITSM and Microsoft products including Microsoft 365, Microsoft Dynamics 365 and Azure.

We're excited that our joint efforts have brought rapid iteration of feature capability to the ServiceNow platform, and about the impact this brings to the ITSM industry.

Related links

The post Modernizing Microsoft’s internal Help Desk experience with ServiceNow appeared first on Inside Track Blog.

Harnessing first-party patching technology to drive innovation at Microsoft http://approjects.co.za/?big=insidetrack/blog/harnessing-first-party-patching-technology-to-drive-innovation-at-microsoft/ Mon, 16 Sep 2024 15:00:45 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=11209 We live in a world where network security is a foundational concern for large enterprises like ours that are trusted with sensitive customer data. This creates an environment where we all need to ensure that we have high patching compliance across our massive array of devices. This complexity requires that we continuously improve our patching […]

The post Harnessing first-party patching technology to drive innovation at Microsoft appeared first on Inside Track Blog.

Microsoft Digital stories

We live in a world where network security is a foundational concern for large enterprises like ours that are trusted with sensitive customer data. This creates an environment where we all need to ensure that we have high patching compliance across our massive array of devices. This complexity requires that we continuously improve our patching tools and solutions.

Layered on top of that, our need for device security exists within a complex matrix of software, hardware, and user interfaces. If our employees are running out-of-date software, they’re leaving their device and our network unsecured and vulnerable.

Every leader understands the extreme importance of keeping their data secure. No enterprise wants to be the next company that gets exposed by one of these hacks that has happened in the past and to lose sensitive business or customer data.

—Biswa Jaysingh, principal product manager, Microsoft Digital Employee Experience

Ruana, Jaysingh, and Damkewala pose for portraits in a montage of three images.
Christine Ruana (left), Biswa Jaysingh (center), and Jamshed Damkewala are among those helping Microsoft transform how it does first-party patching. Ruana is principal program manager for Microsoft Visual Studio responsible for enterprise deployments and updates of Visual Studio, Jaysingh is a principal product manager on our Microsoft Digital Employee Experience team, and Damkewala is a principal PM manager on the Platforms and Languages team responsible for .NET.

This is especially true when developers use powerful first-party tools like Microsoft Visual Studio and developer platforms like .NET to build new software. The stakes are even higher for .NET because it isn't deployed only to developer machines; it's also installed on the computers where the developed application will run.

Here at Microsoft Digital Employee Experience, the organization that powers, protects, and transforms the company, we are committed to holistically improving patching compliance rates across the company. To ensure we are improving security at every level of Microsoft’s infrastructure, from software and devices to the networks themselves, we are utilizing new technology and new approaches that we develop internally within our organization and within our product group partners.

“Every leader understands the extreme importance of keeping their data secure,” says Biswa Jaysingh, a principal product manager with Microsoft Digital Employee Experience. “No enterprise wants to be the next company that gets exposed by one of these hacks that has happened in the past and to lose sensitive business or customer data.”

Recent innovations in first-party patching technology at Microsoft, including in Windows Update for Business, Microsoft Endpoint Manager, and Microsoft Defender for Endpoint, are allowing us to unlock unprecedented levels of security across our network while at the same time reducing costs and speeding the timeline of deployment. From consolidating multiple deployments to reducing the impact of reboots on users, our changes are producing efficiencies across the business.

Within the matrix of network security at Microsoft, there are several critical arenas for security admins to monitor, patch, and secure. Malicious actors are looking at the full tech stack for vulnerabilities, which means our teams must monitor, patch, and secure devices at every level from the operating system and first-party software to hardware and third-party software.

[Discover boosting Windows internally at Microsoft with a transformed approach to patching.]

Reacting to the growing threat to first-party software

In the modern cloud-connected world there is more surface area that we need our IT professionals to protect. With more and more devices, from Internet of Things devices to peripherals, having internet access, there is far greater potential for bad actors to break in. It's more important than ever to stay secure, which means update compliance must be as close to 100 percent as possible across all levels of a device.

“The last thing we want is for Microsoft to ship a fix for a vulnerability, but an enterprise isn’t able to adopt the update. That would leave them insecure,” says Christine Ruana, principal program manager for Microsoft Visual Studio who is responsible for enterprise deployments and updates of Visual Studio.

This passion for effectively securing networks led Microsoft leaders like Ruana to ensure they’re doing everything possible to ease the burden of patching on our teams here at Microsoft and for our external customers. “Visual Studio’s recent Administrator update solution makes it much easier for enterprises to deploy updates through Microsoft Endpoint Manager,” Ruana says.

At the start of the .NET journey we were seeing unacceptable compliance rates as developers were using the software in ways that we hadn’t anticipated. This increased the complexity for maintaining patching compliance. We had to create paths for updating both current builds of .NET through Visual Studio and for keeping older builds compliant through Microsoft Update. This has improved compliance rates considerably.

—Jamshed Damkewala, principal PM manager, Platforms and Languages team

We’re using Microsoft Defender for Endpoints to manage the health of our devices, which is helping us improve the security of our network while also improving the user experience for our employees and our admins. Every efficiency gained along the way makes it more likely for compliance rates to grow. Teams are working around the clock to identify and patch vulnerabilities, but this work is only as effective as the compliance rate is strong.

A better experience for admins and users alike

We in the Microsoft Digital Employee Experience organization began our journey to transform the way we do patching by making it easier for our IT admins to deploy patches across our network.

Until recently, the first-party patching regime at Microsoft required a slew of software solutions to be manually managed, including important software applications like Visual Studio and .NET. But in November 2022, we were able to migrate numerous critical patch deployments to Windows Update for Business, dramatically increasing the timeliness and accuracy of device patching.

“At the start of the .NET journey we were seeing unacceptable compliance rates as developers were using the software in ways that we hadn’t anticipated,” says Jamshed Damkewala, principal PM manager on the Platforms and Languages team responsible for .NET. “This increased the complexity for maintaining patching compliance. We had to create paths for updating both current builds of .NET through Visual Studio and for keeping older builds compliant through Microsoft Update. This has improved compliance rates considerably.”

We gain significant efficiencies as we eliminate manual deployments through automation and streamline the rollout of patches through Windows Update and Windows Update for Business. With these universal sources for patches, we simultaneously reduce time for testing while reducing errors in the deployments.
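
One simple way to quantify compliance gains like these is to track the patched share of devices per deployment ring. The sketch below is illustrative; the device records, ring names, and build numbers are made up:

```python
# Sketch: summarizing patch compliance per deployment ring from device
# telemetry. The fleet data and required build number are illustrative.
from collections import defaultdict

def compliance_by_ring(devices, required_build):
    """Fraction of devices in each ring at or above the required build."""
    totals, compliant = defaultdict(int), defaultdict(int)
    for d in devices:
        totals[d["ring"]] += 1
        if d["build"] >= required_build:
            compliant[d["ring"]] += 1
    return {ring: compliant[ring] / totals[ring] for ring in totals}

fleet = [
    {"ring": "fast", "build": 22631},
    {"ring": "fast", "build": 22631},
    {"ring": "broad", "build": 22621},
    {"ring": "broad", "build": 22631},
]
print(compliance_by_ring(fleet, required_build=22631))
```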

With more accurate updates meeting user devices more quickly and hitting all builds of first-party software that require patching, our networks are more secure than ever. The ease of patches deploying on devices also reduces the impact on users, so they are more likely to remain compliant while experiencing minimal disruption.

These innovations are not custom built for Microsoft. We are effectively leveraging technology that we already had to make it more efficient and effective for teams to patch their software.

—Harshitha Digumarthi, senior product manager, Microsoft Digital Employee Experience

Furthermore, the technology within Microsoft Defender for Endpoint allows for thorough device scanning that provides effective telemetry for admins to act on, giving them better knowledge for engineering future patches and policies for Windows Update for Business, which further grows compliance rates. We use it to scan for and report vulnerabilities, which empowers our admins to respond faster. Microsoft Endpoint Manager also allows our admins to better manage Windows Update for Business policies.

Providing the tools for teams to succeed

Internally here at Microsoft, our updated technology allows us to monitor our networks more efficiently, providing detailed telemetry about device health that we’ve never had before. This visibility allows us to develop new protocols for our networks, including complicated cases of end-of-life devices and end-of-service software.

But the true efficiency unlock comes from how these systems were designed, constructed, and automated.

“These innovations are not custom built for Microsoft,” says Harshitha Digumarthi, a senior product manager responsible for improving the patching experience at Microsoft Digital Employee Experience. “We are effectively leveraging technology that we already had to make it more efficient and effective for teams to patch their software.”

This approach reduces cost, increases the speed of development, and fundamentally improves the efficiencies of teams deploying mission-critical patches for their software. Potential errors caused by manual deployment are eliminated and the single update source on a single day per month improves the user experience considerably. The result is a more secure network through increased device compliance.
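
The single update day referenced above is Microsoft's monthly security release, which lands on the second Tuesday of each month. A quick way to compute it:

```python
# Sketch: computing the monthly update day. Microsoft's monthly security
# release ("Patch Tuesday") falls on the second Tuesday of each month.
import datetime

def second_tuesday(year, month):
    first = datetime.date(year, month, 1)
    # weekday(): Monday=0 ... Tuesday=1
    offset = (1 - first.weekday()) % 7   # days until the first Tuesday
    return first + datetime.timedelta(days=offset + 7)

print(second_tuesday(2024, 9))  # → 2024-09-10
```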

These benefits are compounded when it comes to first-party software like Visual Studio and .NET. We've seen a rise in patching compliance for internal customers developing new solutions with these products, all attributable to improvements in Visual Studio and .NET. As a result, security dividends can grow exponentially throughout the company and the broader ecosystem. Our networks, and yours, are more secure thanks to these developments.

Key Takeaways

  • Ensure your software applications are kept up to date to remain secure. Follow this guidance for Visual Studio.
  • By using a common deployment solution in Windows Update for Business and Microsoft Endpoint Manager, you gain efficiency and mitigate potential errors from manual updating.
  • A single update source on a single day per month dramatically improves the user experience.
  • Innovations in device scanning provide new telemetry, which leads to new solutions for rare-but-important use cases like end-of-life devices and end-of-service software.

Related links

The post Harnessing first-party patching technology to drive innovation at Microsoft appeared first on Inside Track Blog.

Empowering employees after the call: Enabling and securing Microsoft Teams meeting data retention at Microsoft http://approjects.co.za/?big=insidetrack/blog/empowering-employees-after-the-call-enabling-and-securing-microsoft-teams-meeting-data-retention-at-microsoft/ Sat, 07 Sep 2024 20:06:58 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=12724 Microsoft Teams meetings help our globally distributed and digitally connected employees create meaningful hybrid work experiences. When those meetings are recorded and transcribed or their data becomes available to AI-powered digital assistants, their impact increases. Although these features have proven to be incredibly useful to our employees and our wider organization, there are also concerns […]

The post Empowering employees after the call: Enabling and securing Microsoft Teams meeting data retention at Microsoft appeared first on Inside Track Blog.

Microsoft Teams meetings help our globally distributed and digitally connected employees create meaningful hybrid work experiences. When those meetings are recorded and transcribed or their data becomes available to AI-powered digital assistants, their impact increases.

Although these features have proven to be incredibly useful to our employees and our wider organization, there are also concerns about how retaining Microsoft Teams meeting data might affect our security posture, records retention policy, and privacy. Just like any other company, we at Microsoft have to balance these varying aspects.

At Microsoft Digital, the company’s IT organization, we’re leading cross-disciplinary conversations to ensure we get it right.

{Learn how Microsoft creates self-service sensitivity labels in Microsoft 365. Discover getting the most out of generative AI at Microsoft with good governance.}

Policy considerations of Microsoft Teams meeting data retention

Our Microsoft Teams meeting data comes in the form of three main artifacts: recordings, transcriptions, and data that AI-powered Microsoft 365 Copilot and recap services can use to increase our general business intelligence.

Microsoft Teams data retention coverage

Meeting recording

  • Cloud video recording
  • Audio
  • Screen-sharing activity

Transcription

  • Transcript
  • Captions

Intelligent recap and Copilot

  • Data generated from recaps, Copilot queries and responses

Our Microsoft Teams meeting data retention efforts focus on three key artifacts: recordings, transcriptions, and the data used by AI-powered tools.

We find meeting recordings and transcripts are helpful for many reasons, including helping us overcome accessibility issues related to fast-paced, real-time meetings or language differences—this is a powerful way to level the playing field for our employees. Our ability to share recordings and transcripts also supports greater knowledge transfer and asynchronous work, which is especially helpful for teams that operate across time zones.

Microsoft Teams Premium enables AI-generated notes, task lists, personalized timeline markers for video recaps, and auto-generated chapters for recordings. Within a meeting, the Microsoft 365 Copilot sidebar experience helps our late-joining employees catch up on what they’ve missed, provides intelligent prompts to review unresolved questions, summarizes key themes, and creates notes or action items.

Heade and Johnson pose for pictures assembled into a collage.
Rachael Heade (left) and David Johnson are part of a collaborative team thinking through how we govern Microsoft Teams data and artifacts.

The helpfulness of these tools is clear, but data-retention obligations introduce challenges that organizations like ours need to consider. First, producing and retaining this kind of data can be complex if it isn’t properly governed. Second, data-rich artifacts like video recordings occupy a lot of space, eating up cloud storage budgets.

“We tend to think of the recordings we make during meetings as an individual’s data, but they actually represent the company’s data,” says Rachael Heade, director of records compliance for Microsoft Corporate, External, and Legal Affairs (CELA). “We want to empower individuals, but we have to remember that retention and volume impacts of these artifacts on the company can be substantial.”

In light of these potential impacts, some organizations simply opt out of enabling Microsoft Teams meeting recordings.

Asking the right questions to assemble the proper guardrails

Our teams in Microsoft Digital and CELA, our legal division, are working to balance the benefits of Microsoft Teams meeting data retention with our compliance obligations to provide empowering experiences for our employees while keeping the company safe.

“Organizations are always concerned about centralized control over the retention and deletion of data artifacts,” Heade says. “You have excited employees who want to use this technology, so how do you set them up so they can use it confidently?”

Like many policy conversations, getting this right starts with our governance team in Microsoft Digital and our internal partners asking the employees from across the company who look after data governance the right questions:

  • When should a meeting be recorded and when should it not?
  • What kind of data gets stored?
  • Who can initiate recording, and who can access it after the meeting?
  • How long should we retain meeting data?
  • Where does the data live while it’s retained?
  • How can we control data capture and retention?
  • What does this mean for eDiscovery management?

These questions help us think about the proper guardrails. Our IT perspective is only one part of the puzzle, so we’re actively consulting with CELA, corporate security, privacy, the Microsoft Teams product group, the company’s data custodians, and our business customers throughout this process.

“As an organization, this is about thinking through your tenant position and getting it to a reasonable state,” says David Johnson, tenant and compliance architect with Microsoft Digital.

Our conversations have brought up distinctions that any organization should consider as they build policy around Microsoft Teams meeting retention:

  • The length of time a meeting’s data remains fresh, relevant, or useful
  • The difference in retention value between operational and informational meetings, for example, weekly touchpoints versus project kick-offs or education sessions
  • The different risks inherent in recordings compared to transcriptions
  • Establishing default policies while allowing variability and flexibility when employees need it
  • Long-term retention for functional artifacts like demos and trainings

From sharing perspectives to crafting policy

Our policies around Microsoft Teams meeting data retention continue to evolve, but we’ve already implemented some highly effective practices, policies, and controls. Every organization’s situation is unique, so it’s important that you speak to your legal professionals to craft your own policies. But our work should give you an idea of what’s possible through out-of-the-box features within Microsoft Teams.

The policies we’ve put in place represent a mix of technical defaults, meeting options, and empowering employees to make informed decisions about usefulness and privacy. They also build on the foundations of our work with sensitivity labeling, which is helping secure data across our tenant.

  • Transcript attribution opt-out gives employees agency and reassures them that we honor their privacy.
  • User notices alert employees when a recording or transcription starts, allowing them the opportunity to opt out, request that the meeting go unrecorded, or leave the call.
  • Nuanced business guidance from CELA through an internal Recording Smart Use Statement document helps employees understand the implications of recording, when not to record, and when not to speak in a recorded call.
  • Recommending that employees “tell and confirm” before recording empowers and supports our people to speak up when they don’t believe the meeting should be recorded or don’t feel comfortable.
  • We didn’t wait for Compliance Recording: Although this choice would require that a user consent to recording before unmuting themselves, we decided that opt-outs and user notices provided sufficient agency to our employees.
  • Meeting labels that limit who can record mean only the organizer or co-organizer can initiate recordings for meetings labeled “highly confidential.”
  • Only meeting organizers can download meeting recordings to keep the meeting data contained and restrict sharing.
  • The default OneDrive and SharePoint meeting expiration is set to 90 days to ensure we minimize the risk of data leakage or cloud storage bloat.
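
The 90-day default above amounts to straightforward date arithmetic; the dates in this sketch are illustrative, and the real policy is enforced by OneDrive and SharePoint rather than by custom code:

```python
# Sketch: applying a default 90-day expiration to a meeting recording.
# Dates are illustrative; in production this policy is enforced by
# OneDrive and SharePoint, not custom code.
import datetime

RETENTION_DAYS = 90

def expiration_date(recorded_on, retention_days=RETENTION_DAYS):
    return recorded_on + datetime.timedelta(days=retention_days)

def is_expired(recorded_on, today):
    return today >= expiration_date(recorded_on)

recorded = datetime.date(2024, 6, 1)
print(expiration_date(recorded))                         # → 2024-08-30
print(is_expired(recorded, datetime.date(2024, 9, 1)))   # → True
```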

These policies reflect three core tenets we use to inform our governance efforts: empower, trust, and verify.

“The bottom line is that we rely on our employees to be good stewards of the company,” Johnson says. “But because we’ve got a good governance model in place for Teams and good overall hygiene for our tenant, we’re well set up to deal with the evolution of the product and make these decisions.”

We can’t recommend that any organization follow our blueprint entirely, but asking some of the same questions as we have can help build a foundation. To start, read our blog post on how we create self-service sensitivity labels in Microsoft 365 and explore this Microsoft Learn guide on meeting retention policies in Microsoft Teams.

With a firm grasp of the technology and close collaboration with the right stakeholders, you can guide your own policy decisions and unlock the right set of features for your team.

Key Takeaways

Here are some tips for approaching meeting data retention at your company:

  • Face the fear and get comfortable with being uncomfortable: First, establish your concerns, then work toward optimizing your policy compliance.
  • Consider how to support your company’s compliance obligations while allowing your employee population to take advantage of the product, and let those things live together side-by-side.
  • Connecting with your legal team is essential because they’re the experts on assessing complex compliance questions.
  • Investigate meeting labels and what policies you might want to apply to meetings based on sensitivity and other attributes.

The post Empowering employees after the call: Enabling and securing Microsoft Teams meeting data retention at Microsoft appeared first on Inside Track Blog.

Verifying device health at Microsoft with Zero Trust http://approjects.co.za/?big=insidetrack/blog/verifying-device-health-at-microsoft-with-zero-trust/ Fri, 06 Sep 2024 13:51:32 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=9002 Here at Microsoft, we’re using our Zero Trust security model to help us transform the way we verify device health across all devices that access company resources. Zero Trust supplies an integrated security philosophy and end-to-end strategy that informs how our company protects its customers, data, employees, and business in an increasingly complex and dynamic […]

The post Verifying device health at Microsoft with Zero Trust appeared first on Inside Track Blog.

Microsoft Digital technical stories

Here at Microsoft, we're using our Zero Trust security model to help us transform the way we verify device health across all devices that access company resources. Zero Trust supplies an integrated security philosophy and end-to-end strategy that informs how our company protects its customers, data, employees, and business in an increasingly complex and dynamic digital world.

Verified device health is a core pillar of our Microsoft Digital Zero Trust security model. Because unmanaged devices are an easy entry point for bad actors, ensuring that only healthy devices can access corporate applications and data is vital for enterprise security. As a fundamental part of our Zero Trust implementation, we require all user devices accessing corporate resources to be enrolled in device-management systems.

Verified devices support our broader framework for Zero Trust, alongside the other pillars of verified identity, verified access, and verified services.

Diagram showing the four pillars of Microsoft’s Zero Trust model: verify identity, verify device, verify access, and verify services.
The four pillars of Microsoft’s Zero Trust model.

[Explore verifying identity in a Zero Trust model. | Unpack implementing a Zero Trust security model at Microsoft. | Discover enabling remote work: Our remote infrastructure design and Zero Trust. | Watch our Enabling remote work infrastructure design using Zero Trust video.]

Verifying the device landscape at Microsoft

The device landscape at Microsoft is characterized by a wide variety of devices. We have more than 220,000 employees and additional vendors and partners, most of whom use multiple devices to connect to our corporate network. We have more than 650,000 unique devices enrolled in our device-management platforms, including devices running Windows, iOS, Android, and macOS.

Our employees need to work from anywhere, including customer sites, cafes, and home offices. The transient nature of employee mobility poses challenges to data safety. To combat this, we are implementing device-management functionality to enable the mobile-employee experience—confirming identity and access while ensuring that the devices that access our corporate resources are in a verified healthy state according to the policies that govern safe access to Microsoft data.

Enforcing client device health

Device management is mandatory for any device accessing our corporate data. The Microsoft Endpoint Manager platform enables us to enroll devices, bring them to a managed state, monitor the devices’ health, and enforce compliance against a set of health policies before granting access to any corporate resources. Our device health policies verify all significant aspects of device state, including encryption, antimalware, minimum OS version, hardware configuration, and more. Microsoft Endpoint Manager also supports internet-based device enrollment, which is a requirement for the internet-first network focus in the Zero Trust model.

We’re using Microsoft Endpoint Manager to enforce health compliance across the various health signals and across multiple client device operating systems. Validating client device health is not a onetime process. Our policy-verification processes confirm device health each time a device tries to access corporate resources, much in the same way that we confirm the other pillars, including identity, access, and services. We’re using modern endpoint protection configuration on every managed device, including preboot and postboot protection and cross-platform coverage. Our modern management environment includes several critical components:

  • Microsoft Azure Active Directory (Azure AD) for core identity and access functionality in Microsoft Intune and the other cloud-based components of our modern management model, including Microsoft Office 365, Microsoft Dynamics 365, and many other Microsoft cloud offerings.
  • Microsoft Intune for policy-based configuration management, application control, and conditional-access management.
  • Clearly defined mobile device management (MDM) policy. Policy-based configuration is the primary method for ensuring that devices have the appropriate settings to help keep the enterprise secure and enable productivity-enhancement features.
  • Windows Update for Business is configured as the default for operating system and application updates for our modern-managed devices.
  • Microsoft Defender for Endpoint (MDE) is configured to protect our devices, send compliance data to Azure AD Conditional Access, and supply event data to our security teams.
  • Dynamic device and user targeting for MDM enables us to supply a more flexible and resilient environment for the application of MDM policies. It enables us to flexibly apply policies to devices as they move into different policy scopes.

Providing secure access methods for unmanaged devices

While our primary goal is to have users connect to company resources by using managed devices, we also realize that not every user’s circumstances allow for using a completely managed device. We’re using cloud-based desktop virtualization to provide virtual machine–based access to corporate data through a remote connection experience that enables our employees to connect to the data that they need from anywhere, using any device. Desktop virtualization enables us to supply a preconfigured, compliant operating system and application environment in a pre-deployed virtual machine that can be provisioned on demand.

Additionally, we’ve created a browser-based experience allowing access, with limited functionality, to some Microsoft 365 applications. For example, an employee can open Microsoft Outlook in their browser and read and reply to emails, but they will not be able to open any documents or browse any Microsoft websites without first enrolling their devices into management.

Key Takeaways

How we treat the devices that our employees and partners use to access corporate data is an integral component of our Zero Trust model. By verifying device health, we extend the enforcement capabilities of Zero Trust. A verified device, associated with a verified identity, has become the core checkpoint across our Zero Trust model. We're currently working toward achieving better control over administrative permissions on client devices and a more seamless device enrollment and management process for every device, including Linux-based operating systems. As we continue to strengthen our processes for verifying device health, we're strengthening our entire Zero Trust model.


The post Verifying device health at Microsoft with Zero Trust appeared first on Inside Track Blog.

]]>
Providing employees with virtual loaner devices with Windows 365 http://approjects.co.za/?big=insidetrack/blog/providing-employees-with-virtual-loaner-devices-with-windows-365/ Thu, 05 Sep 2024 15:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=16349 Watch as Dave Rodriguez interviews Trent Berghofer about using the Windows 365 Cloud PC platform to provide our employees with virtual loaner PCs when they need a backup machine to keep working. Rodriguez is a principal product manager on the Frictionless Devices team in Microsoft Digital, the company’s IT organization. He talks with Berghofer about […]

The post Providing employees with virtual loaner devices with Windows 365 appeared first on Inside Track Blog.

]]>

Watch as Dave Rodriguez interviews Trent Berghofer about using the Windows 365 Cloud PC platform to provide our employees with virtual loaner PCs when they need a backup machine to keep working.

Rodriguez is a principal product manager on the Frictionless Devices team in Microsoft Digital, the company’s IT organization. He talks with Berghofer about using the Windows 365 Cloud PC platform to provide employees with a low-touch, personalized, secure Windows experience hosted on Microsoft Azure.

“With Windows 365 Cloud PC, we’ve been able to accelerate our digital-first support model for hybrid employees and deemphasize our reliance on walk-up, in-person support at the on-site service locations,” says Berghofer, general manager of Field IT Management and leader of the Support team in Microsoft Digital.

Issuing Cloud PCs to our employees lets them keep working on a machine they already own or have with them, because we don’t have to send them physical backup machines. They get back to productivity faster, and we reduce our costs.

Watch this video to see Trent Berghofer (left) and Dave Rodriguez (right) discuss how we’re using Windows 365 to provide our employees with virtual loaner PCs when they need backup machines to keep working.

The post Providing employees with virtual loaner devices with Windows 365 appeared first on Inside Track Blog.

]]>
Finding and fixing network outages in minutes—not hours—with real-time telemetry at Microsoft http://approjects.co.za/?big=insidetrack/blog/finding-and-fixing-network-outages-in-minutes-not-hours-with-real-time-telemetry-at-microsoft/ Thu, 29 Aug 2024 15:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=16333 With more than 600 physical worksites around the world, Microsoft has one of the largest network infrastructure footprints on the planet. Managing the thousands of devices that keep those locations connected demands constant attention from a global team of network engineers. It’s their job to monitor and maintain those devices. And when outages occur, they […]

The post Finding and fixing network outages in minutes—not hours—with real-time telemetry at Microsoft appeared first on Inside Track Blog.

]]>

With more than 600 physical worksites around the world, Microsoft has one of the largest network infrastructure footprints on the planet.

Managing the thousands of devices that keep those locations connected demands constant attention from a global team of network engineers. It’s their job to monitor and maintain those devices. And when outages occur, they lead the charge to repair and remediate the situation.

To support their work, our Real Time Telemetry team at Microsoft Digital, the company’s IT organization, has introduced new capabilities that help engineers identify network device outages and capture data faster and more extensively than ever before. Through real-time telemetry, network engineers can isolate and remediate issues in minutes—not hours—to keep their colleagues productive and our technology running smoothly.

Immediacy is everything

Aayush Dave, Astha Sinha, Abhijit Vijay, Daniel Menten, and Martin O’Flaherty (not pictured) are part of the Microsoft Digital Real Time Telemetry team enabling more up-to-date and extensive network device data.

Conventional network monitoring uses the Simple Network Management Protocol (SNMP) architecture, which retrieves network telemetry through periodic, pull-based polls and other legacy technologies. At Microsoft, that polling interval typically ranges between five minutes and six hours.

SNMP is a foundational telemetry architecture with decades of legacy. It’s ubiquitous, but it doesn’t allow for the most up-to-date data possible.

“The biggest pain point we’ve always heard from network engineers is latency in the data,” says Astha Sinha, senior product manager for the Infrastructure and Engineering Services team in Microsoft Digital. “When data is stale, engineers can’t react quickly to outages, and that has implications for security and productivity.”

Serious vulnerabilities and liabilities arise when a network device outage occurs. But because of lags between polling intervals, a network engineer might not receive information or alerts about the situation until long after it happens.

We assembled the Real Time Telemetry team as part of our Infrastructure and Engineering Services to close that gap.

“We build the tools and automations that network engineers use to better manage their networks,” says Martin O’Flaherty, principal product manager for the Infrastructure and Engineering Services team in Microsoft Digital. “To do that, we need to make sure they have the right signals as early and as consistently as possible.”

The technology that powers these possibilities is known as streaming telemetry. It relies on network devices compatible with the more modern gRPC Network Management Interface (gNMI) telemetry protocol and other technologies to support a push-based approach to network monitoring where network devices stream data constantly.

This architecture isn’t new, but our team is scaling and programmatizing how that data becomes available by creating a real-time telemetry apparatus that collects, stores, and delivers network information to service engineers. These capabilities offer several benefits.

The advantages of real-time network device telemetry

  • Superior anomaly detection, reduced intent and configuration drift, a foundation for large-scale automation, and less network downtime.
  • Better detection of breaches, vulnerabilities, and bugs through automated scans for OS stalls, lateral device hijacking, malware, and other common vulnerabilities.
  • Visibility into real-time utilization data on network device stats, steady replacement of current data-collection technology, and more scalable network growth and evolution.
  • More rapid network fixes, reducing the baselines for time-to-detection and time-to-mitigation for incidents.

“Devices are proactively sending data without having to wait for requests, so they function more efficiently and facilitate timely troubleshooting and optimization,” says Abhijit Vijay, principal software engineering manager with the Infrastructure and Engineering Services team in Microsoft Digital. “Since this approach pushes data continuously rather than at specific intervals, it also reduces additional network traffic and scales better in larger, more complex environments.”

At any given time, Microsoft operates 25,000 to 30,000 network devices, managed by engineers working across 10 different service lines. Accounting for all their needs while keeping data collection manageable and efficient requires extensive collaboration and prioritization.

We also had to account for compatibility. With so many network devices in operation, replacement lifecycles vary. Not all of them are currently gNMI-compatible.

Working with our service lines, we identified the use cases that would provide the best possible ROI, largely based on where we would find the greatest benefits for security and where networks offered a meaningful number of gNMI-compatible devices. We also zeroed in on the types of data that would be the most broadly useful. Being selective helped us preserve resources and avoid overwhelming engineers with too much data.

We built our internal solution entirely using Azure components, including Azure Functions, Azure Kubernetes Service (AKS), Azure Cosmos DB, Redis, and Azure Data Lake. The result is a platform that network engineers can use to access real-time telemetry data.
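Conceptually, that platform splits incoming device data into a hot path (the latest value per device and metric, served quickly) and a cold path (history retained for analysis). As a hedged, in-memory analogue of that split (class and field names are invented; the real system uses the Azure services named above), it might be sketched like this:

```python
import time
from collections import defaultdict, deque

class TelemetryStore:
    """Toy stand-in for a hot path (latest values) and cold path (history)."""

    def __init__(self, history_len=1000):
        self.latest = {}  # hot path: latest record per (device, metric)
        # cold path: bounded history per (device, metric)
        self.history = defaultdict(lambda: deque(maxlen=history_len))

    def ingest(self, device, metric, value, ts=None):
        """Record a pushed telemetry sample on both paths."""
        ts = ts if ts is not None else time.time()
        record = {"metric": metric, "value": value, "ts": ts}
        self.latest[(device, metric)] = record
        self.history[(device, metric)].append(record)

    def current(self, device, metric):
        """Return the most recent sample, or None if never seen."""
        return self.latest.get((device, metric))

store = TelemetryStore()
store.ingest("switch-01", "cpu_util", 42, ts=100.0)
store.ingest("switch-01", "cpu_util", 97, ts=101.0)
print(store.current("switch-01", "cpu_util")["value"])  # 97
```

The design point the sketch illustrates is that engineers querying "what is this device doing right now" never pay the cost of scanning history, while the history remains available for incident forensics.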

With key service lines, use cases, and a base of technology in place, we worked with network engineers to onboard the relevant devices. From there, their service lines were free to experiment with our solution on real-world incidents.

Better response times, greater network reliability

Service lines are already experiencing big wins.

In one case, a heating and cooling system went offline for a building in the company’s Millennium Campus in Redmond, Washington. A lack of environmental management has the potential to cause structural damage to buildings if left unchecked, so it was important to resolve this issue as quickly as possible. The service line for wired onsite connections sprang into action as soon as they received a network support ticket.

With real-time telemetry enabled, the team created a Kusto query to compare DOT1X access-session data for the day of the outage with a period before the outage started. Almost immediately, they spotted problematic VLAN switching, including the exact time and duration of the outage. By correlating the timestamps, they determined that the RADIUS registrations of the device owner had expired, which caused the devices to switch into the guest network as part of the Zero Trust network implementation.

As a result, the team was able to resolve the registration issues and restore the heating and cooling systems in 10 minutes—a process that might have taken hours using other collection methods due to the lag time between polling intervals.
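The team's actual comparison ran as a Kusto query against DOT1X access-session records. Purely to illustrate the core logic (the records, field names, and values below are invented), a Python analogue that flags a device's VLAN assignment changing between sessions might look like this:

```python
# Hypothetical access-session records; field names and values are invented.
sessions = [
    {"ts": "2024-06-01T08:00", "device": "hvac-ctrl-1", "vlan": "corp"},
    {"ts": "2024-06-02T08:00", "device": "hvac-ctrl-1", "vlan": "guest"},
    {"ts": "2024-06-02T08:05", "device": "hvac-ctrl-1", "vlan": "guest"},
]

def vlan_changes(records):
    """Flag consecutive sessions where a device's VLAN assignment changed."""
    changes = []
    last_seen = {}  # device -> most recent record
    for record in sorted(records, key=lambda r: r["ts"]):
        prev = last_seen.get(record["device"])
        if prev and prev["vlan"] != record["vlan"]:
            changes.append(
                (record["device"], prev["vlan"], record["vlan"], record["ts"])
            )
        last_seen[record["device"]] = record
    return changes

print(vlan_changes(sessions))
# [('hvac-ctrl-1', 'corp', 'guest', '2024-06-02T08:00')]
```

In the incident described above, the equivalent query surfaced exactly this kind of corp-to-guest switch, which pointed the team straight at the expired RADIUS registrations.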

“This has the potential to improve alerting, reduce outages, and enhance security,” says Daniel Menten, senior cloud network engineer for site infrastructure management on the Site Wired team. “One of the benefits of real-time telemetry is that it lets us capture information that wasn’t previously available—or that we received too slowly to take action.”

It’s about speeding up how we identify issues and how we then respond to them.  

“With this level of observability, engineers that monitor issues and outages benefit from enhanced experiences,” says Aayush Dave, a product manager on the Infrastructure and Engineering Services team in Microsoft Digital. “And that’s going to make our network more reliable and performant in a world where security issues and outages can have a global impact.”

The future is in real time

Now that real-time telemetry has demonstrated its value, our efforts are focused on broadening and deepening the experience.

“More devices mean more impact,” Dave says. “By increasing the number of network devices that facilitate real-time telemetry, we’re giving our engineers the tools to accelerate their response to these incidents and outages, all leading to enhanced performance and a more robust network reliability posture.”

It’s also about layering on new ways of accessing and using the data.

We’ve just released a preview UI that provides a quick look at essential data, as well as an all-up view of devices in an engineer’s service line. This dashboard will enable a self-service model that makes it even easier to isolate essential telemetry without the need for engineers to create or integrate their own interfaces.

That kind of observability isn’t only about outages. It also enables optimization by helping engineers understand and influence how devices work together.

The depth and quality of real-time telemetry data also provides a wealth of information for training AI models. With enough data spread across enough devices, predictive analysis might be able to provide preemptive alerts when the kinds of network signals that tend to accompany outages appear.

“We’re paving the way for an AIOps future where the system won’t just predict potential issues, but initiate self-healing actions,” says Rob Beneson, partner director of software engineering on the Infrastructure and Engineering Services team in Microsoft Digital.

It’s work that aligns with our company mission.

“This transformation is enhancing our internal user experience and maintaining the network connectivity that’s critical for our ultimate goal,” Beneson says. “We want to empower every person and organization on the planet to achieve more.”

Key Takeaways

Here are some tips for getting started with real-time telemetry at your company:

  • Start with your users. Ask them about pain points, what scares them, and what they need.
  • Start small and go step by step to get the core architecture in place, then work up to the glossier UI and UX elements.
  • Be mindful of onboarding challenges like bugs in vendor hardware and software, especially around security controls.
  • You’ll find plenty of edge cases and code fails, so be prepared to invest in revisiting challenges and fixing problems that arise.
  • Make sure you have a use case and a problem to solve. Have a plan to guide your adoption and use before you turn on real-time telemetry.
  • Make sure you have the proper data infrastructure in place and an apparatus for storing your data.
  • Communicate and demonstrate the value of this solution to the teams who need to invest resources into onboarding it.
  • Prioritize visibility into the devices and data you’ve onboarded through pilots and hero scenarios, then scale onboarding further according to your teams’ needs.
  • Integrate as much as possible. Consider visualizations and pushing into existing network graphs and tools to surface data where engineers already work.

The post Finding and fixing network outages in minutes—not hours—with real-time telemetry at Microsoft appeared first on Inside Track Blog.

]]>
Hardware-backed Windows 11 empowers Microsoft with secure-by-default baseline http://approjects.co.za/?big=insidetrack/blog/hardware-backed-windows-11-empowers-microsoft-with-secure-by-default-baseline/ Wed, 28 Aug 2024 15:00:12 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=11692 Windows 11 makes secure-by-default viable thanks to a combination of modern hardware and software. This ready out-of-the-box protection enables us to create a new baseline internally across Microsoft, one that level sets our enterprise to be more secure for a hybrid workplace. “We’ve made significant strides to create chip-to-cloud Zero Trust out of the box,” […]

The post Hardware-backed Windows 11 empowers Microsoft with secure-by-default baseline appeared first on Inside Track Blog.

]]>
Microsoft Digital stories

Windows 11 makes secure-by-default viable thanks to a combination of modern hardware and software. This ready out-of-the-box protection enables us to create a new baseline internally across Microsoft, one that level sets our enterprise to be more secure for a hybrid workplace.

“We’ve made significant strides to create chip-to-cloud Zero Trust out of the box,” says David Weston, vice president of Enterprise and OS Security at Microsoft. “Windows 11 is redesigned for hybrid work and security with built-in hardware-based isolation, proven encryption, and our strongest protection against malware.”

This new baseline for protection is one of several reasons Microsoft upgraded to Windows 11.

In addition to a better user experience and improved productivity for hybrid work, the new hardware-backed security features create the foundation for new protections. This empowers us to not only protect our enterprise but also our customers.

[Discover how Microsoft uses Zero Trust to protect our users. Learn how new security features for Windows 11 help protect hybrid work. Find out about Windows 11 security by design from chip to the cloud. Get more information about how Secured-core devices protect against firmware attacks.]

How Windows 11 advanced our security journey

Upgrading to Windows 11 gives you more out-of-the-box security options for protecting your company, says David Weston, vice president of Enterprise and OS Security at Microsoft.

Security has always been the top priority here at Microsoft.

We process an average of 65 trillion signals per day, 2.5 billion of which are endpoint queries, and we block more than 1,200 password attacks per second. We can analyze these threats to get better at guarding our perimeter, but we can also put new protections in place to reduce the risk posed by persistent attacks.

In 2019, we announced Secured-core PCs designed to utilize firmware protections for Windows users. Enabled by Trusted Platform Module (TPM) 2.0 chips, Secured-core PCs protect encryption keys, user credentials, and other sensitive data behind a hardware barrier. This prevents bad actors and malware from accessing or altering user data and goes a long way in addressing the volume of security events we experience.

“Our data shows that these devices are more resilient to malware than PCs that don’t meet the Secured-core specifications,” Weston says. “TPM 2.0 is a critical building block for protecting user identities and data. For many enterprises, including Microsoft, TPM facilitates Zero Trust security by measuring the health of a device using hardware that is resilient to tampering common with software-only solutions.”

We’ve long used Zero Trust—always verify explicitly, offer least-privilege access, and assume breach—to keep our users and environment safe. Rather than behaving as though everything behind the corporate firewall is secure, Zero Trust reinforces a motto of “never trust, always verify.”

The additional layer of protection offered by TPM 2.0 makes it easier for us to strengthen Zero Trust. That’s why hardware plays a big part in Windows 11 security features. The hardware-backed features of Windows 11 create additional interference against malware, ransomware, and more sophisticated hardware-based attacks.

At a high level, Windows 11 enforced sets of functionalities that we needed anyway. It drove the environment to demonstrate that we were more secure by default. Now we can enforce security features in the Windows 11 pipeline to give users additional protections.

—Carmichael Patton, principal program manager, Digital Security and Resilience

Windows 11 is the alignment of hardware and software to elevate security capabilities. By enforcing a hardware requirement, we can now do more than ever to keep our users, products, and customers safe.

Setting a new baseline at Microsoft

Windows 11 reduces how many policies you need to set up for your security protections to kick in, says Carmichael Patton, a principal program manager with Microsoft Digital Security and Resilience.

While some security features were previously available via configuration, TPM 2.0 allows Windows 11 to protect users immediately, without IT admins or security professionals having to set specific policies.

“At a high level, Windows 11 enforced sets of functionalities that we needed anyway,” says Carmichael Patton, a principal program manager with Digital Security and Resilience, the organization responsible for protecting Microsoft and our products. “It drove the environment to demonstrate that we were more secure by default. Now we can enforce security features in the Windows 11 pipeline to give users additional protections.”

Thus, getting Windows 11 out to our users was a top priority.

Over the course of five weeks, we were able to deploy Windows 11 across 90 percent of eligible devices at Microsoft. Proving to be the least disruptive release to date, this effort ensured that our users would be immediately covered by baseline protections for a hybrid world.

We can now look across our enterprise and know that users running Windows 11 have a consistent level of protection in place.

The real impact of secure-by-default

Moving from configurable to built-in protection means that Windows 11 becomes the foundation for secure systems as you move up the stack.

It simplifies everything for everyone, including IT admins who may not also be security experts. You can change configurations and optimize Windows 11 protections based on your needs or rely on default security settings. Secure-by-default extends the same flexibility to users, allowing them to safely choose their own applications while still maintaining tight security.

—David Weston, vice president, Enterprise and OS Security

Applications, identity, and the cloud are able to build off the hardware root-of-trust that Windows 11 derives from TPM 2.0. Application security measures like Smart App Control and passwordless sign-in from Windows Hello for Business are all enabled due to hardware-backed protections in the operating system.

Secure-by-default does all of this without removing the important flexibility that has always been part of Windows.

“It simplifies everything for everyone, including IT admins who may not also be security experts,” Weston says. “You can change configurations and optimize Windows 11 protections based on your needs or rely on default security settings. Secure-by-default extends the same flexibility to users, allowing them to safely choose their own applications while still maintaining tight security.”

Key Takeaways
Going forward, IT admins working in Windows 11 no longer need to put extra effort into enabling and testing security features for performance compatibility. Windows 11 makes it easier for us to gain security value without extra work.

This is important when you consider productivity, one of the other drivers for Windows 11. We need to empower our users to stay productive wherever they are. These new security components go hand-in-hand with our productivity requirements. Our users stay safe without seeing any decline in quality, performance, or experience.

“With Windows 11, the focus is on productivity and thinking about security from the ground up,” Patton says. “We know we can do these amazing things, especially with security being front and center.”

Now that Windows 11 is deployed across Microsoft, we can take advantage of TPM 2.0 to bring even greater protections to our users, customers, and products. We’ve already seen this with the Windows 11 2022 update.

For example, Windows Defender Application Control (WDAC) enables us to prevent scripting attacks while protecting users from running untrusted applications associated with malware. Other updates include improvements to IT policy and compliance through config lock: a feature that monitors and prevents configuration drift from occurring when users with local admin rights change settings.

These are the kinds of protections made possible with Windows 11.

“Future releases of Windows 11 will continue to add significant security updates that add even more protection from the chip to the cloud by combining modern hardware and software,” Weston says. “Windows 11 is a better way for everyone to collaborate, share, and present, all with the confidence of hardware-backed protections.”



The post Hardware-backed Windows 11 empowers Microsoft with secure-by-default baseline appeared first on Inside Track Blog.

]]>
Reimagining content management at Microsoft with SharePoint Premium http://approjects.co.za/?big=insidetrack/blog/reimagining-content-management-at-microsoft-with-sharepoint-premium/ Thu, 15 Aug 2024 16:10:38 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=16193 At Microsoft, we’ve rolled out SharePoint Premium across the company, including in Microsoft Digital, the company’s IT organization where we’re using it to transform how the company manages its content. SharePoint is the backbone of our content management and collaboration strategy. We use it to enable our employees to access, share, and co-create documents across […]

The post Reimagining content management at Microsoft with SharePoint Premium appeared first on Inside Track Blog.

]]>
Microsoft Digital technical stories

At Microsoft, we’ve rolled out SharePoint Premium across the company, including in Microsoft Digital, the company’s IT organization where we’re using it to transform how the company manages its content.

SharePoint is the backbone of our content management and collaboration strategy. We use it to enable our employees to access, share, and co-create documents across teams and devices for more than 600,000 sites containing 350 million pieces of content and more than 12 petabytes of data. It’s at the core of everything we do, from being the place where individual employees and small teams store and share their work, to being home to our very largest portals, where the entire company comes together to find news and perform important common tasks.

At this scale, we continually face the challenge of ensuring that our content stored in SharePoint is secure, compliant, and easy to find and use.

It’s a big task, according to Stan Liu, senior product manager and knowledge management lead at Microsoft Digital.

Stan Liu (left), Ray Peer (right), and Sean Squires (not pictured) are part of a team that’s deploying SharePoint Premium to create a new culture of content management at Microsoft.

“We have a complex environment,” Liu says. “With more than 300,000 users accessing the Microsoft 365 tenant across multiple global regions, a significant amount of content is being created and stored within our SharePoint environment.”

Liu is no stranger to the challenges of managing SharePoint at scale.

“We have several teams creating content and many trying to find content,” he says. “Discoverability is always at the front of our minds and making content easy to find requires time and effort in SharePoint.”

Liu’s team is focused on making content management as simple and effective as possible for Microsoft employees. SharePoint users at Microsoft Digital perform many manual tasks to keep SharePoint content secure, compliant, and easy to find and use. They apply their efforts to provide better governance over constantly increasing digital content, prevent accidental sharing, and effectively manage the content lifecycle.

At this scale, with the challenges of discoverability and manual effort clearly in focus, Liu’s team has turned to SharePoint Premium to meet these challenges and prepare Microsoft Digital for the next generation of content management and usage scenarios.

Discovering, automating, and more with SharePoint Premium

SharePoint Premium uses the power of Microsoft Azure Cognitive Services and the Microsoft Power Platform to bring AI, automation, and added security to SharePoint content experiences, processing, and governance. It delivers new ways to engage with our most critical content, managing and protecting it through its lifecycle.

AI is at the root of the SharePoint Premium feature set, enhancing productivity and collaboration. AI-driven search provides personalized and relevant search results by understanding user intent and context. AI-powered insights help users discover patterns and trends in their data, enabling more informed decision-making. AI-automated workflows and content management streamline processes, while AI-infused advanced security measures ensure data protection.

SharePoint Premium includes a large set of services, including:

  • Autofill columns. Autofill columns use large language models to automatically pull, condense, or create content from files in a SharePoint document library. This feature allows selected columns to store metadata without manual input, simplifying file management and data organization.
  • Content assembly. Content assembly automates the creation of routine business documents, including contracts, statements of work, service agreements, consent letters, and other types of correspondence.
  • Document processing. Using prebuilt, structured, unstructured, and freeform document processing models, SharePoint Premium can extract information from many document types, such as contracts, invoices, and receipts. It can also detect and extract sensitive information from documents.
  • Image tagging. Image tagging helps users find and manage images in SharePoint document libraries. The image-tagging service automatically tags images with descriptive keywords using AI. These keywords are stored in a managed metadata column, making it easier to search, sort, filter, and manage the images.
  • Taxonomy tagging. Taxonomy tagging helps users find and manage terms in SharePoint document libraries. SharePoint Premium uses AI to automatically tag documents with terms or term sets configured in the taxonomy store. These terms and sets are stored in a managed metadata column, making documents easier to search, sort, filter, and manage.
  • Document translation. SharePoint Premium can create a translated copy of a document or video transcript in a SharePoint document library while preserving the file’s original format and structure.
  • SharePoint eSignature. SharePoint eSignature facilitates the sending of electronic signature requests, ensuring documents remain within Microsoft 365 during the review and signing process. eSignature can efficiently and securely dispatch documents to be signed by individuals within or outside the organization.
  • Optical character recognition. The optical character recognition (OCR) service extracts printed or handwritten text from images. SharePoint Premium automatically scans the image files, extracts the relevant text, and makes the text from the images available for search and indexing. This enables quick and accurate location of key phrases and terms.

“SharePoint Premium is really built around discovery and automation, with a huge emphasis on AI to help perform tasks efficiently at scale,” says Sean Squires, a principal product manager in the OneDrive and SharePoint Product Group. “We need that granular control and understanding of how our content and intellectual property is represented, shared, and used.”

Creating a culture of content management

There’s also a cultural element that’s critical to the team’s work.

“SharePoint Premium represents a shift in how Microsoft Digital approaches content management, not just as a new technology but as a new way of working,” Liu says. “It’s about integrating AI capabilities into daily practices to automate mundane tasks like tagging content, making it more discoverable, and keeping it up to date. This integration aims to make content management a part of daily habits and routines, ensuring content remains relevant and useful.”

Liu highlights the importance of making content management a daily habit and how AI can simplify the process. He recognizes the need for a cultural shift to incentivize active participation in content management. It’s also important to measure the impact of content contributions on others. The goal is to make content management processes, such as classifying content, a regular practice to ensure high-quality content within the enterprise.

Part of the cultural shift is in how we think about SharePoint itself. Moving from “site-centric” to “document-centric” usage of SharePoint signifies a strategic shift in how we manage SharePoint content at Microsoft Digital. Metadata and content context are critical to ensuring our content is easy to find and relevant, and we’re leaning on SharePoint Premium features to help us do that. Incentivizing active participation in content management and making it a daily habit for our employees is critical to a wider and more consistent realization of the benefits provided by SharePoint Premium across the organization.

“How do we find ways to make things easier without somebody having to do anything?” asks Ray Peer, a senior product manager in Microsoft Digital. “That’s where we’re using the SharePoint Premium AI capabilities to help with things like automatic processing and auto-tagging. These are mundane tasks that people don’t like to do. So instead of just forcing change on the culture, we’re finding ways to make it easier for the culture to change.”

Microsoft Digital has already seen huge successes in making it easier for the culture to change with SharePoint Premium.

The Microsoft Cloud Operations & Innovation Finance team experienced several issues in accurately tracking and managing their invoices. In certain situations, the team found it difficult to find unpaid invoices or uncover missing information in invoices. These issues made it more difficult to keep track of payments and created delays in locating invoices.

To address these issues, they created a SharePoint site dedicated to invoice management for the finance team. It used the prebuilt SharePoint Premium document processing models to automatically extract important data from invoices uploaded to the document library, including PO numbers, dates, amounts, and client information. They added column metadata to track payment status and applied conditional formatting and highlighting to categorize invoices and draw attention to missing information in invoice fields.

It’s a perfect example of how an AI-driven feature like document processing in SharePoint Premium can radically transform a business process within a simple SharePoint document library. The solution reduced costs, decreased processing times, improved accuracy, and enabled better compliance for the Microsoft Cloud Operations & Innovation Finance team.

Peer reiterates that solutions like this have a way of gaining momentum in the organization.

“This solution quickly came to the attention of other finance-based departments within Microsoft,” Peer says. “Other managers wanted the same benefits and asked for the same solution. It was easy to replicate, and suddenly, those benefits were multiplied across the company.”

It’s not an isolated situation. Many other business groups have similar stories.

The Microsoft Partner Incentive Operations team sends hundreds of letters to Microsoft partners daily using a set of Microsoft Word templates. IT staff created and updated the templates by hand. On average, it took 75 minutes to create a template and 30 minutes to review each letter and send it to a partner organization.

To improve efficiency, they implemented a new letter generation process for partner letters based on the SharePoint Premium Content Assembly service. They created a SharePoint modern template document for each letter type they used and integrated the templates with data sourced from internal systems, customized for each partner by market, region, and sales offer type.

The new solution created a flexible method for creating partner letters with dynamic placeholders in the document and multiple letter formats, including text, tables, and conditional sections, all driven by a self-serve UI. Letter creators could completely automate the letter creation process without any manual intervention.

The new solution created more consistent partner letter results, and the automated process saved the team more than 6,000 hours per year in manual template creation and refresh tasks, leading to a 30% increase in business agility and a decrease in time-to-market.
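Conceptually, the Content Assembly step merges per-partner data into a template with dynamic placeholders. This minimal Python sketch uses hypothetical field names and plain `string.Template` substitution in place of SharePoint Premium's actual service:

```python
from string import Template

# A modern-template stand-in: the placeholder names are hypothetical,
# mirroring the dynamic fields the team wired into their letter templates.
LETTER_TEMPLATE = Template(
    "Dear $partner_name,\n\n"
    "Your $offer_type incentive for the $region region has been approved.\n\n"
    "Regards,\nPartner Incentive Operations"
)

def generate_letters(partners: list[dict]) -> list[str]:
    """Merge per-partner data into the template, as Content Assembly does at scale."""
    return [LETTER_TEMPLATE.substitute(p) for p in partners]
```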

Integrating SharePoint Premium with Microsoft 365 Copilot

Microsoft 365 Copilot integrates seamlessly with SharePoint Premium to enhance its capabilities, particularly in automation and AI. The content AI and intelligent document processing built into SharePoint Premium use advanced machine learning models to classify content, organize it, extract relevant information, and automate workflows at scale. The improvements in metadata and content quality directly improve the performance and results in Copilot.

Copilot complements SharePoint Premium by using large language models to assist with document creation, Q&A, and running complex queries. It can help find specific documents based on criteria and automate tasks like translations or routing documents to appropriate teams. The integration aims to democratize the ability to configure complex machine learning models, making it easier for users to apply them to their content and achieve significant productivity gains.

The symbiotic relationship between Copilot and SharePoint Premium is particularly evident in their shared goal of automating content processing. For example, SharePoint Premium can automatically tag documents with metadata, which Copilot can then use to perform more robust queries and assist with organizing content. This collaboration represents a step towards a future where sophisticated AI-driven workflows are accessible to all users, enhancing productivity and efficiency across the organization.

It’s a vision that’s already becoming a reality at Microsoft Digital.

Looking forward

We’re anticipating a near future where AI-based content management capabilities and automation fully intersect with large language models and language understanding services to create a sophisticated combination of intelligence and automation.

“We can easily envision the capability to perform a set of complex tasks over complex content with a single prompt,” Squires says. “I might ask Microsoft 365 Copilot to find all invoices for the Fabrikam company worth more than $10,000 from 2023 and send copies of those invoices to my finance manager. SharePoint Premium is putting that future within reach at Microsoft Digital, and that’s exciting.”
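Under the hood, a prompt like that resolves to a filter over metadata columns that SharePoint Premium has already populated. Here is a hypothetical sketch of such a query, with field names of our own invention:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_type: str   # populated by document processing ("invoice", "receipt", ...)
    company: str    # extracted metadata column
    amount: float   # extracted metadata column
    year: int       # extracted metadata column

def find_invoices(docs: list[Doc], company: str,
                  min_amount: float, year: int) -> list[Doc]:
    """Filter over metadata columns: the query such a prompt might resolve to."""
    return [d for d in docs
            if d.doc_type == "invoice" and d.company == company
            and d.amount > min_amount and d.year == year]
```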

Microsoft Digital will continue to invest in SharePoint Premium capabilities across the organization and work with the product group as Customer Zero, growing SharePoint Premium features to push the boundaries of what’s possible with AI-powered content management.

Key Takeaways

Here are a few takeaways that can help you get started with SharePoint Premium in your organization:

  • Explore the different Content AI services that SharePoint Premium offers, such as autofill columns, content assembly, document processing, image tagging, taxonomy tagging, document translation, eSignature, and optical character recognition.
  • Identify the business processes and scenarios in your organization that could benefit from AI-driven content management and automation, such as invoice tracking, partner or customer correspondence, document creation, and content discovery.
  • Learn how to configure and use SharePoint Premium features in your SharePoint document libraries, such as creating and applying metadata columns, setting up content assembly templates, enabling document processing models, and using image and taxonomy tagging.
  • Integrate Microsoft 365 Copilot with SharePoint Premium to enhance your content experiences and workflows, such as querying for specific documents, translating content, routing documents to appropriate teams, and creating documents with natural language prompts.

The post Reimagining content management at Microsoft with SharePoint Premium appeared first on Inside Track Blog.

]]>
16193
Implementing a Zero Trust security model at Microsoft http://approjects.co.za/?big=insidetrack/blog/implementing-a-zero-trust-security-model-at-microsoft/ Tue, 23 Jul 2024 08:01:02 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=9344 At Microsoft, our shift to a Zero Trust security model more than five years ago has helped us navigate many challenges. The increasing prevalence of cloud-based services, mobile computing, internet of things (IoT), and bring your own device (BYOD) in the workforce have changed the technology landscape for the modern enterprise. Security architectures that rely […]

The post Implementing a Zero Trust security model at Microsoft appeared first on Inside Track Blog.

]]>
Microsoft Digital technical stories

At Microsoft, our shift to a Zero Trust security model more than five years ago has helped us navigate many challenges.

The increasing prevalence of cloud-based services, mobile computing, internet of things (IoT), and bring your own device (BYOD) in the workforce have changed the technology landscape for the modern enterprise. Security architectures that rely on network firewalls and virtual private networks (VPNs) to isolate and restrict access to corporate technology resources and services are no longer sufficient for a workforce that regularly requires access to applications and resources that exist beyond traditional corporate network boundaries. The shift to the internet as the network of choice and the continuously evolving threats led us to adopt a Zero Trust security model internally here at Microsoft. Though our journey began many years ago, we expect that it will continue to evolve for years to come.

[Learn how we’re transitioning to modern access architecture with Zero Trust. Find out how to enable a remote workforce by embracing Zero Trust security. Running on VPN: Learn how we’re keeping our remote workforce connected.]
For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=ZVLlEj2So4E, select the “More actions” button (three dots icon) below the video, and then select “Show transcript.”

Carmichael Patton, a security architect at Microsoft, shares the work that his team, Digital Security and Resiliency, has been doing to support a Zero Trust security model.

The Zero Trust model

Based on the principle of verified trust—in order to trust, you must first verify—Zero Trust eliminates the inherent trust that is assumed inside the traditional corporate network. Zero Trust architecture reduces risk across all environments by establishing strong identity verification, validating device compliance prior to granting access, and ensuring least privilege access to only explicitly authorized resources.

Zero Trust requires that every transaction between systems (user identity, device, network, and applications) be validated and proven trustworthy before the transaction can occur. In an ideal Zero Trust environment, the following behaviors are required:

  • Identities are validated and secure with multifactor authentication (MFA) everywhere. Using multifactor authentication eliminates password expirations and eventually will eliminate passwords. The added use of biometrics ensures strong authentication for user-backed identities.
  • Devices are managed and validated as healthy. Device health validation is required. All device types and operating systems must meet a required minimum health state as a condition of access to any Microsoft resource.
  • Telemetry is pervasive. Pervasive data and telemetry are used to understand the current security state, identify gaps in coverage, validate the impact of new controls, and correlate data across all applications and services in the environment. Robust and standardized auditing, monitoring, and telemetry capabilities are core requirements across users, devices, applications, services, and access patterns.
  • Least privilege access is enforced. Limit access to only the applications, services, and infrastructure required to perform the job function. Access solutions that provide broad access to networks without segmentation or are scoped to specific resources, such as broad access VPN, must be eliminated.
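Taken together, these behaviors amount to a gate that every transaction must pass. The following sketch is purely illustrative (the names are ours, not a Microsoft API), but it captures the logic: deny by default, and grant access only when identity, device health, and least-privilege scope all check out.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_verified: bool        # identity validated with MFA
    device_healthy: bool      # device managed and meets minimum health state
    resource: str             # resource being requested
    granted_scopes: set[str]  # resources this identity is explicitly authorized for

def authorize(req: AccessRequest) -> bool:
    """Zero Trust gate: no inherent trust; every condition must hold."""
    if not req.mfa_verified:
        return False          # verify identity everywhere
    if not req.device_healthy:
        return False          # verify device health before granting access
    # Least privilege: only explicitly authorized resources, never broad access.
    return req.resource in req.granted_scopes
```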

Zero Trust scenarios

We have identified four core scenarios at Microsoft to help achieve Zero Trust. These scenarios satisfy the requirements for strong identity, enrollment in device management and device-health validation, alternative access for unmanaged devices, and validation of application health. The core scenarios are described here:

  • Scenario 1: Applications and services have the mechanisms to validate multifactor authentication and device health.
  • Scenario 2: Employees can enroll devices into a modern management system which guarantees the health of the device to control access to company resources.
  • Scenario 3: Employees and business guests have a method to access corporate resources when not using a managed device.
  • Scenario 4: Access to resources is limited to the minimum required—least privilege access—to perform a specified function.

Zero Trust scope and phases

We’re taking a structured approach toward Zero Trust, in an effort that spans many technologies and organizations and requires investments over multiple years. The figure below represents a high-level view of the Zero Trust goals that we aim to fully achieve over the next two to three years, grouped into our core Zero Trust pillars. We will continually evaluate these goals and adjust them if necessary. While these goals don’t represent the full scope of the Zero Trust efforts and work streams, they capture the most significant areas of Zero Trust effort at Microsoft.


Pre-Zero Trust characteristics compared to the four pillars of Zero Trust implementation: Verify identity, Verify device, Verify access, and Verify services.
The major goals for each Zero Trust pillar.

Scope

Our initial scope for implementing Zero Trust focused on common corporate services used across our enterprise—our employees, partners, and vendors. Our Zero Trust implementation targeted the core set of applications that Microsoft employees use daily (e.g., Microsoft Office apps, line-of-business apps) on platforms like iOS, Android, macOS, and Windows (Linux is an eventual goal). As we have progressed, our focus has expanded to include all applications used across Microsoft. Any corporate-owned or personal device that accesses company resources must be managed through our device management systems.

Verify identity

To begin enhancing security for the environment, we implemented MFA using smart cards to control administrative access to servers. We later expanded the multifactor authentication requirement to include all users accessing resources from outside the corporate network. The massive increase in mobile devices connecting to corporate resources pushed us to evolve our multifactor authentication system from physical smart cards to a phone-based challenge (phone-factor) and later into a more modern experience using the Microsoft Authenticator application.

The most recent progress in this area is the widespread deployment of Windows Hello for Business for biometric authentication. While Windows Hello hasn’t completely eliminated passwords in our environment, it has significantly reduced password usage and enabled us to remove our password-expiration policy. Additionally, multifactor authentication validation is required for all accounts, including guest accounts, when accessing Microsoft resources.

Verify device

Our first step toward device verification was enrolling devices into a device-management system. We have since completed the rollout of device management for Windows, Mac, iOS, and Android. Many of our high-traffic applications and services, such as Microsoft 365 and VPN, enforce device health for user access. Additionally, we’ve started using device management to enable proper device health validation, a foundational component that allows us to set and enforce health policies for devices accessing Microsoft resources. We’re using Windows Autopilot for device provisioning, which ensures that all new Windows devices delivered to employees are already enrolled in our modern device management system.

Devices accessing the corporate wireless network must also be enrolled in the device-management system. This includes both Microsoft–owned devices and personal BYOD devices. If employees want to use their personal devices to access Microsoft resources, the devices must be enrolled and adhere to the same device-health policies that govern corporate-owned devices. For devices where enrollment in device management isn’t an option, we’ve created a secure access model called Microsoft Azure Virtual Desktop. Virtual Desktop creates a session with a virtual machine that meets the device-management requirements. This allows individuals using unmanaged devices to securely access select Microsoft resources. Additionally, we’ve created a browser-based experience allowing access to some Microsoft 365 applications with limited functionality.

There is still work remaining within the verify device pillar. We’re in the process of enabling device management for Linux devices and expanding the number of applications enforcing device management to eventually include all applications and services. We’re also expanding the number of resources available when connecting through the Virtual Desktop service. Finally, we’re expanding device-health policies to be more robust and enabling validation across all applications and services.

Verify access

In the verify access pillar, our focus is on segmenting users and devices across purpose-built networks, migrating all Microsoft employees to use the internet as the default network, and automatically routing users and devices to appropriate network segments. We’ve made significant progress in our network-segmentation efforts. We have successfully deployed several network segments, both for users and devices, including the creation of a new internet-default wireless network across all Microsoft buildings. All users have received policy updates to their systems, thus making this internet-based network their new default.

As part of the new wireless network rollout, we also deployed a device-registration portal. This portal allows users to self-identify, register, or modify devices to ensure that the devices connect to the appropriate network segment. Through this portal, users can register guest devices, user devices, and IoT devices.

We’re also creating specialized segments, including purpose-built segments for the various IoT devices and scenarios used throughout the organization. We have nearly completed the migration of our highest-priority IoT devices in Microsoft offices into the appropriate segments.

We still have a lot of work to do within the verify access pillar. We’re following the investments in our wireless networks with similar wired network investments. For IoT, we need to complete the migration of the remaining high-priority devices in Microsoft offices and then start on high-priority devices in our datacenters. After these devices are migrated, we’ll start migrating lower-priority devices. Finally, we’re building auto-detection for devices and users, which will route them to the appropriate segment without requiring registration in the device-registration portal.

Verify services

In the verify services pillar, our efforts center on enabling conditional access across all applications and services. To achieve full conditional access validation, a key effort requires modernizing legacy applications or implementing solutions for applications and services that can’t natively support conditional access systems. This has the added benefit of eliminating the dependency on VPN and the corporate network. We’ve enabled auto-VPN for all users, which automatically routes users through the appropriate connection. Our goal is to eliminate the need for VPN and create a seamless experience for accessing corporate resources from the internet. With auto-VPN, the user’s system will transparently determine how to connect to resources, bypassing VPN for resources available directly from the internet or using VPN when connecting to a resource that is only available on the corporate network.
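The routing decision itself is simple to illustrate. In this hypothetical sketch, a catalog of internet-reachable hosts stands in for the real policy that auto-VPN consults on the user's system:

```python
# Illustrative auto-VPN routing: resources published to the internet are
# reached directly; only corporate-network-only resources fall back to the
# VPN tunnel. The catalog below is a hypothetical stand-in for real policy.
INTERNET_REACHABLE = {"mail.contoso.com", "portal.contoso.com"}

def route(resource_host: str) -> str:
    """Return 'direct' for internet-available resources, 'vpn' otherwise."""
    return "direct" if resource_host in INTERNET_REACHABLE else "vpn"
```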

Amid the COVID-19 pandemic, a large percentage of our user population transitioned to working from home. This shift drove a significant increase in remote network connectivity. In this environment, we’ve successfully identified and engaged application owners to initiate plans to make these applications and services accessible over the internet without VPN.

While we have taken the first steps toward modernizing legacy applications and services that still use VPN, we are in the process of establishing clear plans and timelines for enabling access from the internet. We also plan to invest in extending the portfolio of applications and services enforcing conditional access beyond Microsoft 365 and VPN.

Zero Trust architecture with Microsoft services

The graphic below provides a simplified reference architecture for our approach to implementing Zero Trust. The primary components of this process are Intune for device management and device security policy configuration, Microsoft Azure Active Directory (Azure AD) conditional access for device health validation, and Azure AD for user and device inventory.

The system works with Intune, by pushing device configuration requirements to the managed devices. The device then generates a statement of health, which is stored in Microsoft Azure AD. When the device user requests access to a resource, the device health state is verified as part of the authentication exchange with Azure AD.
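That exchange can be sketched as follows. This is an illustration of the flow only: real enforcement lives in Intune and Azure AD conditional access, and the in-memory dictionary is a stand-in for the Azure AD device inventory.

```python
# Illustrative flow only. Real enforcement happens in Intune and Azure AD
# conditional access, not in application code.
DEVICE_HEALTH_STORE: dict[str, bool] = {}  # stand-in for the Azure AD inventory

def report_health(device_id: str, encrypted: bool, compliant_os: bool) -> None:
    """Device side: evaluate pushed policy and record a statement of health."""
    DEVICE_HEALTH_STORE[device_id] = encrypted and compliant_os

def authenticate(user_verified: bool, device_id: str) -> bool:
    """Auth exchange: device health is checked alongside the user credential."""
    return user_verified and DEVICE_HEALTH_STORE.get(device_id, False)
```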


Users and devices in an unprivileged network.
Microsoft’s internal Zero Trust architecture.

A transition that’s paying off

Our transition to a Zero Trust model has made significant progress. Over the last several years, we’ve increased identity-authentication strength with expanded coverage of strong authentication and a transition to biometrics-based authentication by using Windows Hello for Business. We’ve deployed device management and device-health validation capabilities across all major platforms and will soon add Linux. We’ve also launched an Azure Virtual Desktop system that provides secure access to company resources from unmanaged devices.

As we continue our progress, we’re making ongoing investments in Zero Trust. We’re expanding health-validation capabilities across devices and applications, increasing the Virtual Desktop features to cover more use cases, and implementing better controls on our wired network. We’re also completing our IoT migrations and segmentation and modernizing or retiring legacy applications to enable us to deprecate VPN.

Each enterprise that adopts Zero Trust will need to determine what approach best suits their unique environment. This includes balancing risk profiles with access methods, defining the scope for the implementation of Zero Trust in their environments, and determining what specific verifications they want to require for users to gain access to their company resources. In all of this, encouraging the organization-wide embrace of Zero Trust is critical to success, no matter where you decide to begin your transition.

Key Takeaways

  • Collect telemetry and evaluate risks, and then set goals.
  • Get to modern identity and MFA—then onboard to Azure AD.
  • For conditional access enforcement, focus on the most heavily used applications to ensure maximum coverage.
  • Start with simple policies for device health enforcement, such as device lock or password complexity.
  • Run pilots and ringed rollouts. Slow and steady wins the race.
  • Migrate your users to the internet and monitor VPN traffic to understand internal dependencies.
  • Focus on user experience, as it is critical to employee productivity and morale. Without adoption, your program will not be a success.
  • Communication is key—bring your employees on the journey with you!
  • Assign performance indicators and goals for all workstreams and elements, including employee sentiment.

Related links

We'd like to hear from you!

Share your feedback with us—take our survey and let us know what kind of content is most useful to you.

The post Implementing a Zero Trust security model at Microsoft appeared first on Inside Track Blog.

]]>
9344
Boosting employee device procurement at Microsoft with better forecasting http://approjects.co.za/?big=insidetrack/blog/boosting-employee-device-procurement-at-microsoft-with-better-forecasting/ Fri, 28 Jun 2024 15:16:15 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=9836 Device forecasting at Microsoft has allowed the company to plan for new hires, replace out-of-warranty devices for existing employees, and respond to major events, like the release of Windows 11. As a result, we’ve been able to strategically acquire equipment in a more efficient way. It all started with a shift to remote work. “New […]

The post Boosting employee device procurement at Microsoft with better forecasting appeared first on Inside Track Blog.

]]>
Microsoft Digital stories

Device forecasting at Microsoft has allowed the company to plan for new hires, replace out-of-warranty devices for existing employees, and respond to major events, like the release of Windows 11. As a result, we’ve been able to strategically acquire equipment in a more efficient way.

It all started with a shift to remote work.

“New employees will always need a device on day one,” says Pandurang Kamath Savagur, a senior program manager with Microsoft Digital, the organization that powers, protects, and transforms the company. “But for the first time ever, we were also in an experience where people had to stay productive from home with only a single device. They couldn’t easily get into the offices for a secondary or loaner device.”

To anticipate demand and offset delays, Microsoft Digital built a platform where administrators across the company could project the number of devices they’d need. Simultaneously, the group took a deep dive look at the current device population to forecast the number of employees who would need a device refresh—all in time for the deployment of Windows 11.

[Discover how Microsoft quickly upgraded to Windows 11. Find out how Microsoft is reinventing the employee experience for a hybrid world. Learn more about verifying devices in a Zero Trust model.]

Getting better at predicting the future

Historically, Microsoft didn’t need to build up a large inventory of devices for employees; everything was made to order.

It worked a little bit like this:

Procurement, having already certified devices and negotiated pricing and SLAs suitable for employees, enables administrators or direct employees to obtain a new employee device through our internal ProcureWeb tool. The tool places a purchase order directly to the OEM—the third-party manufacturer of the device—or a reseller who would then manufacture and ship the equipment out to the user.

But the shift in how people worked meant we’d need to be more proactive in procuring devices for employees. And to get there, we’d need a better picture of fluctuating demand.

“Business groups own the budget, so they know what the next six months will look like for their team,” Savagur says. “Microsoft onboards approximately 3,000 employees each month, and every employee needs to select and set up a device. We can’t just buy 3,000 devices a month—we need to know specifications about how it will be used.”

Everything from storage space, computing power, memory, and keyboard language to the number of units would need to be collected from business groups. Once that information came in, Procurement could work with OEMs to have machines ready and available to be delivered to administrators well in advance.

This new approach to device forecasting has streamlined the way Microsoft acquires devices, giving us adequate stock to ensure a good experience. We can now anticipate device purchases for new hires while also accounting for break fixes.

And the timing of this effort couldn’t have been better—Windows 11 was on the way, and we would need this new approach along with additional analysis to get the new operating system into the hands of employees.

Empowering Microsoft with Windows 11

Released in late 2021, Windows 11 gives us the enterprise-grade security that Microsoft requires. To achieve this secure-by-default state, we needed to replace older devices with equipment that met the Windows 11 hardware requirements.

But instead of issuing new devices to everyone at launch—something that would be both costly and logistically impossible—we took a strategic approach, using a combination of telemetry and machine learning to identify and prioritize devices for replacement.

Anqi Cheng and Neeti Sawant teamed up to transform the way the company handles its internal device forecasting. Cheng is a data scientist with the W+D Data team, and Sawant is a data engineer with Microsoft Digital.

“We have telemetry data, application usage, and warranty information, and that gives us a base to forecast from in Power BI,” says Neeti Sawant, a data engineer with Microsoft Digital who helped create a device forecasting dashboard as part of this effort. “It told us what we needed to monitor and forecast, which devices are aging out, and when they would be eligible for a refresh.”

But we weren’t just relying on warranty data alone.

Using Microsoft Azure Cosmos DB and Microsoft Azure Databricks for machine learning, we can apply survival-modeling techniques to historical device-population data, predicting how many Windows 11-ineligible primary devices would still be active in the years leading up to Windows 10 end of support.
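The idea behind the survival modeling is straightforward to sketch. The toy estimate below ignores censoring (which proper survival models such as Kaplan-Meier handle) and simply projects how many of today's devices remain active at a given horizon based on historical lifetimes; the real analysis ran at far larger scale in Azure Databricks.

```python
# Toy projection of device survival from historical lifetimes (in months).
# This ignores censoring, which real survival modeling accounts for; it is
# only meant to illustrate the forecasting idea described above.
def survival_fraction(lifetimes: list[int], horizon: int) -> float:
    """Empirical fraction of devices still active beyond `horizon` months."""
    return sum(1 for t in lifetimes if t > horizon) / len(lifetimes)

def project_active(current_count: int, lifetimes: list[int], horizon: int) -> int:
    """Estimate how many of today's devices are still active at the horizon."""
    return round(current_count * survival_fraction(lifetimes, horizon))
```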

“Not all users will replace their device at the end of warranty,” says Anqi Cheng, a data scientist with the W+D Data team at Microsoft. “Although many devices will naturally age out over time, many users hang on to their devices for an extended time. When combined with other device forecasting data, we had a holistic view of the landscape.”

This level of analysis ensured Microsoft would be able to quickly develop a roadmap for getting employees on Windows 11.

A bright forecast for Microsoft

Employees at Microsoft can—and should—expect to have a device that engages, protects, and empowers them. Device forecasting makes this possible.

“Device forecasting has allowed us to work closely with OEMs so that devices are not selected on availability, but rather meeting all the performance, compliance, and security needs of our users,” Savagur says. This effort has resulted in a better experience for employees. “Satisfaction scores from employees have increased by 20 points since we started doing this.”

Access to device forecasting information has also been helpful to admins and Finance, who now have a better idea as to which devices will need to be refreshed for Windows 11. Moving into the future, these same projections will make it easier for Procurement to put the right device into an employee’s hands.

“With the analysis provided to us by Microsoft Digital, we can now understand how many primary devices are in our environment and when we expect them to refresh,” says Colby McNorton, a senior program manager on the Microsoft Procurement team. “As we look forward, instead of the purchasing journey being reactive, we can proactively reach out to users and tell them that their device is at the end of its life and even recommend a device based on what we know about usage.”

Thanks to Windows Autopilot, new devices are automatically pre-configured with Windows 11. Windows Autopilot deploys an OEM-optimized version of the Windows client, so you don’t have to maintain custom images and drivers for every device model. This makes new devices business-ready faster, empowering employees to stay engaged and protected. Users can just switch on, sign in, and all policies and apps will be in place within a day.

 

Key Takeaways

  • Be sure to get visibility into your device population. Find out what kinds of devices are on your network, where they’re located, who owns them, and what stage they’re at in their lifecycle. This gives you a lot of agility in a changing environment. You can do this using Microsoft Intune.
  • Windows 10 and Windows 11 can be co-managed side by side using the same tools and processes, which makes it possible for Microsoft and other companies to be methodical about replacing devices.
  • Spend time with team admins who understand user needs. This allows you to cultivate a short list of devices that are best suited for your employees and gives procurement clear priorities.

Related links

We'd like to hear from you!
Want more information? Email us and include a link to this story and we’ll get back to you.

Please share your feedback with us—take our survey and let us know what kind of content is most useful to you.

The post Boosting employee device procurement at Microsoft with better forecasting appeared first on Inside Track Blog.

]]>
Improving security by protecting elevated-privilege accounts at Microsoft http://approjects.co.za/?big=insidetrack/blog/improving-security-by-protecting-elevated-privilege-accounts-at-microsoft/ Fri, 21 Jun 2024 12:50:21 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=9774 [Editor’s note: This content was written to highlight a particular event or moment in time. Although that moment has passed, we’re republishing it here so you can see what our thinking and experience was like at the time.] An ever-evolving digital landscape is forcing organizations to adapt and expand to stay ahead of innovative and […]

The post Improving security by protecting elevated-privilege accounts at Microsoft appeared first on Inside Track Blog.

]]>
Microsoft Digital technical stories[Editor’s note: This content was written to highlight a particular event or moment in time. Although that moment has passed, we’re republishing it here so you can see what our thinking and experience was like at the time.]

An ever-evolving digital landscape is forcing organizations to adapt and expand to stay ahead of innovative and complex security risks. Increasingly sophisticated and targeted threats, including phishing campaigns and malware attacks, attempt to harvest credentials or exploit hardware vulnerabilities that allow movement to other parts of the network, where they can do more damage or gain access to unprotected information.

We on the Microsoft Digital Employee Experience (MDEE) team, like many IT organizations, used to employ a traditional IT approach to securing the enterprise. We now know that effective security calls for a defense-in-depth approach that requires us to look at the whole environment—and everyone that accesses it—to implement policies and standards that better address risks.

To dramatically limit our attack surface and protect our assets, we developed and implemented our own defense-in-depth approach. This includes new company standards, telemetry, monitoring, tools, and processes to protect administrators and other elevated-privilege accounts.

In an environment where there are too many administrators, or elevated-privilege accounts, there is an increased risk of compromise. When elevated access is persistent or elevated-privilege accounts use the same credentials to access multiple resources, a compromised account can become a major breach.

This blog post highlights the steps we are taking at Microsoft to protect our environment and administrators, including new programs, tools, and considerations, and the challenges we faced. We will provide some details about the new “Protect the Administrators” program that is positively impacting the Microsoft ecosystem. This program takes security to the next level across the entire enterprise, ultimately changing our digital-landscape security approach.

[Learn how we’re protecting high-risk environments with secure admin workstations. Read about implementing a Zero Trust security model at Microsoft. Learn more about how we manage Privileged Access Workstations.]

Understanding defense-in-depth protection

Securing all environments within your organization is a great first step in protecting your company. But there’s no silver-bullet solution that will magically counter all threats. At Microsoft, information protection rests on a defense-in-depth approach built on device health, identity management, and data and telemetry—a concept illustrated by the three-legged security stool, in the graphic below. Getting security right is a balancing act. For a security solution to be effective, it must address all three aspects of risk mitigation on a base of risk management and assurance—or the stool topples over and information protection is at risk.

Information protection depicted as a stool with three legs that represent device health, identity management, and data and telemetry.
The three-legged-stool approach to information protection.

Risk-based approach

Though we would like to be able to fix everything at once, that simply isn’t feasible. We created a risk-based approach to help us prioritize every major initiative. We used a holistic strategy that evaluated all environments, administrative roles, and access points to help us define our most critical roles and resources within the Microsoft ecosystem. Once defined, we could identify the key initiatives that would help protect the areas that represent the highest levels of risk.

As illustrated in the graphic below, the access-level roles that pose a higher risk should have fewer accounts—helping reduce the impact to the organization and control entry.

The next sections focus primarily on protecting elevated user accounts and the “Protect the Administrators” program. We’ll also discuss key security initiatives that are relevant to other engineering organizations across Microsoft.

Illustration of the risk-role pyramid we use to help prioritize security initiatives.
The risk-role pyramid.

Implementing the Protect the Administrators program

After doing a deeper analysis of our environments, roles, and access points, we developed a multifaceted approach to protecting our administrators and other elevated-privilege accounts. Key solutions include:

  • Working to ensure that our standards and processes are current, and that the enterprise is compliant with them.
  • Creating a targeted reduction campaign to scale down the number of individuals with elevated-privilege accounts.
  • Auditing elevated-privilege accounts and role management to help ensure that only employees who need elevated access retain elevated-access privileges.
  • Creating a High Value Asset (HVA)—an isolated, high-risk environment—to host a secure infrastructure and help reduce the attack surface.
  • Providing secure devices to administrators. Secure admin workstations (SAWs) provide a “secure keyboard” in a locked-down environment that helps curb credential-theft and credential-reuse scenarios.
  • Reporting metrics and data that help us share our story with corporate leadership as well as getting buy-in from administrators and other users who have elevated-privilege accounts across the company.

Defining your corporate landscape

In the past, equipment was primarily on-premises, and it was assumed to be easier to keep development, test, and production environments separate, secure, and well-isolated without a lot of crossover. Users often had access to more than one of these environments but used a persistent identity—a unique combination of username and password—to log into all three. After all, it’s easier to remember login information for a persistent identity than it is to create separate identities for each environment. But because we had strict network boundaries, this persistent identity wasn’t a source of concern.

Today, that’s not the case. The advent of the cloud has dissolved the classic network edge. The use of on-premises datacenters, cloud datacenters, and hybrid solutions are common in nearly every company. Using one persistent identity across all environments can increase the attack surface exposed to adversaries. If compromised, it can yield access to all company environments. That’s what makes identity today’s true new perimeter.

At Microsoft, we reviewed our ecosystem to analyze whether we could keep production and non-production environments separate. We used our Red Team/penetration (PEN) testers to help us validate our holistic approach to security, and they provided great guidance on how to further establish a secure ecosystem.

The graphic below illustrates the Microsoft ecosystem, past and present. We have three major types of environments in our ecosystem today: our Microsoft and Office 365 tenants, Microsoft Azure subscriptions, and on-premises datacenters. We now treat them all like a production environment with no division between production and non-production (development and test) environments.

Microsoft ecosystem then and now. Three environment types now: Microsoft/Office 365 tenants, Azure subscriptions, on-premises datacenters.
Now, everything is considered a “production” environment. We treat our three major environments in the Microsoft ecosystem like production.

Refining roles to reduce attack surfaces

Prior to embarking on the “Protect the Administrators” program, we felt it was necessary to evaluate every role with elevated privileges to determine their level of access and capability within our landscape. Part of the process was to identify tooling that would also protect company security (identity, security, device, and non-persistent access).

Our goal was to provide administrators the means to perform their necessary duties in support of the technical operations of Microsoft with the necessary security tooling, processes, and access capabilities—but with the lowest level of access possible.

The top security threats that every organization faces stem from too many employees having too much persistent access. Every organization’s goal should be to dramatically limit their attack surface and reduce the amount of “traversing” (lateral movement across resources) a breach will allow, should a credential be compromised. This is done by limiting elevated-privilege accounts to employees whose roles require access and by ensuring that the access granted is commensurate with each role. This is known as “least-privileged access.” The first step in reaching this goal is understanding and redefining the roles in your company that require elevated privileges.

Defining roles

We started with basic definitions. An information-worker account does not allow elevated privileges, is connected to the corporate network, and has access to productivity tools that let the user do things like log into SharePoint, use applications like Microsoft Excel and Word, read and send email, and browse the web.

We defined an administrator as a person who is responsible for the development, build, configuration, maintenance, support, and reliable operations of applications, networks, systems, and/or environments (cloud or on-premises datacenters). In general terms, an administrator account is one of the elevated-privilege accounts that has more access than an information worker’s account.

Using role-based controls to establish elevated-privilege roles

We used a role-based access control (RBAC) model to establish which specific elevated-privilege roles were needed to perform the duties required within each line-of-business application in support of Microsoft operations. From there, we deduced a minimum number of accounts needed for each RBAC role and started the process of eliminating the excess accounts. Using the RBAC model, we went back and identified a variety of roles requiring elevated privileges in each environment.

For the Microsoft Azure environments, we used RBAC, built on Microsoft Azure Resource Manager, to manage who has access to Azure resources and to define what they can do with those resources and what areas they have access to. Using RBAC, you can segregate duties within your team and grant to users only the amount of access that they need to perform their jobs. Instead of giving everybody unrestricted permissions in our Azure subscription or resources, we allow only certain actions at a particular scope.
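The scope-based check at the heart of this model can be sketched in a few lines of Python. The role names and scope strings below are illustrative stand-ins, not the actual Azure Resource Manager implementation; the point is that assignments are granted at a scope and inherit downward to child resources.

```python
from dataclasses import dataclass

# Hypothetical role definitions: each role allows a set of actions.
ROLES = {
    "Reader":      {"read"},
    "Contributor": {"read", "write"},
    "Owner":       {"read", "write", "manage-access"},
}

@dataclass(frozen=True)
class Assignment:
    principal: str  # user or group
    role: str       # key into ROLES
    scope: str      # e.g. "/subscriptions/sub1/resourceGroups/rg1"

def is_authorized(assignments, principal, action, resource):
    """Allow the action if any assignment grants it at the resource's
    scope or any parent scope (assignments inherit downward).
    Note: naive prefix matching is a simplification for illustration."""
    return any(
        a.principal == principal
        and action in ROLES[a.role]
        and resource.startswith(a.scope)
        for a in assignments
    )

assignments = [
    Assignment("alice", "Owner", "/subscriptions/sub1"),
    Assignment("bob", "Reader", "/subscriptions/sub1/resourceGroups/rg1"),
]
```

Here "bob" can read within his one resource group but can't write anywhere, while "alice" holds broader rights at the subscription scope, which is exactly the kind of distinction that lets you grant only the access each role requires.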

Performing role attestation

We explored role attestation for administrators who moved laterally within the company to make sure their elevated privileges didn’t move with them into the new roles. Limited checks and balances were in place to ensure that the right privileges were applied or removed when someone’s role changed. We fixed this immediately through a quarterly attestation process that required the individual, the manager, and the role owner to approve continued access to the role.
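A minimal sketch of such an attestation check follows, with hypothetical names; the real process runs through internal tooling, but the rule is the same: access continues only when all three parties sign off.

```python
from dataclasses import dataclass, field

# The three sign-offs the quarterly process requires.
APPROVERS_REQUIRED = {"individual", "manager", "role_owner"}

@dataclass
class Attestation:
    """Quarterly re-approval of an elevated role. Access is retained
    only if every required party approves; otherwise it lapses."""
    principal: str
    role: str
    approvals: set = field(default_factory=set)

    def approve(self, approver_kind: str):
        if approver_kind not in APPROVERS_REQUIRED:
            raise ValueError(f"unknown approver: {approver_kind}")
        self.approvals.add(approver_kind)

    @property
    def retained(self) -> bool:
        return self.approvals == APPROVERS_REQUIRED
```

Because the default is an empty approval set, a role that nobody attests to simply expires, which is the safe failure mode.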

Implementing least-privileged access

We identified those roles that absolutely required elevated access, but not all elevated-privilege accounts are created equal. Limiting the attack surface visible to potential aggressors depends not only on reducing the number of elevated-privilege accounts. It also relies on only providing elevated-privilege accounts with the least-privileged access needed to get their respective jobs done.

For example, consider the idea of crown jewels kept in the royal family’s castle. There are many roles within the operations of the castle, such as the king, the queen, the cook, the cleaning staff, and the royal guard. Not everyone can or should have access everywhere. The king and queen hold the only keys to the crown jewels. The cook needs access only to the kitchen, the larder, and the dining room. The cleaning staff needs limited access everywhere, but only to clean, and the royal guard needs access to areas where the king and queen are. No one other than the king and queen, however, needs access to the crown jewels. This system of restricted access provides two benefits:

  • Only those who absolutely require access to a castle area have keys, and only to perform their assigned jobs, nothing more. If the cook tries to access the crown jewels, security alarms notify the royal guard, along with the king and queen.
  • Only two people, the king and queen, have access to the crown jewels. Should anything happen to the crown jewels, a targeted evaluation of those two people takes place and doesn’t require involvement of the cook, the cleaning staff, or the royal guard because they don’t have access.

This is the concept of least-privileged access: We only allow you access to a specific role to perform a specific activity within a specific amount of time from a secure device while logged in from a secure identity.

Creating a secure high-risk environment

We can’t truly secure our devices without having a highly secure datacenter to build and house our infrastructure. We used HVA to implement a multitiered and highly secure high-risk environment (HRE) for isolated hosting. We treated our HRE as a private cloud that lives inside a secure datacenter and is isolated from dependencies on external systems, teams, and services. Our secure tools and services are built within the HRE.

Traditional corporate networks were typically walled only at the external perimeters. Once an attacker gained access, it was easier for a breach to move across systems and environments. Production servers often reside on the same segments or on the same levels of access as clients, so you inherently gain access to servers and systems. If you start building some of your systems but you’re still dependent on older tools and services that run in your production environment, it’s hard to break those dependencies. Each one increases your risk of compromise.

It’s important to remember that security awareness requires ongoing hygiene. New tools, resources, portals, and functionality are constantly coming online or being updated. For example, certain web browsers sometimes release updates weekly. We must continually review and approve the new releases, and then repackage and deploy the replacement to approved locations. Many companies don’t have a thorough application-review process, which increases their attack surface due to poor hygiene (for example, multiple versions, third-party and malware-infested application challenges, unrestricted URL access, and lack of awareness).

The initial challenge we faced was discovering all the applications and tools that administrators were using so we could review, certify, package, and sign them as approved applications for use in the HRE and on SAWs. We also needed to implement a thorough application-review process, specific to the applications in the HRE.

Our HRE was built as a trust-nothing environment. It’s isolated from other less-secure systems within the company and can only be accessed from a SAW—making it harder for adversaries to move laterally through the network looking for the weakest link. We use a combination of automation, identity isolation, and traditional firewall isolation techniques to maintain boundaries between servers, services, and the customers who use them. Admin identities are distinct from standard corporate identities and subject to more restrictive credential- and lifecycle-management practices. Admin access is scoped according to the principle of least privilege, with separate admin identities for each service. This isolation limits the scope that any one account could compromise. Additionally, every setting and configuration in the HRE must be explicitly reviewed and defined. The HRE provides a highly secure foundation that allows us to build protected solutions, services, and systems for our administrators.

Secure devices

Secure admin workstations (SAWs) are limited-use client machines that substantially reduce the risk of compromise. They are an important part of our layered, defense-in-depth approach to security. A SAW doesn’t grant rights to any actual resources—it provides a “secure keyboard” in which an administrator can connect to a secure server, which itself connects to the HRE.

A SAW is an administrative-and-productivity-device-in-one, designed and built by Microsoft for one of our most critical resources—our administrators. Each administrator has a single device, a SAW, where they have a hosted virtual machine (VM) to perform their administrative duties and a corporate VM for productivity work like email, Microsoft Office products, and web browsing.

Administrators must keep their secure devices with them when working and are responsible for them at all times. This requirement meant the secure device had to be portable, so we developed a laptop that’s a securely controlled and provisioned workstation. It’s designed for managing valuable production systems and for daily activities like email, document editing, and development work. The administrative partition in the SAW curbs credential-theft and credential-reuse scenarios by locking down the environment. The productivity partition is a VM with access like any other corporate device.

The SAW host is a restricted environment:

  • It allows only signed or approved applications to run.
  • The user doesn’t have local administrative privileges on the device.
  • By design, the user can browse only a restricted set of web destinations.
  • All automatic updates from external parties and third-party add-ons or plug-ins are disabled.

Again, the SAW controls are only as good as the environment that holds them, which means that the SAW isn’t possible without the HRE. Maintaining adherence to SAW and HRE controls requires an ongoing operational investment, similar to any Infrastructure as a Service (IaaS). Our engineers code-review and code-sign all applications, scripts, tools, and any other software that operates or runs on top of the SAW. The administrator user has no ability to download new scripts, coding modules, or software outside of a formal software distribution system. Anything added to the SAW gets reviewed before it’s allowed on the device.

As we onboard an internal team onto SAW, we work with them to ensure that their services and endpoints are accessible using a SAW device. We also help them integrate their processes with SAW services.

Provisioning the administrator

Once a team has adopted the new company standard of requiring administrators to use a SAW, we deploy the Microsoft Azure-based Conditional Access (CA) policy. As part of CA policy enforcement, administrators can’t use their elevated privileges without a SAW. Between the time that an administrator places an order and receives the new SAW, we provide temporary access to a SAW device so they can still get their work done.

We ensure security at every step within our supply chain. That includes using a dedicated manufacturing line exclusive to SAWs, ensuring chain of custody from manufacturing to end-user validation. Since SAWs are built and configured for the specific user rather than pulling from existing inventory, the process is much different from how we provision standard corporate devices. The additional security controls in the SAW supply chain add complexity and can make scaling a challenge from the global-procurement perspective.

Supporting the administrator

SAWs come with dedicated, security-aware support services from our Secure Admin Services (SAS) team. The SAS team is responsible for the HRE and the critical SAW devices—providing around-the-clock role-service support to administrators.

The SAS team owns and supports a service portal that facilitates SAW ordering and fulfillment, role management for approved users, application and URL hosting, SAW assignment, and SAW reassignment. They’re also available in a development operations (DevOps) model to assist the teams that are adopting SAWs.

As different organizations within Microsoft choose to adopt SAWs, the SAS team works to ensure they understand what they are signing up for. The team provides an overview of their support and service structure and the HRE/SAW solution architecture, as illustrated in the graphic below.

A high-level overview of the HRE/SAW solution architecture, including SAS team and DevOps support services.
An overview of an isolated HRE, a SAW, and the services that help support administrators.

Today, the SAS team provides support service to more than 40,000 administrators across the company. We have more work to do as we enforce SAW usage across all teams in the company and stretch into different roles and responsibilities.

Password vaulting

The password-vaulting service allows passwords to be securely encrypted and stored for future retrieval. This eliminates the need for administrators to remember passwords, which has often resulted in passwords being written down, shared, and compromised.

SAS Password Vaulting is composed of two internal, custom services currently offered through our SAS team:

  • A custom solution to manage domain-based service accounts and shared password lists.
  • A local administrator password solution (LAPS) to manage server-local administrator and integrated Lights-Out (iLO) device accounts.

Password management is further enhanced by the service’s capability to automatically generate and roll complex random passwords. This ensures that privileged accounts have high-strength passwords that are changed regularly and reduces the risk of credential theft.
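Python’s `secrets` module can illustrate the generate-and-roll pattern. This is a conceptual sketch only, not the internal vaulting service: it produces a high-entropy password covering all character classes and overwrites the stored secret so the old value is never reused.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 32) -> str:
    """Generate a cryptographically random password, retrying until it
    contains at least one character from each class."""
    while True:
        pw = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

def roll_password(vault: dict, account: str) -> str:
    """Replace the stored secret with a fresh one (a stand-in for the
    vaulting service's scheduled rotation)."""
    vault[account] = generate_password()
    return vault[account]
```

In the real service, rotation happens automatically on a schedule, and retrieval is gated behind the same access controls as any other privileged operation.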

Administrative policies

We’ve put administrative policies in place for privileged-account management. They’re designed to protect the enterprise from risks associated with elevated administrative rights. Microsoft Digital reduces attack vectors with an assortment of security services, including SAS and Identity and Access Management, that enhance the security posture of the business. Especially important is the implementation of usage metrics for threat and vulnerability management. When a threat or vulnerability is detected, we work with our Cyber Defense Operations Center (CDOC) team. Using a variety of monitoring systems through data and telemetry measures, we ensure that compliance and enforcement teams are notified immediately. Their engagement is key to keeping the ecosystem secure.

Just-in-time entitlement system

Least-privileged access paired with a just-in-time (JIT) entitlement system provides the least amount of access to administrators for the shortest period of time. A JIT entitlement system allows users to elevate their entitlements for limited periods of time to complete elevated-privilege and administrative duties. The elevated privileges normally last between four and eight hours.

JIT allows removal of users’ persistent administrative access (via Active Directory Security Groups) and replaces those entitlements with the ability to elevate into roles on demand and just in time. We used proper RBAC approaches with an emphasis on providing access only to what is absolutely required. We also implemented access controls to remove excess access (for example, Global Administrator or Domain Administrator privileges).

An example of how JIT is part of our overarching defense-in-depth strategy is a scenario in which an administrator’s smartcard and PIN are stolen. Even with the physical card and the PIN, an attacker would have to successfully navigate a JIT workflow process before the account would have any access rights.
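The core mechanics of time-boxed elevation can be sketched as follows. This is a conceptual illustration with hypothetical names, not the production JIT service: grants carry an expiry, and an expired grant is indistinguishable from no grant at all.

```python
import time

class JitEntitlements:
    """Time-boxed role grants. Checks treat missing and expired grants
    identically, so there is no persistent access to revoke."""

    def __init__(self):
        self._grants = {}  # (principal, role) -> expiry timestamp

    def elevate(self, principal, role, hours=4):
        # In the real workflow this step sits behind an approval
        # process and device checks (for example, SAW-only access).
        self._grants[(principal, role)] = time.time() + hours * 3600

    def has_role(self, principal, role, now=None):
        now = time.time() if now is None else now
        expiry = self._grants.get((principal, role))
        return expiry is not None and now < expiry
```

Because elevation defaults to a few hours, a stolen credential is only useful if the attacker can also complete the elevation workflow within that window.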

Key Takeaways

In the three years this project has been going on, we have learned that an ongoing commitment and investment are critical to providing defense-in-depth protection in an ever-evolving work environment. We have learned a few things that could help other companies as they decide to better protect their administrators and, thus, their company assets:

  • Securing all environments. We needed to evolve the way we looked at our environments. Through evolving company strategy and our Red Team/PEN testing, it has been proven numerous times that successful system attacks take advantage of weak controls or bad hygiene in a development environment to access and cause havoc in production.
  • Influencing, rather than forcing, cultural change. Microsoft employees have historically had the flexibility and freedom to do amazing things with the products and technology they had on hand. Efforts to impose any structure, rigor, or limitation on that freedom can be challenging. Taking people’s flexibility away from them, even in the name of security, can generate friction. Inherently, employees want to do the right thing when it comes to security and will adopt new and better processes and tools as long as they understand the need for them. Full support of the leadership team is critical in persuading users to change how they think about security. It was important that we developed compelling narratives for areas of change, and had the data and metrics to reinforce our messaging.
  • Scaling SAW procurement. We secure every aspect of the end-to-end supply chain for SAWs. This level of diligence does result in more oversight and overhead. While there might be some traction around the concept of providing SAWs to all employees who have elevated-access roles, it would still be very challenging for us to scale to that level of demand. From a global perspective, it is also challenging to ensure the required chain of custody to get SAWs into the hands of administrators in more remote countries and regions. To help us overcome the challenges of scale, we used a phased approach to roll out the Admin SAW policy and provision SAWs.
  • Providing a performant SAW experience for the global workforce. We aim to provide a performant experience for all users, regardless of their location. We have users around the world, in most major countries and regions. Supporting our global workforce has required us to think through and deal with some interesting issues regarding the geo-distribution of services and resources. For instance, locations like China and some places in Europe are challenging because of connectivity requirements and performance limitations. Enforcing SAW in a global company has meant dealing with these issues so that an administrator, no matter where they are located, can effectively complete necessary work.

What’s next

As we stated before, there are no silver-bullet solutions when it comes to security. As part of our defense-in-depth approach to an ever-evolving threat landscape, there will always be new initiatives to drive.

Recently, we started exploring how to separate our administrators from our developers and using a different security approach for the developer roles. In general, developers require more flexibility than administrators.

There also continue to be many other security initiatives around device health, identity and access management, data loss protection, and corporate networking. We’re also working on the continued maturity of our compliance and governance policies and procedures.

Getting started

While it has taken us years to develop, implement, and refine our multitiered, defense-in-depth approach to security, there are some solutions that you can adopt now as you begin your journey toward improving the state of your organization’s security:

  • Design and enforce hygiene. Ensure that you have the governance in place to drive compliance. This includes controls, standards, and policies for the environment, applications, identity and access management, and elevated access. It’s also critical that standards and policies are continually refined to reflect changes in environments and security threats. Implement governance and compliance to enforce least-privileged access. Monitor resources and applications for ongoing compliance and ensure that your standards remain current as roles evolve.
  • Implement least-privileged access. Using proper RBAC approaches with an emphasis on providing access only to what is absolutely required is the concept of least-privileged access. Add the necessary access controls to remove the need for Global Administrator or Domain Administrator access. Just provide everyone with the access that they truly need. Build your applications, environments, and tools to use RBAC roles, and clearly define what each role can and can’t do.
  • Remove all persistent access. All elevated access should require JIT elevation, which adds an extra step: obtaining temporary, secure access before performing elevated-privilege work. Eliminating standing access, so that every elevation expires when it's no longer needed, narrows your exposed attack surface.
  • Provide isolated elevated-privilege credentials. Using an isolated identity substantially reduces the possibility of compromise after a successful phishing attack. Admin accounts without an inbox have no email to phish. Keeping the information-worker credential separate from the elevated-privilege credential reduces the attack surface.
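To make the ideas above concrete, here is a minimal sketch of least-privileged access combined with JIT elevation. This is a hypothetical model for illustration only, not a Microsoft API: the `Role`, `JitGrant`, and `Account` types are invented names, and a real deployment would use a managed privileged-identity service rather than in-process objects. The key property it demonstrates is that elevated access is time-boxed and confers nothing once it expires.

```python
# Illustrative model (hypothetical, not a Microsoft API): RBAC roles plus
# just-in-time (JIT) elevation that expires automatically, so no account
# holds persistent elevated privileges.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class Role:
    name: str
    allowed_actions: frozenset  # everything this role may do, nothing more


@dataclass
class JitGrant:
    role: Role
    expires_at: datetime

    def is_active(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at


@dataclass
class Account:
    user: str
    baseline_roles: list = field(default_factory=list)  # least-privileged, always on
    jit_grants: list = field(default_factory=list)      # temporary elevations only

    def request_elevation(self, role, minutes=60):
        """Grant a time-boxed elevation; nothing here is persistent."""
        grant = JitGrant(role, datetime.now(timezone.utc) + timedelta(minutes=minutes))
        self.jit_grants.append(grant)
        return grant

    def can(self, action, now=None):
        # Baseline roles cover day-to-day work; expired JIT grants confer nothing.
        if any(action in r.allowed_actions for r in self.baseline_roles):
            return True
        return any(action in g.role.allowed_actions
                   for g in self.jit_grants if g.is_active(now))


reader = Role("Reader", frozenset({"read"}))
admin = Role("ServerAdmin", frozenset({"read", "restart-service"}))

alice = Account("alice", baseline_roles=[reader])
assert alice.can("read") and not alice.can("restart-service")

grant = alice.request_elevation(admin, minutes=30)
assert alice.can("restart-service")  # active JIT grant

# After expiry, elevated access vanishes with no cleanup step required.
later = grant.expires_at + timedelta(seconds=1)
assert not alice.can("restart-service", now=later)
```

The design choice worth noting is that expiry is enforced at every access check rather than by a revocation job, so a missed cleanup can never leave standing elevated access behind.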

Microsoft Services can help

Customers interested in adopting a defense-in-depth approach to increase their security posture might want to consider implementing Privileged Access Workstations (PAW). PAWs are a key element of the Enhanced Security Administrative Environment (ESAE) reference architecture deployed by the cybersecurity professional services teams at Microsoft to protect customers against cybersecurity attacks.

For more information about engaging Microsoft Services to deploy PAWs or ESAE for your environment, contact your Microsoft representative or visit the Cybersecurity Protection page.

Reaping the rewards

Over the last two years, we've engaged an outside security audit firm to perform the Cyber Essentials Plus certification process. In 2017, the security audit engineers couldn't run most of their baseline tests because the SAW was so locked down. They called it the "most secure administrative-client audit they've ever completed."

In 2018, the security audit engineer said: “I had no chance; you have done everything right,” and added, “You are so far beyond what any other company in the industry is doing.”

Also, in 2018, our SAW project won a CSO50 Award, which recognizes security projects and initiatives that demonstrate outstanding business value and thought leadership. SAW was commended as an innovative practice and a core element of the network security strategy at Microsoft.

Ultimately, these certifications and awards help validate our defense-in-depth approach. We're building and deploying the right solutions to support our ongoing commitment to securing Microsoft and our customers' and partners' information, and it's a pleasure to see those solutions recognized as industry-leading.


The post Improving security by protecting elevated-privilege accounts at Microsoft appeared first on Inside Track Blog.
