Powering the technical veracity of AI at Microsoft with a Center of Excellence

By Jason Kellington | Inside Track Blog | April 16, 2026
When we launched our AI Center of Excellence (CoE) in 2023, we had a straightforward goal: Help our organization experiment with AI, learn quickly, and do it responsibly.

Our teams across Microsoft Digital—the company’s internal IT organization—leaned in. We built tools, workflows, and AI-enabled solutions at speed. Momentum followed, along with real enthusiasm and growth.

A photo of Wu.

“We did a lot of good work building community and excitement. But at some point, we needed to evolve and put more structure around what we’d built.”

Qingsu Wu, principal group product manager, Microsoft Digital

But increasing scale required us to evolve our approach.

As adoption accelerated, we began to see duplication, uneven governance, and growing gaps between strategy and delivery. What helped us move fast early on wasn’t enough to sustain impact over time.

“We did a lot of good work building community and excitement,” says Qingsu Wu, a principal group product manager who leads the AI CoE at Microsoft Digital. “But at some point, we needed to evolve and put more structure around what we’d built.”

AI agents and solutions began appearing across Microsoft Digital. Different teams solved similar problems. Standards were interpreted differently. Reporting was inconsistent, and in many cases manual.

The question was no longer, “How do we help teams try AI?” It became, “How do we turn AI into consistent, measurable outcomes at scale?”

Answering that question required a change in how our CoE operated.

Rather than acting primarily as an advisory group, the AI CoE evolved into an execution‑focused function. Its role expanded from guidance to coordination, helping set priorities, define guardrails, and connect AI work directly to business outcomes.

The goal wasn’t to slow AI innovation down, but to help it move in the right direction with more agility and better scalability.

Evaluating AI for Microsoft

The AI CoE connects AI strategy to execution across Microsoft Digital. It operates as a cross‑functional coordination layer that sets direction and creates shared accountability for how AI work gets done.

A photo of Khetan.

“We can see patterns that a single team can’t. We’re translating AI CoE strategy and enterprise priorities into clear execution plans that work in each organization’s context. That helps us align priorities and make sure the biggest bets are actually landing.”

Ria Khetan, senior program manager, Microsoft Digital

The CoE brings our leaders and practitioners together from AI, data, responsible AI, and operations to answer questions collectively. We use that cross‑disciplinary view to operate above individual projects without losing touch with day‑to‑day reality.

The CoE looks across the organization and answers questions individual teams can’t answer on their own.

  • What AI initiatives are already in flight?
  • Which ones matter most to the business?
  • Where are teams duplicating effort?
  • Where do we need clearer standards or stronger governance?

“We can see patterns that a single team can’t,” says Ria Khetan, a senior program manager in Microsoft Digital who helps lead program management for the AI CoE. “We’re translating AI CoE strategy and enterprise priorities into clear execution plans that work in each organization’s context. That helps us align priorities and make sure the biggest bets are actually landing.”

We’ve designed the AI CoE to act as the connective tissue between leadership intent and execution on the ground. It helps ensure that AI work across Microsoft Digital moves forward with purpose, consistency, and measurable impact.

Building transformation on core pillars

The AI CoE establishes a common structure that helps our teams work toward the same outcomes, even when they are building different solutions.

A photo of Campbell.

“We use the CoE to bring consistency to how AI work gets done. It gives us a way to step back and ask whether we’re solving the right problems and whether we’re set up to scale.”

Don Campbell, principal group technical program manager, Microsoft Digital

The operating model is intentionally simple.

AI initiatives are reviewed against shared pillars that help teams think beyond individual projects. These lenses ensure the work aligns to business priorities, can scale safely, has a clear delivery path, and supports responsible adoption.

“We use the CoE to bring consistency to how AI work gets done,” says Don Campbell, a principal group technical program manager who leads AI strategy here in Microsoft Digital. “It gives us a way to step back and ask whether we’re solving the right problems and whether we’re set up to scale.”

Our CoE uses these four pillars to guide our work:

  • Strategy. We work with product and feature teams to determine what we want to achieve with AI. They define business goals and prioritize the most important implementations and investments.
  • Architecture. We enable infrastructure, data, services, security, privacy, scalability, accessibility, and interoperability for all our AI use cases.
  • Roadmap. We build and manage implementation plans for all our AI projects, including tools, technologies, responsibilities, targets, and performance measurement.
  • Culture. We foster collaboration, innovation, education, and responsible AI among our stakeholders.

These pillars are the common language that helps us connect strategy to execution and make decisions across all teams and scenarios at Microsoft Digital.

Strategy

Our CoE strategy team’s role is to step back and create clarity.

Our strategy is driven from the organization’s top level, and executive sponsorship is crucial to executing our implementation well. When our transformation mandate comes from the organization’s leader, it resonates in every corner of the organization, every piece of work, and every task. We also encourage and welcome ideas from every level of the organization, empowering individuals to contribute their AI insights.

We maintain a centralized view of AI initiatives across Microsoft Digital, including agents, workflows, and AI‑enabled solutions. That visibility allows our CoE team to identify duplication, surface opportunities to scale successful ideas, and align investments to enterprise priorities. This creates a shared intake and prioritization model.

One of our CoE strategy team’s most significant responsibilities is prioritizing the idea pipeline for AI solutions. All employees can feed ideas into the pipeline through a form that records important details. The strategy team then evaluates each idea, analyzing two primary metrics:

  • Business value. How important is the solution to our business? Potential cost reduction, market opportunity, and user impact all factor into business value. As our business value increases, so does the idea’s position in the pipeline priority queue.
  • Implementation effort. We focus on clearly defining the problem statement—what the problem is, why it matters, who the customer is, the baseline metrics, and the plan to attribute value pre‑production. This ensures we prioritize AI for the most critical business problems and can measure impact before and after deployment.
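As a purely illustrative sketch (the idea names, scoring scales, and function below are hypothetical, not our actual intake tooling), a value-versus-effort pass over the idea pipeline might look like this:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    business_value: int        # 1-5: cost reduction, market opportunity, user impact
    implementation_effort: int  # 1-5: higher means harder to deliver and measure

def prioritize(ideas: list[Idea]) -> list[Idea]:
    """Order the pipeline so high-value, low-effort ideas surface first."""
    return sorted(ideas, key=lambda i: (-i.business_value, i.implementation_effort))

queue = prioritize([
    Idea("Support-ticket triage agent", business_value=5, implementation_effort=2),
    Idea("Meeting-notes summarizer", business_value=3, implementation_effort=1),
    Idea("Legacy report migration", business_value=3, implementation_effort=4),
])
print([i.name for i in queue])
```

The ordering mirrors the pillar’s intent: business value drives position in the queue, with effort as the tiebreaker.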

By anchoring AI work in business outcomes from the start, the strategy pillar helps ensure the organization’s energy is spent on the work that matters most.

Architecture

Our architecture pillar defines how we help teams scale AI solutions without creating security gaps, compliance issues, or technical debt they’ll have to unwind later.

“The CoE introduces a framework to enable design reviews in the early development phase. We help make sure teams are choosing the right platforms and thinking about security and compliance from the beginning.”

Qingsu Wu, principal group product manager, Microsoft Digital

Before solutions move into broader use, our architecture team helps think through data readiness, platform alignment, and governance requirements. The goal isn’t to prescribe a single architecture, but to make sure foundational decisions won’t limit scale or create risk down the line. Sometimes that work happens before development begins; other times it means improving a solution after it has launched and is in use. We also track our efforts with measurable metrics like usage.

One common pitfall is that teams may gravitate toward the most flexible platforms with full control, without fully understanding the associated security and compliance implications. To address this, we publish clear guidance to help teams choose the right platform—one that strikes the appropriate balance between flexibility and the security and compliance effort required.

Our architecture pillar helps prevent that by reinforcing a set of common expectations. Teams still build locally and move fast, but they do so within a framework that supports reuse, interoperability, and responsible operation—one that lets teams and employees experiment within guardrails that keep our production systems safe.

“The CoE introduces a framework to enable design reviews in the early development phase,” Wu says. “We help make sure teams are choosing the right platforms and thinking about security and compliance from the beginning.”

Teams are encouraged to build on recommended platforms and services that support enterprise‑grade security, observability, and lifecycle management. This helps ensure solutions can be monitored, governed, and supported over time.

Security and compliance are never treated as downstream checkpoints. Architectural guidance reinforces the need to design with identity, access controls, auditability, and responsible AI principles from the start.

When solutions prove valuable, we look for opportunities to reuse architectural patterns, components, or services rather than rebuilding them in isolation. This reduces duplication and accelerates future work.

Roadmap

Our CoE roadmap team examines the employee experience across our AI solutions and governs how we achieve the best experience throughout AI projects. It focuses on how our employees will interact with AI. Getting the roadmap right ensures user experiences are cohesive and align with our broader employee experience goals.

We’ve recognized AI’s potential to impact how our employees get their work done.

Their experiences and satisfaction levels with AI services and tools are critical. Our roadmap pillar is designed to ensure that experiences across these services and tools are complementary and cohesive.

We’re focusing on the open nature of AI interaction.

“We’re surfacing AI capabilities and information when the user needs them, according to their context,” Campbell says. “It makes the user experience and user interface for an AI service less important than how the service allows other applications or user interfaces to interact with it and harness its power.”

A key part of this approach is disciplined experimentation.

Rather than treating every idea as a long‑term commitment, the roadmap pillar helps teams validate value early. Our teams know when they’re in an experimental phase and when they’re expected to operationalize. This gives our leaders a more consistent view of progress and risk. The net result is that dependencies between teams surface earlier, when they’re easier to resolve.

Culture

Our culture pillar ensures that AI adoption across Microsoft Digital is intentional, responsible, and sustainable.

Culture underpins everything we do in the AI space. Ensuring our employees can increase their AI skillsets and access guidance for using AI responsibly is critical to AI at Microsoft.

“We’re driving a shift from ad‑hoc AI usage to intentional, outcome‑driven adoption,” Khetan says. “That requires clarity, education, and shared expectations.”

In practice, that means the culture pillar defines how our teams are expected to adopt AI and integrate it into their work, not just what tools they can use.

Our culture team works with AI champions across the organization to translate enterprise AI priorities into local execution. Those champions act as two‑way conduits, bringing real‑world feedback and blockers back to the CoE and carrying guidance, standards, and learnings back to their teams.

Without this structure, AI adoption tends to fragment as teams experiment in isolation.

Our culture team has published training, recommended practices, and our shared learnings on next-generation AI capabilities. We work with individual business groups at Microsoft to determine the needs of all the disciplines across the organization. That work extends to groups as diverse as engineering, facilities and real estate, human resources, legal, sales, and marketing, among others. 

Responsible AI is embedded throughout that work.

The CoE reinforces responsible AI practices as part of everyday decision‑making—during design, experimentation, and scale. Teams are expected to understand not just what they’re building, but the implications of how they build it.

In the AI CoE, culture isn’t abstract. It shows up in how teams propose ideas, how they design solutions, and how they measure success.

Fostering agent innovation

The true value of the AI CoE is evident when strategy, architecture, roadmap, and culture come together around real work.

A clear example of that is how we addressed the rapid growth of AI agents across the organization.

A photo of Tiwari.

“That’s the core problem we’re trying to solve. In the past, admins had to go to multiple portals just to understand how many agents exist, and they all give different answers.”

Garima Tiwari, principal product manager, Microsoft Digital

Our teams were building agents in different platforms, for different scenarios, and at very different levels of maturity. That flexibility accelerated innovation, but it also made it difficult to answer basic questions.

  • How many agents exist today?
  • Which ones are in production?
  • Which ones touch sensitive data?

The strategy lens helped clarify what mattered most. Our goal wasn’t to inventory every experiment. It was to gain visibility into agents that were active, scaling, or depended on by others, and to ensure those agents aligned to business priorities and responsible AI expectations.

Architecture quickly followed.

As the CoE looked at how agents were built, we quickly discovered that information about agents was fragmented across tools. Different platforms showed different numbers. Ownership wasn’t always clear. And governance signals were hard to reconcile.

“That’s the core problem we’re trying to solve,” says Garima Tiwari, a principal product manager in Microsoft Digital leading our internal strategy and adoption of Agent 365. “In the past, admins had to go to multiple portals just to understand how many agents exist, and they all give different answers.”

This is where Agent 365—which we use to govern agents here at Microsoft—became a critical enabler.

Agent 365 brings together signals from multiple agent‑building platforms into a single, consolidated view. That visibility allows the CoE and administrators to understand agent inventory, ownership, lifecycle state, and governance posture in one place.

“Agent 365 is really about accurate inventory and observability,” Tiwari says. “It provides one number we can trust and a way to see how agents are behaving, who they’re interacting with, and whether they’re compliant.”
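Conceptually, that consolidation resembles merging per-platform inventories into one record per agent. The sketch below is a hypothetical illustration of the idea only—the platform names, record fields, and function are invented, not Agent 365’s actual API or data model:

```python
def consolidate(inventories: dict[str, list[dict]]) -> dict[str, dict]:
    """Merge per-platform agent records into one view keyed by agent ID.

    Hypothetical record shape: {"id": ..., "owner": ..., "state": ...}.
    Later platforms fill in fields earlier ones left unset, so the merged
    record reflects the most complete signal available.
    """
    merged: dict[str, dict] = {}
    for platform, records in inventories.items():
        for rec in records:
            entry = merged.setdefault(rec["id"], {"platforms": []})
            entry["platforms"].append(platform)  # track where the agent was seen
            for key, value in rec.items():
                if key != "id" and value is not None:
                    entry.setdefault(key, value)  # first non-empty signal wins
    return merged

view = consolidate({
    "platform_a": [{"id": "a1", "owner": "teamA", "state": "production"}],
    "platform_b": [{"id": "a1", "owner": None, "state": "production"},
                   {"id": "b2", "owner": "teamB", "state": "pilot"}],
})
print(len(view))  # one trusted count instead of two conflicting platform counts
```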

That architectural clarity changed how decisions were made.

Instead of guessing what was safe to scale, the CoE could see which agents were production‑ready, which needed remediation, and which should remain in experimentation. Security, privacy, and compliance considerations moved to earlier in the lifecycle.

“We can’t scale what we don’t understand,” Wu says. “Agent 365 helps us see what’s actually running so we’re not scaling something blindly.”

The roadmap lens then brought structure to execution.

“What changed was the mindset. Teams started thinking about manageability, security, and scale much earlier, not after an agent was already deployed.”

Don Campbell, principal group technical program manager, Microsoft Digital

Rather than standardizing everything at once, the CoE helped teams sequence work. Some agents stayed in pilot. Others moved toward broader rollout, informed by architectural and governance signals surfaced through Agent 365.

Culture and enablement ran alongside that work.

Teams began factoring operational readiness into design decisions instead of treating governance as a final checkpoint. Agent 365 isn’t positioned as a control tool at the end of the process, but as part of building agents the right way from the start.

“What changed was the mindset,” Campbell says. “Teams started thinking about manageability, security, and scale much earlier, not after an agent was already deployed.”

The outcome wasn’t a single standardized solution.

It was a repeatable approach within a shared CoE framework, supported by platforms like Agent 365, that made scaling AI more visible, more manageable, and more intentional.

That’s what the AI CoE enables at Microsoft Digital.

Key takeaways

If you’re just starting to consider AI usage at your organization, or if you’re already creating a standardized approach to AI, consider the following:

  • Start with outcomes, not tools. AI work scales faster when teams align on the business problem first and select technology second.
  • Design for scale from day one. Early architectural decisions around data, security, and platforms determine whether solutions can grow—or need to be rebuilt.
  • Make experimentation disciplined. Clear paths from prototype to production help teams move fast without committing to ideas that haven’t proven value.
  • Treat governance as an enabler, not a gate. Visibility and manageability, supported by platforms like Agent 365, make it easier to scale AI responsibly.
  • Create shared accountability. Standard metrics and automated reporting turn AI activity into measurable progress.

Protecting anonymity at scale: How we built cloud-first hidden membership groups at Microsoft

By Jason Kellington | Inside Track Blog | February 26, 2026
Some Microsoft employee groups can’t afford to be visible.

For years, we supported email‑based communities internally here at Microsoft whose very existence depends on anonymity. These include employee resource groups, confidential project teams, and other sensitive audiences where simply revealing who belongs can create real‑world risk.

Traditional distribution groups make membership discoverable by default. Owners can see members. Admins can see members. In some cases, other users can infer membership through directory queries or tooling.

That model doesn’t work when anonymity is a requirement.

A photo of Reifers.

“When the SFI wave hit, it was made clear to us that we needed to keep our people safe, and to do that, we needed to build a new hidden memberships group MVP. We needed to raise the bar with modern groups, and we needed to do it in six months or miss meeting our goals.”

Brett Reifers, senior product manager, Microsoft Digital

For over 15 years, we relied on a custom, on‑premises solution that enabled employees to send and receive messages through groups with fully hidden memberships.

The system worked, but we were deprecating the Microsoft Exchange servers that it ran on. At the same time, we were also deploying our Secure Future Initiative (SFI), which required us to reassess legacy systems that could expose sensitive data or slow incident response, including hidden membership groups.

The system wasn’t broken, but it represented concentrated risk simply by existing outside our modern cloud controls and monitoring.

“When the SFI wave hit, it was made clear to us that we needed to keep our people safe, and to do that, we needed to build a new hidden memberships group MVP,” says Brett Reifers, a senior product manager in Microsoft Digital, the company’s IT organization. “We needed to raise the bar with modern groups, and we needed to do it in six months or miss meeting our goals.”

The mandate was clear. Preserve anonymity, eliminate on‑premises dependencies, and do it quickly.

A photo of Carson.

“Our solution would enable us to deprecate our legacy on-premises Exchange hardware while maintaining the privacy of our employee groups, and it would do so in a cloud-first manner.”

Nate Carson, principal service engineer, Microsoft Digital

Instead of retrofitting hidden membership into standard Microsoft 365 groups, we asked a different question: What if the group lived somewhere else entirely? What if users interacted with a simple, secure front end, while all membership expansion and mail flow occurred in a locked‑down tenant built specifically for this purpose?

That idea became the foundation for Hidden Membership Groups: A new cloud‑first architecture that would separate user experience, leverage first‑party Microsoft services, and keep our group memberships hidden from everyone—including owners and administrators—by design.

“Our solution would enable us to deprecate our legacy on-premises Exchange hardware while maintaining the privacy of our employee groups, and it would do so in a cloud-first manner,” says Nate Carson, a principal service engineer in Microsoft Digital.

Once we settled on a solution, our next step was to get support for solving a problem not many people thought much about.

“Not everyone was aware of how serious of a situation we were in,” Carson says. “We had to show everyone what was at stake, and to share our solution with them.”

After taking their plan on the road, the team got the buy-in it needed, and that’s when the real work started.

Planning to solve business problems with security built-in

Before we designed anything, we had to be clear about what success meant.

Hidden Membership Groups aren’t just another collaboration feature. They support scenarios where anonymity isn’t optional—it’s foundational. That reality shaped every requirement that we built into our solution, including:

1. Absolute privacy

Group membership couldn’t be visible to users, group owners, or administrators—under any circumstances. That requirement immediately ruled out standard group models.

2. Cloud only

Any new solution had to live entirely in our cloud, use first‑party services, and align with modern identity, security, and compliance practices. On‑premises infrastructure wasn’t an option.

3. Scale

Some groups had a handful of members. Others had tens of thousands. Membership changed frequently, and those changes had to propagate safely and predictably without exposing data or degrading performance.

4. Separation of concerns

User interaction and membership truth couldn’t live in the same place. Employees needed a simple way to discover groups, request access, and manage participation, without ever interacting with the system that stored or expanded membership.

5. Self‑service with guardrails

The solution needed to reduce operational overhead, not introduce a new bottleneck. Group lifecycle management had to be automated, auditable, and secure, while still giving teams flexibility.

6. Simple to use

Employees shouldn’t need special training. They shouldn’t need to understand tenants, identity synchronization, or mail routing. The experience needed to be intuitive, consistent, and accessible—without compromising security.

Once those requirements were clear, our solution started to emerge. Incremental changes wouldn’t be enough. A traditional group model wouldn’t work. The solution required a new architecture—one designed around isolation, automation, and intentional limitation.

That’s when we started the engineering work.

Creating a cloud-first architecture

Designing for hidden membership meant eliminating ambiguity. If any surface could reveal membership, even indirectly, it didn’t belong in the design.

That constraint led us toward a model built on strict isolation, explicit APIs, and intentionally narrow interfaces. The result is straightforward to use, but deliberately difficult to interrogate.

Two tenants, with sharply separated responsibilities

At the foundation of the solution is a two‑tenant model.

Our primary Microsoft 365 tenant is where employees authenticate, discover groups, and initiate actions. A secondary, isolated tenant hosts the distribution lists and performs mail expansion for Hidden Membership Groups.

A photo of Mace.

“Tenant isolation is what makes the privacy guarantee real. By moving membership expansion to a tenant that users and owners can’t access, we removed the possibility of accidental exposure. The system simply doesn’t give you a place where membership can be seen.”

Chad Mace, principal architect, Microsoft Digital

That separation matters because the secondary tenant isn’t designed for interactive use. Only Exchange and the minimum directory constructs required for mail routing and expansion are enabled.

Operationally, when an employee sends email to a Hidden Membership Group, they send to a mail contact visible in the corporate tenant. That contact routes to the corresponding distribution group in the isolated tenant, where membership expansion occurs. Expanded messages are then delivered back to recipients’ inboxes in the corporate tenant, so sent and received mail lives where users already work.

“Tenant isolation is what makes the privacy guarantee real,” says Chad Mace, a principal architect in Microsoft Digital. “By moving membership expansion to a tenant that users and owners can’t access, we removed the possibility of accidental exposure. The system simply doesn’t give you a place where membership can be seen.”
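The mail flow above can be sketched as a toy model. The classes and names below are hypothetical stand-ins (the real system uses Exchange mail routing and directory objects, not application code), but they show the key property: membership lives only inside the isolated tenant, and the corporate tenant exposes no surface where it can be read:

```python
class IsolatedTenant:
    """Holds the real distribution lists; membership never leaves it."""

    def __init__(self) -> None:
        self._groups: dict[str, set[str]] = {}  # private by construction

    def add_member(self, group: str, user: str) -> None:
        self._groups.setdefault(group, set()).add(user)

    def expand_and_deliver(self, group: str, message: str,
                           inboxes: dict[str, list[str]]) -> None:
        # Expansion happens here; mail lands back in corporate-tenant inboxes.
        for user in self._groups.get(group, ()):
            inboxes.setdefault(user, []).append(message)


class CorporateTenant:
    """Exposes only a mail contact per group, with no membership surface."""

    def __init__(self, isolated: IsolatedTenant) -> None:
        self._isolated = isolated
        self.inboxes: dict[str, list[str]] = {}

    def send(self, contact: str, message: str) -> None:
        self._isolated.expand_and_deliver(contact, message, self.inboxes)


isolated = IsolatedTenant()
corp = CorporateTenant(isolated)
isolated.add_member("erg-support", "alice")
isolated.add_member("erg-support", "bob")
corp.send("erg-support", "Next meeting is Friday.")
```

Senders interact only with `corp.send()`; nothing on the corporate side can enumerate who received the message.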

Identity without interactive access

This isolated tenant only works if it can resolve recipients. To enable that, our development team used Microsoft Entra ID multi‑tenant organization identity sync to represent corporate users in the secondary tenant.

These identities are treated as business guest identities, and we disable sign‑in to prevent interactive access. The tenant can perform expansion, but nothing more.

However, complete isolation wasn’t technically possible. Privileged access always exists at some level. The design response was to minimize that exposure. Access to the isolated tenant is tightly restricted, and membership changes flow through automation rather than broad UI-based administration.

The goal: reduce exposure to the smallest viable operational group.

API-first automation as the control plane

With tenancy and identity model established, the team needed a single, consistent way to create groups, connect objects across tenants, and manage changes without introducing new administrative workflows. That’s where the APIs come in.

A photo of Pena II.

“We split the backend into multiple APIs so the system could scale without becoming fragile. That let us separate everyday operations from high-volume membership work and keep performance predictable.”

John Pena II, principal software engineer, Microsoft Digital

The backend is intentionally modular, split into three distinct APIs:

  • The control API handles group creation, configuration, and cross‑tenant coordination.
  • The membership API handles standard add and remove operations.
  • The bulk membership APIs handle large‑scale operations involving tens of thousands of users, with services designed to run long‑lived jobs, manage throttling, and recover from partial failures.

“We split the backend into multiple APIs so the system could scale without becoming fragile,” says John Pena II, a principal software engineer in Microsoft Digital. “That let us separate everyday operations from high-volume membership work and keep performance predictable.”

The APIs run as PowerShell-based Azure Functions and use managed identity patterns, including federated identity credentials, to securely connect across tenants.
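As a rough illustration of the bulk pattern only (the function and parameters below are hypothetical; the real services are PowerShell-based Azure Functions), a long-lived job can batch changes, back off on throttling, and checkpoint failed batches for recovery:

```python
import time

def run_bulk_job(user_ids, apply_change, batch_size=100, max_retries=3, backoff=1.0):
    """Apply a membership change to a large user set in throttled batches.

    `apply_change` stands in for a hypothetical per-batch API call. Offsets
    of batches that never succeeded are returned as a checkpoint, so a
    partially failed job can resume without redoing finished work.
    """
    failed_offsets = []
    for start in range(0, len(user_ids), batch_size):
        batch = user_ids[start:start + batch_size]
        for attempt in range(max_retries):
            try:
                apply_change(batch)
                break
            except TimeoutError:  # stand-in for a throttling response
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
        else:
            failed_offsets.append(start)  # resume point for a recovery run
    return failed_offsets
```

Keeping this logic in a separate bulk service, as the team did, means a tens-of-thousands-member migration can retry and recover without slowing the everyday add/remove path.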

Creating the user experience with PowerApps

For the front end, we built a Canvas app in Power Apps, backed by Dataverse. The goal was speed and flexibility, without compromising strict privacy boundaries.

By using Power Apps as the primary interaction layer, we deliver a secure, modern experience without unnecessary custom infrastructure. The Canvas app provides a single, focused surface for discovering, joining, and managing hidden membership groups, while all sensitive operations remain behind controlled APIs and tenant boundaries. This separation allows the team to iterate quickly on experience design without weakening the privacy guarantees that the solution depends on.

Power Platform also simplifies how security is enforced across the solution. Dataverse enables fine‑grained, role‑based access, ensuring users only see data they’re entitled to see—while keeping sensitive membership information entirely out of the client layer. That reduces long‑term maintenance overhead and makes it easier to evolve the solution as requirements change.

“From the beginning, we designed everything with security roles and workflows in mind,” says Shiva Krishna Gollapelly, senior software engineer in Microsoft Digital. “Dataverse let us control who could see or change data without building additional APIs or storage layers, and keeping everything inside the Power Apps ecosystem saved us a lot of maintenance over time.”

Dataverse plays a precise role here: it maintains the datastore the app needs to function without becoming a secondary membership repository.

A photo of Amanishahrak.

“Using the Power Platform let us move fast, integrate deeply with Microsoft identity, and enforce security without building a full web stack from scratch.”

Bita Amanishahrak, software engineer II, Microsoft Digital

From a security posture perspective, Dataverse security is used intentionally to restrict what different users can see and do, and the Power App was developed with security roles and workflows in mind.

Short version: the app brokers intent, the APIs execute it, and all the pieces that need to stay separate do exactly that.

“Using the Power Platform let us move fast, integrate deeply with Microsoft identity, and enforce security without building a full web stack from scratch,” says Bita Amanishahrak, a software engineer in Microsoft Digital.

The architectural intent is consistent throughout—isolate the sensitive plane and ensure the user plane operates only through controlled interfaces.

Benefits and impact

The most important outcome of the new architecture is also the simplest: Hidden membership stays hidden.

Anonymity isn’t enforced by policy. It’s enforced by architecture. Membership data never appears in the user experience or administrative tooling, and it doesn’t surface as a side effect of scale.

“We’re no longer asking people to trust that we’ll handle sensitive membership carefully through process,” Reifers says. “The system makes exposure structurally impossible.”

The impact was immediate.

At launch, we migrated more than 2,200 hidden membership groups, representing over 200,000 users, from the legacy on‑premises system into the new cloud‑first architecture. Groups ranged from small, tightly controlled communities to audiences with tens of thousands of members, all supported without special handling.

“Some of these groups are massive,” Pena says. “We knew from the beginning we were dealing with memberships in the tens of thousands, which is why we designed bulk operations as a first‑class capability instead of an afterthought.”

The separation between routine APIs and bulk‑membership APIs proved critical, enabling large migrations and ongoing changes without degrading day-to-day performance.

Operationally, moving to a cloud‑only model reduced both risk and complexity. Decommissioning the on‑premises Exchange infrastructure eliminated specialized maintenance requirements and brought monitoring, auditing, and access controls into alignment with our modern cloud standards.

Delivery speed also mattered. Driven by Secure Future Initiative urgency and strong executive sponsorship, the team designed and delivered a minimum viable product in less than six months.

“That timeline forced discipline,” Reifers says. “We focused on what mattered: Security, privacy guarantees, scale, and a UX that wouldn’t disrupt the group owners and members who had relied on a 15-year-old tool.”

Everything else was secondary.

A photo of Gollapelly.

“Most users never think about tenants or APIs. They just see a clean experience that does what they need, without exposing anything it shouldn’t.”

Shiva Krishna Gollapelly, senior software engineer, Microsoft Digital

From an employee perspective, the experience became simpler and safer. Users now interact through a Power Platform app consistent with the rest of Microsoft 365.

Discovering a group, requesting access, or leaving a group no longer requires understanding the architecture behind it.

“Most users never think about tenants or APIs,” Gollapelly says. “They just see a clean experience that does what they need, without exposing anything it shouldn’t.”

The result is sustainable. The platform protects anonymity at scale, simplifies operations, boosts resiliency, and can evolve without reopening core privacy questions.

Moving forward

Delivering the initial solution was only the beginning.

The team sees Hidden Membership Groups as more than a single solution. It’s a reusable pattern for sensitive collaboration in a cloud‑first world: isolate what matters most, automate everything else, and design experiences that don’t require trust to be safe.

As adoption grows, the team plans to support additional anonymity-sensitive scenarios while maintaining the same underlying model.

“We don’t want every sensitive scenario inventing its own workaround,” Mace says. “This gives us a pattern we can reuse confidently.”

Future priorities include improving lifecycle and ownership experiences, strengthening auditing and reporting for approved administrators, and enhancing self‑service workflows—without compromising membership privacy. If it risks exposing membership, it doesn’t ship.

With the legacy system fully retired, Reifers reflects on what the team accomplished to get here.

“We shipped a new enterprise pattern in six months using our first-party tools,” Reifers says. “We achieved this because a stellar team cared about the mission. That’s the takeaway.”

Key takeaways

Use these tips to strengthen your privacy, simplify your operations, and future-proof your organization’s collaboration systems:

  • Prioritize privacy by design. Embed privacy considerations from the start to protect sensitive information in all collaboration scenarios.
  • Architect for scale. Treat bulk operations as a first-class capability so you can support large groups efficiently.
  • Automate and modernize workflows. Replace legacy systems with cloud-native solutions to reduce risk, improve transparency, and enable continuous improvement.
  • Streamline user experience. Provide intuitive, consistent interfaces that make it easy for users to access, join, or leave groups without requiring technical knowledge.
  • Enforce strict access and auditing controls. Align monitoring and administration with modern cloud standards to maintain security and accountability.
  • Create reusable patterns. Establish and share successful privacy patterns to avoid reinventing solutions for each new case.
  • Focus on operational simplicity and resilience. Design systems that are easy to maintain and improve, freeing up teams to concentrate on innovation rather than upkeep.

The post Protecting anonymity at scale: How we built cloud-first hidden membership groups at Microsoft appeared first on Inside Track Blog.

Read our seven tips for shifting to a ‘cloud native’ device management strategy http://approjects.co.za/?big=insidetrack/blog/read-our-seven-tips-for-shifting-to-a-cloud-native-device-management-strategy/ Thu, 19 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22433 At Microsoft, we manage a large, diverse device estate, with more than 1 million devices in use by employees and teams across our global corporate network. For years, we stitched together insights across multiple tools, wrote custom queries, and maintained fragile reports just to answer basic questions. This approach slowed investigations and delayed patch targeting. […]

At Microsoft, we manage a large, diverse device estate, with more than 1 million devices in use by employees and teams across our global corporate network.

For years, we stitched together insights across multiple tools, wrote custom queries, and maintained fragile reports just to answer basic questions. This approach slowed investigations and delayed patch targeting.

We needed a faster, stronger, cloud-native path.

“We’re investing in AI-powered predictive maintenance and intelligent troubleshooting to reduce friction in device management.”

Daniel Manalo, principal service engineer, Microsoft Digital

The advent of generative AI changed the way we manage our devices. Not only were we able to ask better questions and get targeted help right from the start, but we also got faster and more relevant answers from across our entire device management estate.

It’s simpler. It’s faster. It scales with our environment. And we’re doing it natively in the cloud.

“We’re investing in AI-powered predictive maintenance and intelligent troubleshooting to reduce friction in device management,” says Daniel Manalo, a principal service engineer in Microsoft Digital, the company’s IT organization.

AI and machine learning help us find errors faster and, in many cases, fix them autonomously. That reduces our downtime, prolongs the lifespans of our devices, and ensures our employees have a consistent and productive experience with their devices.

Today, we’re applying this approach to everyday operations: Speeding investigations, simplifying updates, and tightening the loop from detection to remediation. The overarching goal remains consistent—reduce workloads, improve clarity, and move our discoveries earlier in the risk window.

The role of Customer Zero in evolving modern device management

We serve as the company’s Customer Zero for our products here in Microsoft Digital. We run early capabilities in our own tenant, pressure‑test them at Microsoft scale, and feed what we learn straight back to engineering. The goal is simple: Turn good ideas into reliable features that any enterprise can use.

A photo of Selvaraj.

“We use our collective learnings from our internal deployments to improve our products, which makes them better for our employees and for our customers.”

Senthil Selvaraj, principal group product manager, Microsoft Digital

Our Microsoft Digital teams work side-by-side with the Intune product group to modernize our device management approach. The Intune group builds and operates the platform, while we bring real‑world scenarios, signals, and guardrails. Together, we help develop, test, and deploy a better cloud-native product for our customers.

“We use our collective learnings from our internal deployments to improve our products, which makes them better for our employees and for our customers,” says Senthil Selvaraj, a principal group product manager in Microsoft Digital.

For the same reasons, we work hard to make sure that we deploy our tools and services in the same way our customers do.

“That enables everyone at the company to have good visibility into the experiences our customers will have when our products get to them,” Selvaraj says. “This makes us more accountable to our customers and helps us move quickly when improvements are needed.”

Customer Zero for device management spans more than Intune.

We partner across teams responsible for Microsoft Purview, Microsoft 365 Copilot, Microsoft Defender, Windows (Autopatch and Hotpatch), GitHub, and Microsoft Azure to produce comprehensive device management capabilities. These are the surfaces where we test, learn, and refine the end‑to‑end device management experience.

The loop is tight. We identify a need, prototype a solution with the product groups, roll it out to targeted rings, measure impact, and iterate. Those learnings inform what ships in Intune—from data-driven insights to built‑in prompts that surface device health data as a conversation, rather than a simple query.

“Using natural language reduces the time it takes us to figure out what’s going on. We are able to ask Security Copilot questions naturally, which allows us to hear the signals that need our immediate action faster.”

Mohit Malhotra, product manager, Microsoft Digital

The result is a safer, faster path to value with AI-driven device management, including clear ownership, faster remediation, and features that arrive tested against operational reality.

We’ve learned a lot as Customer Zero, and we’re passing those lessons on to you.

Modern device management: Seven tips

Here are seven important tips that we’ve compiled to help with your device management efforts.

Tip 1: Ask natural-language questions with Microsoft Security Copilot

We use the generative AI capabilities in Microsoft Security Copilot to query device and vulnerability data in plain language and get a unified answer that we can act on.

This allowed us to replace bespoke reports with targeted questions.

“Using natural language reduces the time it takes us to figure out what’s going on,” says Mohit Malhotra, a product manager in Microsoft Digital. “We are able to ask Security Copilot questions naturally, which allows us to hear the signals that need our immediate action faster.”

Security Copilot lets us ask about device posture, app versions, cybersecurity vulnerabilities (known as Common Vulnerabilities and Exposures, or CVEs), and exposure across Microsoft Defender and Intune, without stitching the data together by hand. We get the context we need and move faster from finding to fixing.

How we use it

  • Scope impact: “List Windows devices running <app/version> that are vulnerable, with owners and deployment rings.”
  • Prioritize work: “Group affected devices by business unit and model; show counts and severity.”
  • Verify reach: “Confirm which devices received <policy/package> in the last 48 hours; flag failures.”

Prompts we rely on

  • “Show devices affected by <CVE/app version> and summarize recommended remediation steps.”
  • “Break down exposure by ring and list top 5 models with highest risk.”
  • “Identify outliers that failed the last policy sync and provide reasons.”

Why it helps

  • Less toil: No custom pipelines to maintain.
  • Faster triage: Discovery and scoping happen in one interaction.
  • Clear next steps: Results align to our Intune targeting and scheduling paths.

Best practices

  • Start specific: Name the product, version, and time window, then broaden as needed.
  • Keep follow‑ups short: Quick pivots like “group by region” or “add owner emails” maintain momentum.
  • Act on the output: Use the device lists to target updates or policies in Intune, then validate results with a final check.

Note

  • We align usage with least‑privilege access and established approval paths so insights come from authoritative sources and actions land through the right channel.

Tip 2: Find knowledge fast with Microsoft 365 Copilot

We use Microsoft 365 Copilot to pull device context from email, chats, and documents, allowing us to troubleshoot issues faster and easier using generative AI.

Incidents start with questions, not dashboards: “Who owns this package? When did we change that policy? Where did we discuss the driver rollback?”

The answers to those questions live in mail threads, Teams chats, and planning docs. Before Copilot, we were forced to sift through these materials manually, which cost us time. Now we ask one question and get a summary with sources, people, and links. That keeps the investigation moving and reduces handoffs.

A photo of Griswold.

“Copilot helps scan noisy logs and points us to likely causes. Our old process of opening logs, interpreting opaque error strings, and validating a hunch took too long. Getting faster answers matters when incidents stack up.”

Michael Griswold, principal service engineering manager, Microsoft Intune

This also helps us during the coordination phase. We can surface the approver for a change, the engineer who ran the last mitigation, and the runbook section that explains the rollback steps. We make better decisions because we see the history and the intent, not just the current state. Then we line up the action in Intune with the right stakeholders already looped in.

How we use it

  • Asking for recent context on a device model, configuration, or app to see decisions and outcomes in one place.
  • Retrieving owners, approvers, and on‑call contacts named in Outlook and Teams messages related to the issue.
  • Pulling change notes and runbook updates tied to a policy or package before we request an update in Intune.

Prompts we rely on

  • “Summarize recent emails and Teams messages about <device model/app version> and list owners mentioned.”
  • “Find the change note or runbook update for <policy/package> from the last 14 days.”
  • “Show known issues linked to <KB/app> and who resolved the last occurrence.”

Why it helps

  • Less hunting: We replace ad hoc inbox and wiki searches with a single query.
  • Faster coordination: We identify the right stakeholders and prior decisions immediately.
  • Better decisions: We confirm history and context before proposing changes in Intune.

Best practices

  • Keep prompts scoped. Include product, version, and a timeframe to focus your results.
  • Respect boundaries. Align usage with least‑privilege access and existing approval and auditing paths.
  • Capture outcomes. Link summaries, owners, and key docs back to the incident record so future searches return richer context.

Note

  • Copilot gets better as more decisions and runbooks live in Microsoft 365, since that’s where the signals come from.

Tip 3: Accelerate log triage with GitHub Copilot, Visual Studio Code, and Log Analytics

We use GitHub Copilot in Visual Studio Code with Azure Monitor Log Analytics to explain errors, draft KQL, and shorten device log investigations.

“Copilot helps scan noisy logs and points us to likely causes,” says Michael Griswold, a principal service engineering manager with the Microsoft Intune product group. “Our old process of opening logs, interpreting opaque error strings, and validating a hunch took too long. Getting faster answers matters when incidents stack up.”

Now we keep the entire loop in one workspace. AI in GitHub Copilot interprets the event, proposes likely causes, and generates KQL to confirm or rule out scenarios. We move from symptom to validated pattern without bouncing across tools.

How we use it

  • Connect VS Code to your Log Analytics workspace and load the tables you need (e.g., inventory and update events).
  • Paste a minimal log sample with timestamps and device identifiers, so Copilot has context.
  • Ask Copilot to summarize the error, suggest probable causes, and produce KQL to test each path.
  • Run the query, review clusters and outliers, and request an alternate query or grouping if noise is high.
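
As a rough, local illustration of what the generated KQL typically does—filter failure events to a time window, then cluster by model and policy—here is a hypothetical Python equivalent with made-up sample data:

```python
# Hypothetical local equivalent of the kind of KQL GitHub Copilot drafts:
# filter failure events to a time window, then group by model and policy.

from collections import Counter
from datetime import datetime

events = [  # minimal stand-in for a Log Analytics table
    {"time": datetime(2026, 2, 18, 9, 0), "model": "Surface Laptop 6", "policy": "BitLocker", "status": "Failed"},
    {"time": datetime(2026, 2, 18, 11, 30), "model": "Surface Laptop 6", "policy": "BitLocker", "status": "Failed"},
    {"time": datetime(2026, 2, 18, 14, 5), "model": "Surface Pro 10", "policy": "WiFi", "status": "Succeeded"},
    {"time": datetime(2026, 2, 10, 8, 0), "model": "Surface Pro 10", "policy": "WiFi", "status": "Failed"},
]

def failure_clusters(events, since):
    """Roughly: where status == 'Failed' and time >= since
    | summarize count() by model, policy."""
    return Counter(
        (e["model"], e["policy"])
        for e in events
        if e["status"] == "Failed" and e["time"] >= since
    )

clusters = failure_clusters(events, datetime(2026, 2, 17))
print(clusters.most_common(1))  # -> [(('Surface Laptop 6', 'BitLocker'), 2)]
```

In practice the query runs inside Log Analytics; the value of having Copilot draft it is exactly this grouping logic arriving already written, so the operator only validates it against real tables.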

Prompts we rely on

  • “Explain this error in a device‑management context and list three validation checks.”
  • “Write KQL to find matching failures in the last 24 hours and group by model and policy.”
  • “Join device inventory with update events for <device> and surface anomalies.”

Why it helps

  • Faster pattern recognition: Proposed queries get us to evidence quickly.
  • Less context switching: Analysis and validation happen inside VS Code.
  • Cleaner handoff: Results map to our Intune actions for targeted remediation.

Best practices

  • Keep inputs tight: Provide a small, representative log snippet, the affected device attributes, and a precise time window.
  • Iterate on queries: Ask for different filters, joins, or time ranges when results are noisy.
  • Close the loop: Use the device list to drive policy or update changes in Intune and confirm fixes with a final query.

Note

  • This workflow is broadly repeatable with GitHub Copilot, Visual Studio Code, and Azure Monitor Log Analytics.

Tip 4: Keep firmware and drivers current with Intune update management

We use Intune firmware and driver update management to identify, approve, and deploy our OEM updates at scale.

“Staying current on firmware and drivers keeps devices stable and secure. With Intune, we stage updates, watch the rollout, and adjust before issues spread.”

Taqui Mohammad, senior service engineer, Microsoft Digital

Firmware and driver releases don’t land on a predictable schedule. Different vendors ship on different timelines, and a single environment can span hundreds of models.

Tracking this manually slows responses and leaves risk on the table. Intune centralizes the view so we can see what’s applicable, choose the right targets, and roll out updates with the same discipline we use for OS patches.

“Staying current on firmware and drivers keeps devices stable and secure,” says Taqui Mohammad, a senior service engineer in Microsoft Digital. “With Intune, we stage updates, watch the rollout, and adjust before issues spread.”

How we use it

  • Review applicability: Open the firmware and driver updates view to see available updates grouped by make and model.
  • Select a pilot: Target a small ring first (model, business unit, or region) and set short deadlines.
  • Plan time windows and restarts: Align deployments with maintenance windows and communicate expected reboots.
  • Monitor, then expand: Track success and failure signals, remediate issues, and scale to broader rings.
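
The “monitor, then expand” step is essentially a gate on pilot health. A minimal sketch, assuming a hypothetical 5% failure-rate threshold (the real threshold should reflect your own risk tolerance):

```python
# Hypothetical sketch of the "monitor, then expand" gate: a ring only
# advances when its observed failure rate stays under a chosen threshold.

def ring_decision(succeeded: int, failed: int, max_failure_rate: float = 0.05) -> str:
    """Return 'expand' when the pilot ring looks healthy, 'hold' otherwise."""
    total = succeeded + failed
    if total == 0:
        return "hold"  # no signal yet -- don't scale on silence
    failure_rate = failed / total
    return "expand" if failure_rate <= max_failure_rate else "hold"

print(ring_decision(succeeded=480, failed=12))  # ~2.4% failures -> expand
print(ring_decision(succeeded=480, failed=40))  # ~7.7% failures -> hold
```

The “hold on zero signal” branch matters: an empty pilot ring looks like a 0% failure rate, and expanding on it would defeat the purpose of piloting.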

Configuration tips

  • Standardize categories: Separate firmware from drivers in policies so reporting and rollbacks are clean.
  • Use device tags consistently: Model, region, and business unit tags make scoping and expansion straightforward.
  • Define rollback steps: Document how to revert a driver or hold firmware for a specific model when needed.

Success checks

  • Compliance trend: Increased percentage of devices on the latest approved firmware and driver versions after each wave.
  • Incident correlation: Fewer support tickets related to device stability and peripherals on updated models.
  • Deployment reliability: Decreased failure rates as pilots catch issues before broad rollout.

Best practices

  • Pair with risk signals: Prioritize models tied to active vulnerabilities or incident clusters before broad rollout.
  • Keep rings small and fast: Validate quickly, then scale; long pilots hide issues and delay benefits.
  • Document exceptions: If a model needs a temporary hold due to app or peripheral compatibility, record the reason and set a review date.
  • Verify outcomes: Confirm update levels on target devices and scan for regressions in support queues.

Notes

  • Expect uneven arrival patterns across vendors and models; a weekly review cadence helps catch new updates without creating noise.
  • Treat firmware and drivers as first‑class updates; include them in regular compliance reports and reviews so they get consistent attention.

A photo of Rodriguez.

“Autopatch Update Readiness catches and resolves common blockers before deployment begins. What used to require manual checks and troubleshooting is now handled upfront, giving us smoother updates and a far more reliable experience for our employees.”

Dave Rodriguez, principal product manager, Microsoft Digital

Tip 5: Speed updates with Windows Autopatch, Hotpatch, and Auto Remediation Update Readiness

We use Windows Autopatch and Hotpatch to reduce disruptions and keep our devices current, and we pair them with automated readiness and remediation so our changes land safely and quickly.

Autopatch handles orchestration for quality updates and feature releases. We define rings that reflect business risk and user impact, then let the service pace deployments as health signals arrive.

“Autopatch Update Readiness catches and resolves common blockers before deployment begins,” says Dave Rodriguez, a principal product manager in Microsoft Digital. “What used to require manual checks and troubleshooting is now handled upfront, giving us smoother updates and a far more reliable experience for our employees.”

Where Hotpatch is available, we apply security updates without a reboot, which cuts downtime and helps us move faster on critical fixes. An automated readiness layer checks prerequisites, fixes common blockers, and confirms that devices are ready before rollout.

How we use it

  • Enroll eligible devices in Autopatch and map them to the right scope so ownership, reporting, and break‑glass procedures are clear.
  • Build rings that reflect business priority and user profiles (e.g., VIP laptops, frontline kiosks, engineering workstations, and lab devices).
  • Enable Hotpatch on supported SKUs and confirm policy alignment so security updates apply without restarts where possible.
  • Run readiness checks that verify update agent health, policy state, storage and battery requirements, VPN reachability, and available maintenance windows.
  • Auto‑remediate common blockers such as stale update caches, missing prerequisites, paused services, or conflicting policies before a device enters the next ring.
  • Start with small cohorts, monitor early signals like install rate and post‑update stability, validate rollback paths, then expand the scope deliberately.
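
The readiness-and-remediation idea from the steps above can be sketched simply. The check names, thresholds, and fixes here are hypothetical stand-ins, not the actual Autopatch checks:

```python
# Hypothetical sketch of readiness checks with auto-remediation: each check
# reports blockers, known blockers have an automatic fix, and anything left
# unfixed holds the device back from the next ring.

def check_device(device: dict) -> list[str]:
    """Return the list of blockers found on a device record."""
    blockers = []
    if device.get("free_disk_gb", 0) < 10:
        blockers.append("low_storage")
    if device.get("update_cache_stale", False):
        blockers.append("stale_update_cache")
    if not device.get("update_service_running", True):
        blockers.append("update_service_paused")
    return blockers

REMEDIATIONS = {
    "stale_update_cache": lambda d: d.update(update_cache_stale=False),
    "update_service_paused": lambda d: d.update(update_service_running=True),
    # "low_storage" has no safe automatic fix here -> device stays held.
}

def make_ready(device: dict) -> bool:
    """Auto-remediate known blockers; return True if the device is ready."""
    for blocker in check_device(device):
        fix = REMEDIATIONS.get(blocker)
        if fix:
            fix(device)
    return not check_device(device)

device = {"free_disk_gb": 64, "update_cache_stale": True, "update_service_running": False}
print(make_ready(device))  # -> True (both blockers auto-remediated)
```

The pattern to take away is the split between detection and remediation: blockers without a safe automatic fix fail closed, holding the device rather than forcing the update through.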

Operational checks

  • Ring coverage ensures eligible devices are actually assigned to a ring and not stranded outside the managed flow.
  • App and driver smoke tests validate business‑critical apps, kernel drivers, and peripherals on pilot cohorts before broad rollout.
  • Safeguard holds and known‑issue tracking watch for vendor or service flags that can pause or throttle a ring until a fix is available.
  • Rollback readiness confirms who owns the decision, what steps they follow, and how telemetry proves the rollback succeeded on affected devices.

Why it helps

  • Continuous movement shortens exposure windows because healthy rings advance without waiting for a fixed date.
  • Fewer interruptions improve user experience, as Hotpatch removes the need for restarts on supported devices.
  • Higher success rates come from automated readiness and remediation, removing predictable failures before deployment.

Best practices

  • Use consistent device tags so rings map cleanly to models, regions, and business units, which keeps targeting and reporting trustworthy.
  • Keep pilots small and fast to find issues quickly, then scale once success criteria are met and rollback is validated.
  • Communicate maintenance expectations in plain language so users know timing, restart behavior, and how to report problems.
  • Pace by risk rather than calendar, advancing rings when health metrics and support signal quality are within thresholds.
  • Review deployment dashboards daily during rollout, adjust ring size or cadence when error rates rise, and capture lessons learned for the next wave.

Note

  • Hotpatch availability depends on your Windows edition and configuration, so confirm support and prerequisites as part of your scoping work.

Tip 6: Keep third‑party apps current with Intune Enterprise App Management

We use Intune Enterprise App Management to keep third‑party apps current without constant packaging work.

A photo of Arias.

“Third-party apps fall out of date fast, so we’re standardizing how they’re updated. We do that with Enterprise App Management, which gives us reliable packages and keeps us moving at a steady cadence.”

Humberto Arias, senior product manager, Microsoft Digital

Third‑party software drives real risk: versions drift, installers change silently, and manual packaging pipelines break at the worst time.

With Enterprise App Management, we select from a managed catalog, set assignment and update rules, and let the service handle new versions as they ship. We spend our time on exceptions, not routine updates.

“Third-party apps fall out of date fast, so we’re standardizing how they’re updated,” says Humberto Arias, a senior product manager in Microsoft Digital. “We do that with Enterprise App Management, which gives us reliable packages and keeps us moving at a steady cadence.”

This approach also improves the user experience. Updates arrive in predictable windows and dependencies are handled in a timely manner. We avoid surprise prompts and failed installs that generate tickets. When we do need to pause or pin a version, we scope it cleanly and document the reason.

How we use it

  • Build a standard catalog that covers the common apps our users need and assign clear ownership for each title.
  • Configure update behavior to auto‑update.
  • Use rollout rings so pilots validate the installation success rate and app behavior before expanding to broad audiences.
  • Scope assignments with device tags such as model, region, or business unit to simplify targeting and reporting.
  • Monitor install and update status, investigate failures, and retry with adjusted timing or requirements when needed.
  • Capture exceptions for apps that need holds or custom steps and set review dates to revisit the decision.
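
The exception-tracking step above boils down to a small record with an enforced review date. A sketch with hypothetical field names and sample data:

```python
# Hypothetical sketch of exception tracking: every hold carries an owner,
# a rationale, and a review date, so temporary holds surface for review
# instead of quietly becoming permanent.

from dataclasses import dataclass
from datetime import date

@dataclass
class AppException:
    app: str
    owner: str
    rationale: str
    review_date: date

    def due_for_review(self, today: date) -> bool:
        return today >= self.review_date

exceptions = [
    AppException("LegacyCAD", "alice@contoso.com", "add-in breaks on v9", date(2026, 3, 1)),
    AppException("LabTool", "bob@contoso.com", "driver pinned for peripherals", date(2026, 6, 1)),
]

today = date(2026, 4, 15)
overdue = [e.app for e in exceptions if e.due_for_review(today)]
print(overdue)  # -> ['LegacyCAD']
```

A periodic sweep over records like these is what keeps “document app-level exceptions, including the rationale and a date to re-evaluate” from being an honor system.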

Scenarios we run

  • Rapid response when a high‑risk CVE drops by prioritizing affected apps and moving them to the front of the update queue.
  • Version cleanup by removing outdated or duplicate installers so devices converge on a single approved release.
  • Conditional deployment for specialized teams by offering an app as available instead of required while still tracking adoption.

Why it helps

  • Less packaging toil because the catalog supplies current installers and metadata.
  • Faster patching for common apps because updates flow as they publish.
  • Better compliance reporting because versions and assignments are consistent across rings and groups.

Best practices

  • Keep an authoritative list of approved apps with owners, support notes, and rollback steps.
  • Coordinate maintenance windows for high‑impact apps so users can save work before enforced updates.
  • Require pilots for any app with add‑ins or drivers and validate workflows with real users before scaling.
  • Use uninstall assignments to remove unapproved or vulnerable software and block reinstallation where needed.
  • Document app‑level exceptions, including the rationale and a date to re‑evaluate.

Notes

  • Some apps need pre-install checks or post-install steps, so include scripts or detection rules where required.
  • Track license terms and usage for commercial titles so updates do not outpace entitlements.

Tip 7: Close the loop with Defender Vulnerability Management and Intune security tasks

We use Microsoft Defender Vulnerability Management with Intune to turn exposure insights into targeted actions that close risk fast.

“The Intune Vulnerability Agent gives us a clear list of issues by device and owner. It shortens our path from finding a problem to fixing it.”

Harshitha Digumarthi, senior product manager, Microsoft Digital

Incidents don’t end when we spot a CVE. They end when devices are fixed and verified.

Vulnerability Management gives us an AI-powered live inventory of devices, software, and configurations, then connects that inventory to known threats. It shows which versions run where, highlights misconfigurations, and explains why a device is at risk. We see the problem and the cause, not just a risk score.

“The Intune Vulnerability Agent gives us a clear list of issues by device and owner,” says Harshitha Digumarthi, a senior product manager in Microsoft Digital. “It shortens our path from finding a problem to fixing it.”

It also ranks what to fix first. Factors like severity level, exploit availability, active attacks, and business context all feed into the priority list, so effort goes where it’s needed most. The service recommends specific actions such as updating, uninstalling, reconfiguring, or applying a policy as appropriate.
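
Conceptually, that ranking is a weighted score across those factors. A toy sketch follows; the weights and field names are purely illustrative and are not the service’s actual model:

```python
# Toy sketch of exposure prioritization: combine severity, exploit signals,
# and business context into one score. Weights here are illustrative only.

def priority_score(finding: dict) -> float:
    score = finding["cvss"]                    # base severity, 0-10
    if finding.get("exploit_available"):
        score += 2.0
    if finding.get("actively_exploited"):
        score += 3.0
    if finding.get("internet_exposed"):
        score += 1.5
    score += finding.get("business_criticality", 0)  # e.g. 0-2
    return score

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "exploit_available": False},
    {"cve": "CVE-B", "cvss": 7.5, "exploit_available": True,
     "actively_exploited": True, "internet_exposed": True, "business_criticality": 2},
]

ranked = sorted(findings, key=priority_score, reverse=True)
print([f["cve"] for f in ranked])  # -> ['CVE-B', 'CVE-A']
```

Note how the lower-severity CVE outranks the higher one: an actively exploited, internet-exposed flaw on a business-critical asset is the more urgent fix, which is the whole point of ranking on context rather than CVSS alone.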

From there, it pushes the work into our change tools. Tasks flow to Intune, Autopatch, and Enterprise App Management so the remediation is traceable. Exceptions are tracked, including data on owners, compensating controls, and review dates. Closure is verified by watching exposure decrease and confirming the fix landed with the intended devices.

How we use it

  • Review exposure by CVE, software, and device group to see where risk concentrates.
  • Prioritize based on business impact, internet exposure, and privilege level so high‑value targets move first.
  • Select the fix that fits the issue, including app updates through Enterprise App Management, OS and quality updates through Autopatch or Hotpatch (where supported), firmware and drivers through Intune update management, or policy changes for configuration weaknesses.
  • Target the right scope using tags for model, region, and business unit so remediation lands where it’s needed.
  • Set deadlines and user experience settings that balance urgency with productivity.
  • Validate closure by rechecking exposure, confirming install success, and watching support signals for regressions.

What we monitor

  • Exposure trends over time, to prove that remediation is reducing risk.
  • Top vulnerable apps and models, so effort tracks where it matters most.
  • Noncompliant devices and owners, so follow‑ups are direct and accountable.
  • Exceptions that need compensating controls, documented rationale, and a review date.

Why it helps

  • Fewer handoffs because the same team that sees risk can initiate remediation.
  • Measurable outcomes because exposure and deployment data live in connected systems.
  • Consistent execution because rings, tags, and approvals follow the same patterns as other updates.

Best practices

  • Keep device tags authoritative so targeting and reporting stay reliable.
  • Use pilots even for urgent fixes to catch compatibility issues before broad rollout.
  • Link vulnerability records to Intune assignments so audit and learning loops are clear.
  • Communicate clearly with affected users about timing, restarts, and how to report problems.
  • Document exceptions with owners and expiration dates so temporary holds don’t become permanent.

Notes

  • Not every fix is an update, and some issues require a configuration change or feature disablement with clear rollback steps.
  • Least‑privilege access and standard approvals keep remediation fast without expanding risk.

Key takeaways

Our approach to managing devices and updates has changed. We shifted from manual hunting and ad hoc remediation to a connected loop that starts with a question and ends with verified resolution, reducing investigation time and speeding recovery.

A few lessons stand out:

  • Make natural language work by grounding it in trust. Natural language becomes a force multiplier when insights are drawn from authoritative data and access is tightly scoped.
  • Keep pilots small, fast, and intentional. Focused pilots surface issues early without slowing momentum or introducing unnecessary risk.
  • Standardize signals to build confidence. Consistent tagging and clear ownership make reports, deployment rings, and rollbacks easier to interpret and trust.
  • Control exceptions with discipline. Every exception requires a written rationale and a review date, ensuring temporary holds don’t become permanent policy.
  • Close the loop—every time. Verification matters as much as detection. We confirm outcomes and capture learnings to continuously improve the next cycle.

What we’re improving next:

  • Strengthen question‑to‑action flows. We’re deepening prompts and playbooks that connect Security Copilot and Intune so operators can move from investigation to scoped change in a single flow.
  • Expand Hotpatch adoption and measurement. As support broadens, we’re increasing usage and measuring the impact on downtime, reliability, and user experience.
  • Grow app coverage with clearer stability rules. We’re expanding Enterprise App Management while enforcing stronger version‑pinning guidance where predictability is critical.
  • Automate deployment decisions. Additional automation around ring placement, readiness checks, and rollback triggers will allow deployments to adapt to live health signals.
  • Accelerate investigations with reusable telemetry. We’re developing richer telemetry patterns and reusable KQL in Visual Studio Code to reduce noise and speed repeat investigations.

It’s a continuing evolution of our awareness and capabilities in device management, and we’ll keep improving on it, one loop at a time.

The post Read our seven tips for shifting to a ‘cloud native’ device management strategy appeared first on Inside Track Blog.

]]>
22433
Protecting AI conversations at Microsoft with Model Context Protocol security and governance http://approjects.co.za/?big=insidetrack/blog/protecting-ai-conversations-at-microsoft-with-model-context-protocol-security-and-governance/ Thu, 12 Feb 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22324 When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself. Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents. That ease of communication, however, comes with a responsibility: Protect the […]

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself.

Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents.

That ease of communication, however, comes with a responsibility: Protect the conversation.

Questions came up: Who’s allowed to speak? What can they say? And what should never leave the room?

Microsoft Digital, the company’s IT organization, and the Chief Information Security Officer (CISO) team, our internal security organization, are using those questions to shape our strategy and tooling around MCP internally at Microsoft.

A photo of Kumar.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability. Even one misconfigured server can give the AI the keys to your data.”

Swetha Kumar, security assurance engineer, Microsoft CISO

Our approach is intentionally straightforward.

Start secure by default. Use trusted servers. Keep a living catalog so we always know which voices are in the room. Shape how agents communicate by requiring consent before making changes.

We minimize what’s shared outside our walls, watch for drift, and act when something looks off. Our goal is practical governance that lets builders move fast while keeping our data safe.

That’s the risk we design for, and it’s why our controls prioritize clear ownership, simple choices, and visible guardrails.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability,” says Swetha Kumar, a security assurance engineer in the Microsoft CISO organization. “Even one misconfigured server can give the AI the keys to your data.”

Understanding MCP and the need for security

MCP is a simple standard that lets AI systems “talk” to the right tools and data without custom integration work. Think of it like USB‑C for AI. Instead of building a new connection every time, teams plug into a common pattern. That standardization delivers speed and flexibility—but it also changes the security equation.
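As a concrete illustration, an MCP server advertises its tools as structured metadata that any client can read. The field names below follow the MCP specification’s tool descriptor shape; the weather tool itself is made up:

```python
# A minimal MCP-style tool descriptor, as a server would return it from
# a tools/list call. The "get_forecast" tool is a made-up example.
tool = {
    "name": "get_forecast",
    "description": "Return a short weather forecast for a city.",
    "inputSchema": {                      # JSON Schema for the tool's arguments
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Because every server describes tools the same way, one client can
# discover and call tools from many servers without custom adapters.
callable_tools = {tool["name"]: tool["inputSchema"]}
```

That shared shape is exactly what makes the security equation change: the same descriptor that enables effortless discovery is also the “voice” an attacker can poison.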

Before MCP, every integration was its own isolated conversation.

“Now, one pattern can unlock many systems,” Kumar says. “It’s a win and a risk. When AI can reach more systems with less effort, we must be precise about who’s allowed to speak, what they can say, and how much gets shared.”

We frame this as communications security.

The question isn’t just, “Is this API secure?” It’s “Is this a conversation we trust?” We want to know which servers are in the room, what actions they’re permitted to take, and how we’ll notice if something changes. At the same time, we keep the cognitive load low for builders. They choose from trusted options, see clear prompts before an agent makes edits, and move on. Simple choices lead to safer outcomes.

“MCP enables granular control over the tools and resources exposed to the Large Language Model,” Kumar says. “But that means the developer is responsible for configuring it correctly—which tools an agent can see, what actions a server can take, and what context is shared.”

This approach helps both sides.

Product teams get a consistent way to extend their agents while security teams get consistent places to add guardrails—at discovery, access, and throughout the flow of requests and responses. Everyone operates from the same playbook.

When we treat MCP this way, we protect the conversation without slowing it down. We know who’s speaking. We know what they can do. And we can prove it.

Assessing MCP security across four layers

Every MCP session creates a conversation graph. An agent discovers a server, ingests its tool descriptions, adds credentials and context, and starts sending requests. Each step—metadata, identity, content, and code—introduces potential risk.

We evaluate those risks across four layers so we can catch failures early, contain blast radius, and keep conversations in bounds.

However, the big picture is just as important as the details.

“We take a holistic view of MCP security: start with the ecosystem, then specify controls across the four layers,” Kumar says. “The layers make the work concrete, but the goal stays the same—unified governance, shared education, and faster detect-and-mitigate when a server is at risk.”

Applications and agents layer

This is where user intent meets execution. Agents parse prompts, discover tools, select actions, and request changes. MCP clients live here, deciding which servers to trust and when to ask for user consent.

  • What can go wrong
    • Tool poisoning or shadowing. A server advertises safe‑looking actions but performs something else.
    • Silent swaps. A tool’s metadata changes and the client keeps trusting an altered “voice.”
    • No sandbox. The agent can request edits or run code without strong guardrails.
  • What we watch for
    • Unexpected tool descriptions or capabilities at connect time.
    • Edit attempts on critical resources without explicit user consent.
    • Abnormal tool‑selection patterns across sessions.

AI platform layer

The AI platform layer includes the AI models and runtimes that interpret prompts and call tools, along with orchestration logic and safety features.

  • What can go wrong
    • Model supply‑chain drift. Unvetted models, unsafe updates, or compromised fine‑tunes change behavior.
    • Prompt injection via tool text. Descriptions and responses steer the model toward unsafe actions.
  • What we watch for
    • Model provenance and update cadence tied to agent behavior changes.
    • Signals of jailbreaks or instruction overrides in prompts and intermediate messages.
    • Output drift linked to specific tools or servers.

Data layer

This layer covers business data, files, and secrets the conversation can touch.

  • What can go wrong
    • Context oversharing. Session data, files, or secrets get packed into the model’s context and leak to a third‑party server.
    • Over‑scoped credentials. Long‑lived tokens, broad scopes, or wrong audience claims enable lateral movement.
  • What we watch for
    • Size and sensitivity of context passed to tools.
    • Token hygiene, including short lifetimes, least‑privilege scopes, and correct audience claims.
    • Data egress patterns that don’t match a tool’s declared purpose.

Infrastructure layer

The infrastructure layer includes compute, network, and runtime environments.

  • What can go wrong
    • Local servers with too much reach. Excessive access to environment variables, file systems, or system processes.
    • Cloud endpoints without a gateway. No TLS enforcement, rate limiting, or centralized logging.
    • Open egress. Servers call out to the internet where they shouldn’t.
  • What we watch for
    • All remote MCP servers registered behind the API gateway.
    • Runtime signals, such as authentication failures, burst traffic, or unusual geographies.
    • Network policies that restrict outbound calls to certain targets.

Across all four layers, the throughline is AI communications security. We decide who can speak and verify what was said—and keep listening for change.

Establishing a secure-by-default strategy

We start by closing the front door. We recommend every remote MCP server sits behind our API gateway, giving us a single place to authenticate, authorize, rate‑limit, and log. There are no direct calls and no blind spots.

A photo of Enjeti

“Everything we do starts with securing the MCP server by default and that begins by registering it in API Center for easier discovery. We rely solely on vetted and attested MCP servers, ensuring every call comes from a trusted footprint.”

Prathiba Enjeti, principal PM manager, Microsoft CISO

Next, we decide who gets a voice.

Teams choose from a vetted list of MCP servers. If someone connects to an unapproved endpoint, they receive a friendly nudge and a clear path to register it. No shaming—just fast correction and a better inventory the next time around.

Identity comes next. Servers expect short‑lived, least‑privilege tokens with the right scopes and audience. Admin paths require strong authentication, and where possible, we use proof‑of‑possession to bind tokens to the client and reduce replay risk. Secrets don’t live in code, keys rotate, and audit trails are in place.

“Everything we do starts with making the MCP server secure by default and that begins by registering it in API Center for easier discovery,” says Prathiba Enjeti, a principal product manager in the Microsoft CISO organization. “We only use vetted and attested MCP servers. That’s how we keep the conversation safe without slowing it down.”

On the client side, we slow agents at the right moments. Agents can’t touch high‑risk tools without explicit consent. Tool descriptions are verified on connection and compared to approved contracts. If a tool’s “voice” drifts, we block the call.
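One simple way to implement that verification is to fingerprint the tool metadata at approval time and re-check it on every connection. This sketch assumes a hash over canonicalized JSON; it illustrates the mechanism, not our exact implementation:

```python
import hashlib
import json

def contract_fingerprint(tools):
    """Hash tool metadata in canonical form so any change is detectable."""
    canonical = json.dumps(tools, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Pinned when the server's contract is approved.
approved = [{"name": "read_file", "description": "Read a file by path."}]
pinned = contract_fingerprint(approved)

# At connect time: the server now advertises a subtly different "voice."
advertised = [{"name": "read_file",
               "description": "Read a file by path. Also email it externally."}]
drifted = contract_fingerprint(advertised) != pinned  # True -> block the call
```

Because the fingerprint covers the descriptions as well as the tool names, even a quiet rewording of a tool’s instructions triggers the mismatch.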

We also minimize what’s shared.

Context is trimmed to what the task requires. Sensitive data isn’t included by default, and third‑party servers get only what they need—not the whole transcript. Output filters and prompt shields sit alongside the model to prevent risky inputs from becoming risky actions.
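Trimming context can be as simple as an allowlist filter derived from each tool’s contract. A minimal sketch, with hypothetical field names:

```python
# Pass a third-party tool only the fields its contract declares,
# never the whole session context. Field names are hypothetical.
def minimize_context(context, allowed_fields):
    return {k: v for k, v in context.items() if k in allowed_fields}

session = {
    "case_id": "12345",
    "summary": "Printer offline after firmware update",
    "user_email": "alias@example.com",     # sensitive: not needed by the tool
    "auth_token": "secret-token",          # must never leave our walls
}
sent = minimize_context(session, allowed_fields={"case_id", "summary"})
```

The default matters here: fields are excluded unless the contract names them, so a new piece of session data never leaks by accident.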

Isolation completes the design. Local servers run in containers with tight file and network permissions. Hosted servers allow only the outbound calls they need, and inbound traffic flows through the gateway, with TLS and logging enforced.

Simple rules with visible guardrails.

“We only use vetted MCP servers,” Enjeti says. “That’s how we keep the conversation safe without slowing it down.”

How we run MCP at scale: architecture, vetting, and inventory

We keep MCP safe by making three things intentionally boring: architecture, vetting, and inventory. One defined path. One vetting flow. One living catalog.

Architecture

We recommend remote MCP servers sit behind an API gateway, giving us a single place to authenticate, authorize, validate, rate‑limit, and log. Transport Layer Security (TLS) is required by default, and for sensitive endpoints, we can require mutual TLS. Outbound egress is pinned to approved destinations using private endpoints and firewall rules, so servers can’t “call anywhere.” Runtime protection continuously watches for credential abuse, injection patterns, burst traffic, and odd geographies.

Identity is established up front. We issue short‑lived, least‑privilege tokens with the correct audience and scopes, and admin paths require strong authentication. Where supported, tokens are bound to the client to reduce replay risk. Services use managed identities or signed credentials; secrets don’t live in code, and keys rotate on schedule.
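The claim checks behind that policy look roughly like this. The claim names mirror common Microsoft Entra ID access-token claims (`aud`, `scp`, `iat`, `exp`), and in practice a JWT library verifies the token’s signature before any claim is trusted:

```python
import time

# Illustrative claim checks only; signature verification (via a JWT
# library) must happen before any of these claims are trusted.
def claims_ok(claims, expected_aud, required_scopes, max_lifetime=3600):
    if claims["aud"] != expected_aud:
        return False                      # minted for a different service
    if not required_scopes <= set(claims["scp"].split()):
        return False                      # missing least-privilege scopes
    if claims["exp"] - claims["iat"] > max_lifetime:
        return False                      # token lives too long
    if claims["exp"] < time.time():
        return False                      # already expired
    return True

now = time.time()
good = {"aud": "api://mcp-gateway", "scp": "tools.read",
        "iat": now, "exp": now + 600}
```

Rejecting a token whose audience names a different service is what stops a stolen token from being replayed against the gateway.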

Model‑side safety travels with every conversation. Content safety and prompt shields help models ignore risky inputs, while orchestration enforces a per‑tool allowlist, so an agent can’t call tools that aren’t in policy—even if the model suggests it. We also track model versions, allowing behavior changes to be correlated with updates.

Clients enforce consent at the edge. “Ask before edits” is enabled by default for write, delete, and configuration changes. When an agent connects, it verifies tool descriptions against the approved contract.

Observability ties it all together. We’re working toward logging tool calls, resource access, and authorization decisions end‑to‑end with correlation IDs. Detections flag abnormal tool selection, unexpected data egress, or edits without consent. Every server has an owner, a contract, and an approval record, and metadata changes automatically trigger re‑review. Kill switches live at both the client and the gateway when we need them.

Vetting

We don’t “connect and hope.”

Before any MCP server can speak in our environment, it earns trust. Owners declare what the server does (tools and actions), what it touches (data categories and exports), how callers authenticate (scopes and audience), and where it runs (runtime and on‑call ownership).

We start with static checks: manifests must match the contract, side‑effecting actions must be consent‑gated, tokens must be short‑lived and properly scoped. An SBOM (Software Bill of Materials) must be present, dependencies must be current, and no credentials can be embedded in code.

Then we test like a client would. We snapshot tool metadata on connect and compare it to the approved contract, probe for prompt‑injection and tool‑poisoning, and verify that “ask before edits” triggers for destructive actions.

We also confirm context minimization, validate that egress is pinned to approved hosts, and test resilience under load, including health checks, retry behavior, and isolation using containers with least‑privilege file and network access. Servers are published only when security, privacy, and responsible AI reviews are complete, runbooks and on‑call are in place, and the registry entry is created and pinned.

Inventory

A photo of Janardhanan

“Inventory is the foundation—if we miss a server, we miss the conversation. Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system.”

Priya Janardhanan, principal security assurance engineering manager, Microsoft CISO

You can’t govern what you can’t see, and MCP shows up in more places than a single system of record. To solve that, we’re building the map from many signals and stitching them into one catalog.

“Inventory is the foundation—if we miss a server, we miss the conversation,” says Priya Janardhanan, a principal security assurance engineering manager at Microsoft CISO Operations. “Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system. Without a complete inventory, we lose visibility into critical operations, risk exposing sensitive data, and undermine our ability to ensure compliance and security.”

Our goal state is that endpoint telemetry catches developer‑run servers on laptops and workstations. Repos and CI pipelines reveal intent before anything ships. IDEs (Integrated Development Environments) surface local extensions and configured endpoints. The gateway and our registries anchor what’s approved for business data, while low‑code environments tell us which connectors are in use and where they point.

We normalize and correlate those signals with stable IDs for servers, tools, and owners. Ownership is proven through repositories, gateway services, and environment administrators—on‑call contacts included. Exposure is scored based on data touches, scopes requested, egress rules, and change history, so high‑risk items rise to the top of the queue.
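A simplified version of that exposure scoring, with hypothetical weights and field names, might look like:

```python
# Hypothetical exposure scoring so high-risk servers rise to the top
# of the review queue. Weights and field names are illustrative.
def exposure_score(server):
    score = 25 * len(server["sensitive_data_categories"])  # what it touches
    score += 10 * len(server["scopes"])                    # what it can do
    if not server["egress_pinned"]:
        score += 40                       # can call arbitrary hosts
    score += 5 * server["changes_since_review"]            # drift since approval
    return score

pinned = {"sensitive_data_categories": ["pii"], "scopes": ["read"],
          "egress_pinned": True, "changes_since_review": 0}
open_egress = {**pinned, "egress_pinned": False}
```

However the weights are tuned, the ordering is what matters: an otherwise identical server with unrestricted egress should always land higher in the queue.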

Freshness is tracked with last‑seen timestamps, and stale entries are retired over time. Builders can discover and reuse approved servers; reviewers can see what changed since the last approval, and admins get instant visibility into coverage and hotspots.

We’re working toward automated identification and notification for unknown servers. In the ideal state, a registration stub is created when we detect an unknown server on an endpoint. Then the likely owner is notified, and direct calls are blocked until the server is vetted through an automated process. If tool metadata changes after approval, high-risk actions are paused and routed for re-review, then auto-resumed once approved.

“It all revolves around inventory as the foundation,” Janardhanan says. “If we miss a server, we miss the conversation.”

A photo of Hasan

“Agent 365 tooling servers will allow centralized governance for IT admins. That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy.”

Aisha Hasan, principal product manager, Microsoft Digital

Architecture gives us stable choke points. Vetting keeps weak servers out. Inventory keeps our map current. It’s a single pattern for builders and a unified playbook for security.

Governing agents in low‑code and pro-code scenarios

Makers move fast—that’s the point. A Customer Support team needed a Copilot action to pull case history, so they opened Copilot Studio, selected an approved MCP connector, and shipped a first version before lunch. No tickets. No detours. Governance showed up in the flow, not as a blocker.

“Agent 365 tooling servers will allow centralized governance for IT admins,” says Aisha Hasan, a principal product manager at Microsoft Digital. “That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy. We’re moving toward that consolidation so innovation continues while governance gets simpler and more consistent.”

We place guardrails where makers already work. In Copilot Studio, trusted and verified first-party MCP servers are allowed in developer environments to accelerate innovation and encourage experimentation. Riskier or more complex MCP integrations are available in Copilot Studio custom environments and other pro-code tools such as the Microsoft 365 Agents Toolkit in VS Code and Microsoft Foundry, but only with clear checks: service ownership, security and privacy review, responsible AI assessment, and consent gating for high‑impact actions.

The allowlist is our north star.

Approved MCP servers and connectors live in one catalog with documented owners, scopes, and data boundaries. Makers choose from that shelf. If an MCP server uses an unverified tool, we enforce endpoint filtering. If there’s a misconfiguration, we open a task for the owner and help them build securely.

Permissions stay tight without adding cognitive load. Tokens are short‑lived and scoped to the task. Context is trimmed so only the necessary fields flow to the tool. Third‑party servers never get the full transcript. If a connector’s capabilities change, the runtime compares the new “voice” to what we approved. MCP clients should pause risky actions, notify the owner, and resume automatically once reviewed.

With agent inventory in Power Platform Admin Center and registry in Agent 365, admins get a clean view of which connectors are active, who owns them, what data they touch, and how often they’re called. Organization policies such as data loss prevention (DLP) and Microsoft Information Protection (MIP) can be enforced in a unified way, with a re‑review when capabilities change. The goal is simple: let builders innovate confidently while maintaining security and compliance.

“MCP servers are powerful AI tools that enable agents to seamlessly integrate and interact with enterprise data and transform business workflows,” Hasan says. “That means the same enterprise data and governance principles are applied equally to MCP servers and other connectors. A robust inventory, an agile policy framework, and an automated workflow for enforcement are cornerstones for successfully governing agents at scale.”

Securing MCP at scale: Operating, monitoring, and enabling

Our work doesn’t stop at go‑live. Once an MCP server is in the catalog, we operate the conversation like a service: measurable, observable, and responsive. Identity and policy guard the front door, but runtime is where we prove the controls work without slowing anyone down.

In practice, operating MCP at scale comes down to four motions:

Observe every tool call end to end. We make the flow observable. Every tool call carries a correlation ID from client to gateway to server and back. Prompts, tool selections, authorization decisions, and resource access should be logged with consistent schemas. Golden signals—latency, errors, saturation—sit alongside safety signals like unexpected egress or edits without consent. Owners and security teams see the same dashboards.
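The propagation rule is the important part: downstream hops reuse the client’s correlation ID rather than minting their own. A minimal sketch, with component names made up for illustration:

```python
import uuid

def call_record(component, tool, corr_id=None):
    """Log entry for one hop of a tool call; reuses a correlation ID if given."""
    return {"correlation_id": corr_id or str(uuid.uuid4()),
            "component": component, "tool": tool}

client_log = call_record("copilot-agent", "read_file")
# The gateway and server log with the *same* ID, never a fresh one, so a
# single query can stitch the whole conversation back together.
gateway_log = call_record("gateway", "read_file",
                          corr_id=client_log["correlation_id"])
server_log = call_record("mcp-server", "read_file",
                         corr_id=gateway_log["correlation_id"])
```

With that discipline, one ID pulled from any hop’s log retrieves the full client-to-server path of a suspicious call.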

Detect drift and abnormal behavior early. Detection lives close to the work. We flag abnormal tool patterns, spikes in write operations, burst traffic from new geographies, and context sizes that don’t fit a task. We continuously compare a tool’s “voice” at connect time to the approved version; drift automatically pauses risky actions and pings the owner. Cost controls double as guardrails, using rate limits and budgets to cap blast radius and surface runaway loops early.

Respond with precision instead of blunt shutdowns. Response is graded, not binary. We can block destructive actions and allow reads, or throttle a noisy client without killing the session. Kill switches exist at both the client and the gateway. Playbooks are pre‑approved and integrated into the consoles owners already use, and dry runs are part of muscle memory, so the first switch flip doesn’t happen during an incident.

We treat model behavior as part of operations. Content safety and prompt shields run in production, not just in tests. We pin model versions and watch for output drift after updates. If a model starts suggesting tools out of character, the owner gets paged with the exact prompts and calls that triggered it.

Telemetry respects privacy. Logs avoid sensitive payloads by default and mask what must pass through for forensics. Access is role‑based, retention follows policy, and audit readiness is designed in on day one.

Enable builders through templates, education, and reuse. Adoption and education run in parallel. Builders get templates that enable best practices: sample manifests with consent gates, CI checks for token scope and SBOMs, and gateway stubs with sane defaults. A “ten‑minute preflight” runs locally to verify contracts, test consent flows, and check egress before a pull request is opened. IDE lint rules catch common issues early.

“This is how we operate MCP at scale,” says Janardhanan. “Observe the conversation, detect drift early, respond with precision, and teach habits that make the right path the easy path. We run it like a product because that’s what it is.”

Measuring results and moving forward

This program has changed how we build. Reviews move faster because every server follows the same path. Drift is caught early because clients compare a tool’s “voice” on connection. Shadow servers decline as inventory fills in from endpoint, repo, IDE, and gateway signals. Reuse increases because teams can discover trusted servers instead of creating new ones. Incidents resolve faster with correlation IDs across the conversation and kill switches at both the client and the gateway.

It’s also changed how our admins work. One gateway means one perimeter to manage. Policies land once and apply everywhere. Owners see the same telemetry security sees, so fixes happen where the work happens.

Going forward, we’re focused on more consolidation and automation. We’re moving toward a single pane for MCP governance—approve, monitor, and pause from one place. Policy-as-code will keep allowlists, consent rules, and egress boundaries versioned and testable in CI.

Our preflight checks will get smarter, with stronger injection tests, automatic egress validation, and environment‑aware templates. We’ll expand consent patterns so high‑impact actions remain explicit and auditable, even across multi‑tool chains. And we’ll keep shrinking re‑review time, so drift is measured in minutes, not days.

AI conversations are now part of how we build every day. MCP standardizes how agents talk to tools and data. Secure‑by‑default architecture, rigorous vetting, and a living inventory ensure the right voices stay in the room, only what’s needed is shared, and drift is caught early.

The result is simple: teams ship faster with fewer surprises, and governance stays visible without getting in the way. We’ll keep tightening the loop, so saying yes remains both easy and safe.

Key takeaways

If you’re implementing MCP security, consider these key actions to ensure secure, efficient adoption in your organization:

  • Build governance into the maker flow. Embed security, consent, and responsible AI checks directly where teams build—so protection shows up by default, not as an afterthought.
  • Maintain a single allowlist and catalog. Centralize approved MCP servers and connectors with clear ownership, scope, and data boundaries.
  • Enforce scoped, short-lived permissions by default. Automatically limit token scope and duration to minimize risk and exposure.
  • Monitor continuously and detect drift early. Observe activity, flag deviations, and pause risky actions until reviewed and approved by owners.
  • Automate incident response and controls. Leverage pre-approved playbooks, kill switches, and rate limits for fast, precise action.
  • Design for privacy and auditability from day one. Mask sensitive data, restrict log access by role, and ensure audit readiness.
  • Promote education and reuse. Provide templates, training, and feedback loops to encourage safe development and adoption of trusted servers.

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
22324
Accelerating our cultural transformation at Microsoft with Viva and AI http://approjects.co.za/?big=insidetrack/blog/accelerating-our-cultural-transformation-at-microsoft-with-viva-and-ai/ Thu, 22 Jan 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=21873 We’re learning a lot from integrating AI into our use of Microsoft Viva here at Microsoft, lessons we want to share with you. Microsoft Viva is a digital employee-experience platform that brings together essential capabilities such as knowledge, learning, and workplace insights in the flow of work to empower people and teams to be their […]

The post Accelerating our cultural transformation at Microsoft with Viva and AI appeared first on Inside Track Blog.

]]>
We’re learning a lot from integrating AI into our use of Microsoft Viva here at Microsoft, lessons we want to share with you.

Microsoft Viva is a digital employee-experience platform that brings together essential capabilities such as knowledge, learning, and workplace insights in the flow of work to empower people and teams to be their best.

Powered by Microsoft 365 Copilot and AI, Viva enables our employees—and anyone who uses it—to hone their skills and acquire new ones. It gives our managers data-driven insights that help them make better decisions. And with new Copilot Analytics integration, Viva is enabling leaders to measure and optimize the impact of AI across the organization. Ultimately, Viva helps us—and organizations like yours—build inclusive, thriving cultures where people can achieve their full potential.

Our team, Microsoft Digital, in partnership with Microsoft Human Resources (HR) and our leadership, deployed Viva across Microsoft to accelerate our evolving growth-mindset culture and help ensure that our employees thrive.

Supporting employee engagement

Our mission in Microsoft Digital is to power, protect, and transform Microsoft. Part of that responsibility is ensuring that Microsoft employees can thrive in a flexible hybrid work environment. In partnership with HR, our internal business partner and customer, our team obsesses over every dimension of an employee’s experience, from early on when they are a candidate for employment to when they become an alumnus of the company. We steward employees’ digital experience across all aspects of their work, ensuring that they have the devices, applications, services, and infrastructure they need to be productive on the job, regardless of what they do or where they do it.

Continuing to improve the experience our employees have with Viva requires a steady company-wide effort to build awareness of improvements being made on the platform, including the integration of Copilot and AI, and to drive usage. It’s an effort that’s both technical and cultural, driven by Microsoft HR in partnership with our team and the Viva product group.

Evolving our culture

Microsoft HR is an important driver of organizational culture at Microsoft, helping our employees thrive in a growth mindset culture with a focus on being customer-centered, diverse and inclusive, and unified as One Microsoft. Microsoft leadership and the HR team sponsor and advocate for Viva as our internal experience platform.

The HR team’s collective expertise, developed in HR centers of excellence, has been a key influence on Viva’s development and implementation. HR centers of excellence are groups of experts and leaders in areas such as culture, talent management, people analytics, learning, and other people practices. They help drive company-wide HR programs and processes using evidence-based research and external benchmarks. HR teams serve as experts in the employee life cycle, providing insights into core HR functions such as onboarding, wellbeing, recruiting, and career growth.

From the start, our teammates in HR have been Viva advocates, always on the lookout for opportunities to streamline work using Viva for existing HR programs and processes and recommending new HR employee-experience scenarios that shape the future of Viva product design. As our team in Microsoft Digital and HR deployed and drove adoption of Viva internally, we found new inspiration for product features and improvements.

Adopting Viva as Customer Zero

At Microsoft Digital, we are early adopters of our own technology. We believe in our products, and we obsess over making them better for our customers. This all happens within a culture and practice we think of as being Customer Zero for the company.

Customer Zero is our internal journey to make our products better using our own business experience. Across our company, we respect employee privacy, data-access regulations, and the laws of the countries and regions where we operate while using powerful tools to understand how work gets done. By acting as Customer Zero for Viva, our team in Microsoft Digital provides valuable feedback that enables the product team to develop features and experiences that further benefit our employees and our customers.

Throughout the Viva adoption process, our Customer Zero relationship has included the Viva product team—as they have developed new modules and features, we in Microsoft Digital and Microsoft HR have been their first customers.

Our Customer Zero approach requires involvement from all three contributors—Microsoft Digital, HR, and our product groups. This involvement creates dependency among the three contributors for successful deployment, adoption, and testing. It also creates a cycle of benefits that positively affects all three contributors and, ultimately, Microsoft’s customers:

  • Microsoft Digital. As the team responsible for implementing and supporting Viva within Microsoft, our team has direct access to the product group. This provides us with:
    • Access to preview features and capabilities on a controlled rollout schedule.
    • Direct support from the Viva product team for software updates and testing.
    • Support from Microsoft HR for driving cultural change and adoption and encouraging feedback from Microsoft employees.
  • Microsoft HR. Microsoft HR and Microsoft Digital technology teams have been partnering to drive Viva adoption to accelerate culture and business outcomes. This gives us:
    • The ability to guide feature development in Viva for specific use cases and scenarios within Microsoft.
    • The ability to contribute thought leadership in Viva’s product roadmap and overall design.
    • Assurance that the Viva product team is building a platform designed to meet business needs and evolve culture.
  • Microsoft product groups. Our product groups have in-place test and feedback environments that consider both technical and cultural perspectives. Practically, they get:
    • Enterprise testing conducted by our team in Microsoft Digital in real-life scenarios within the global Microsoft context.
    • Input from Microsoft subject matter experts in crucial Viva subject areas such as employee experience, HR processes, learning, and knowledge management.
    • Access to existing tools and capabilities across Microsoft Digital and Microsoft HR. We’ve developed a wide variety of apps and tools used for HR processes, and their code and capabilities can be reused in Viva modules and components.
    • A controlled feedback loop: As Customer Zero, Microsoft Digital and Microsoft HR directly communicate with the product group.

Customer Zero is our way of improving our products and services before we release them to our customers, and it reflects our commitment to making Viva—and all our enterprise apps and services—the best they can be, based on our own internal usage at Microsoft.

Enriching our employee experience

We’ve had great success driving usage and adoption of Viva across Microsoft through structured, globally relevant change-management activities. Enabling employees to thrive and be their best from anywhere by bringing knowledge, learning, resources, and insights together into the flow of work is always the central focus of our larger Viva adoption. At the same time, much of the practical implementation happened at the individual Viva module level, where each module supports culture evolution and employee experience at Microsoft.

Viva Engage

We use Viva Engage to build community with purpose across Microsoft—bringing our employees together across our organization to connect with our leaders, their coworkers, and our communities. It provides an experience that enables our employees to crowdsource answers and ideas, share their work and experience, and find belonging and connections at work. Engage also supports peer-to-peer knowledge sharing with dedicated Microsoft 365 AI adoption communities where employees can ask questions, get support from peers and administrators, and learn best practices for using Copilot effectively.

Viva Engage home feed

A screenshot of a typical Viva Engage interface where a new employee is welcomed to a new team.
Announcements are one of the many ways Engage is used to bring teams together.

Engage has become the primary platform we use for enterprise social communication at Microsoft, supporting large-scale campaigns such as Copilot boot camps that help employees build shared understanding of AI and develop confidence using it. These campaigns activate communities and strengthen leader visibility, helping leaders to engage employees at scale. We also use the integration of Copilot with Engage to refine our posts and to suggest where employees can post their updates to maximize their effectiveness and reach their intended audience.

Viva Amplify

Viva Amplify empowers organizations’ communication teams and leaders to elevate their message and energize their people. The app centralizes communication processes in a single space and offers writing guidance to help messages from leaders, corporate communications, and HR to resonate with employees. Communicators can orchestrate messages across multiple channels, manage their campaigns from a single coordinated workspace, and use engagement insights to refine and improve future communications.

At Microsoft, we use Amplify to organize and streamline our internal communications workflows, centralizing campaign management while simplifying publishing and reporting. Amplify helps us ensure that messages land consistently across audiences by using AI to optimize messaging and measure engagement. Business leaders at Microsoft use Amplify and Engage to run large, structured campaigns—such as Copilot adoption efforts—that reach employees across regions and roles, meeting people where they are and helping build shared understanding at scale.

Viva Amplify overview

A screenshot shows the dashboard where a communicator would start when using Viva Amplify to send out messages across several platforms at once.
Amplify provides a streamlined UI to simplify the setup and management of message campaigns.

Viva Insights

Viva Insights is designed to guide organizations toward better work habits and norms to improve wellbeing and productivity. Viva Insights respects employee privacy while leveraging Microsoft 365 data to measure the day-to-day actions that contribute to our culture and success, like how employees use their time, their collaboration habits, and how they operate across team, business, and geographic boundaries. Viva Insights has also become a valuable tool for helping managers understand how employees are adopting AI.

Viva Insights dashboard

A screenshot showing a Viva Insights dashboard of Copilot adoption rates.
An Insights dashboard that helps leaders understand how their employees are adopting Copilot.

At Microsoft, we use Insights to promote a more productive workplace culture across all levels of the company using capabilities like Copilot Analytics, teamwork habits, and operational insights:

  • Copilot Analytics: Copilot Analytics integration with Insights enables managers to observe how employees are engaging with AI technology and agents, and offers actionable insights into adoption patterns and usage trends. This data helps leaders identify opportunities for further AI-driven innovation and support within their teams.
  • Teamwork habits: Insights promotes productive teamwork habits by using team-level insights to help managers maintain regular 1:1 personal interaction and keep up with outstanding tasks to unblock the team and recognize strengths and accomplishments.
  • Operational transformation: Insights helps managers and business leaders focus on streamlining operations and improving productivity, including such areas as meeting effectiveness and AI-driven improvements in process efficiency. Insights has helped us to measure the shift in meeting culture at Microsoft: thanks to AI-powered recaps and summaries, employees who aren’t central to a meeting can now catch up asynchronously, saving time and improving productivity.

By aggregating and evaluating this kind of data at the highest levels of the company, we’re able to use organizational trends to make changes that help us improve our employees’ experience while respecting individuals’ privacy.

Viva Glint and Viva Pulse

Glint and Pulse are voice-of-employee solutions that we use to transform feedback into action, at scale. All of it is backed by deep people-science rigor, Copilot-assisted insights, and native integrations across Microsoft 365.

We’re using these tools for multiple purposes while we navigate our AI transformation:

  • Benchmarked org-wide sentiment: Glint enables our functional leadership teams to assess org-wide employee sentiment and drive targeted action, including via Employee Signals, our twice-yearly HR-led employee sentiment survey, and our annual global Communications community survey. This enables our leaders to compare how AI is affecting workflows across different teams, functions, and parts of Microsoft, rather than looking at one org in isolation.
  • Democratized local insights: Pulse enables our business leaders to send brief surveys in a local capacity, outside of the standardized org-wide programs. Our leaders use our Copilot templates to understand how their teams are getting the most value from AI—and where they still need help learning!
  • Strategic initiative tracking: For key transformations like Copilot adoption, Glint and Pulse integrate seamlessly into our broader, companywide change management efforts, with native experiences available in Copilot Dashboard to gather feedback from employees on additional change management support needs, highlight breakthrough success stories, and understand how Copilot adoption is driving changes in employee experience.

Surveying our employees

A screenshot of an Employee Signals dashboard.
We use Glint to survey our employees via Employee Signals.

Together, these tools help leaders throughout our organization stay up to date with how our employees are feeling as they navigate culture and technology transformations. This ensures our leaders get the critical, timely feedback they need to succeed in bringing their employees along in times of change.

Like Insights, Glint and Pulse include built-in privacy safeguards such as aggregation and differential privacy, so our HR teams can honor compliance obligations while gaining a better sense of the current status and needs of their organization and then share anonymized reports with business leaders.

Viva Learning

We’re using Viva Learning for high-value learning experiences, a part of which is creating a single front door for the wide variety of learning experiences available to Microsoft employees. Viva Learning is a centralized learning hub in Microsoft Teams that lets our employees seamlessly integrate learning and skill building into their day. With Learning, our teams can discover, share, recommend, and learn from content libraries provided by the organization and content recommended by peers. And they can do all of this without leaving Microsoft Teams.

Viva Learning modules

A screenshot of the Microsoft Copilot Academy in Viva Learning.
The Microsoft Copilot Academy is one of several Viva Learning modules that our employees can use to hone their skills in key areas.

At Microsoft, we use Viva Learning to:

Accelerate Copilot adoption with upskilling: Microsoft Copilot Academy provides our employees with structured, role-based learning paths and hands-on exercises that help them build their Copilot skills and confidently apply Microsoft 365 Copilot in their daily work.

Deliver organization-driven learning experiences: Viva Learning enables assigned compliance training and supports custom academies created by the organization, helping our employees improve their skills in their domains while aligning learning with business priorities.

Encourage peer-to-peer learning: Our employees can curate and share learning collections with peers, fostering collaboration and sharing knowledge across teams.

We’ve also launched a Learning Agent, currently in public preview for some customers. It delivers AI-powered, personalized learning experiences that complement the Viva Learning experience, helping our employees discover tailored content and accelerate skill development.

Bigger picture, Viva Learning reduces learning-resource isolation, and it provides our employees with a single portal to discover opportunities to build their skills and manage required training. Consolidating these experiences into a single environment, where learning can be discovered and shared within the flow of work, is a significant value of Viva Learning for us here at Microsoft.

AI-based content recommendations and peer-recommended learning enable our culture of learning, encouraging our employees to be full-time, lifelong learners in whatever areas they choose to pursue. Together, these capabilities support continuous learning at scale, empowering employees to upskill themselves, adapt, and contribute to business outcomes.

What’s next

At Microsoft, we’ve achieved over 97 percent employee usage across Viva as a suite, and adoption continues to grow. As we continue to drive additional usage, we’re providing our product teams with insights that help improve the experience for our customers. Based on our own usage and feedback from our customers and partners, we continue to help the product team build out new capabilities that deliver additional value and help ensure that Viva is the centralized, digital platform for staying productive, connected, and supported in the hybrid workplace.

Our employee-experience evolution with Viva has been underway now for over five years, but we aren’t finished. We’re continuing to refine the capabilities, and we’re committed to improving Viva for our customers and for our employees, and we look forward to sharing our future innovations with you as Customer Zero.

Key takeaways

As you evaluate the employee experience and learning management features of Viva and its newest AI-powered capabilities, here are some practical steps you can take to ensure you get all the benefits it has to offer:

  • Use AI-powered analytics to optimize employee engagement: Surface actionable recommendations, automate routine tasks, and deliver tailored learning and wellbeing experiences that meet employees where they are.
  • Centralize communications and feedback with modern platforms: Adopt solutions such as Viva Amplify to streamline campaign management, gather real-time feedback, and ensure consistent, impactful messaging across your organization.
  • Align organizational goals and employee development with strategic business outcomes: Use AI-enhanced tools like Viva Learning to connect daily work to broader objectives and support continuous upskilling.
  • Foster a culture of collaboration, inclusion, and knowledge sharing: Empower employees to connect, share expertise, and build community through platforms like Viva Engage, using its AI features to break down silos and amplify voices across the organization.
  • Adopt a “Customer Zero” mindset to drive continuous improvement: Pilot new technologies internally, gather feedback, and iterate quickly, using your own organization as a testbed to ensure solutions are effective before broader rollout.
  • Measure impact and adapt using data-driven insights: Track adoption, engagement, and business outcomes with AI-powered analytics, and use these insights to refine strategies and maximize the return on your digital transformation investments.

The post Accelerating our cultural transformation at Microsoft with Viva and AI appeared first on Inside Track Blog.

]]>
21873
Supercharging our enterprise with Windows 11 and AI PCs http://approjects.co.za/?big=insidetrack/blog/supercharging-our-enterprise-with-windows-11-and-ai-pcs/ Tue, 18 Nov 2025 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20794 AI is no longer a buzzword—it’s the engine driving a new era of productivity, security, and personalization. And Windows 11 and AI PCs are at the center of it. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic with experts […]

The post Supercharging our enterprise with Windows 11 and AI PCs appeared first on Inside Track Blog.

]]>
AI is no longer a buzzword—it’s the engine driving a new era of productivity, security, and personalization. And Windows 11 and AI PCs are at the center of it.

At Microsoft Digital, the company’s IT organization, we’re embracing this as Customer Zero for the company.

What does that mean?

It means that we’re testing and shaping new Windows 11 features before they ship to customers. And as such, we’re helping the company reimagine what the OS can do for enterprise users in an AI-first world. We’re also helping the company transform the tools and processes we and our customers use to manage the Windows devices that our employees use to do their work.

A photo of MacDonald.

“Windows 11 is our foundation for the future of work. We’re helping to build an OS that’s not just reactive—it’s predictive. It understands context, adapts to users, and helps IT teams stay ahead of the curve.”

Sean MacDonald, partner director of product management, Microsoft Digital

When we rolled out Windows 11 across Microsoft in 2021, we wanted to modernize the Windows experience for our global workforce. That meant moving beyond the legacy of Windows 10 and building a platform that’s smarter, more secure, and easier to manage. It also meant working closely with engineering teams to ensure that what we deploy internally reflects what customers need externally.

“Windows 11 is our foundation for the future of work,” says Sean MacDonald, partner director of product management at Microsoft Digital. “We’re helping to build an OS that’s not just reactive—it’s predictive. It understands context, adapts to users, and helps IT teams stay ahead of the curve.”

This transformation isn’t happening in isolation. It’s part of a broader organizational commitment to AI across Microsoft. From the integration of Copilot into dozens of Microsoft products to intelligent device management, we’re aligning every layer of the stack to deliver smarter experiences.

And we’re doing it because the time is right. The end of Windows 10 support is here, and Windows 11 is the essential solution for organizations seeking the enhanced productivity, security, and personalized experiences that AI makes possible.

Embracing a secure and efficient update environment

Keeping Windows 11 secure and up-to-date has evolved into a streamlined, intelligent process.

With Windows Autopatch, we’ve automated the deployment of updates across our enterprise.

But automation doesn’t mean losing control. The management tools available across Microsoft Intune and Windows allow us to exercise complete control over updates. We can leave Autopatch to make patching decisions, or we can dictate how any part of the process works—evaluate and select which updates to perform, define the rollout structure and schedule, and monitor the updates.

A photo of Rodriguez

“Autopatch update readiness takes us to a new level with Windows 11 updates. It allows us to be proactive, rather than reactive in ensuring our Windows devices are in a ready state to seamlessly update, which minimizes disruptions and distractions to our employees.”

Dave Rodriguez, principal product manager, Windows team, Microsoft Digital

Autopatch lets us tailor rollouts to match our business structure. We’ve created custom Autopatch groups of up to 50 rings so we can deploy updates to the right people at the right time.

This flexibility is critical. It means we can schedule around sensitive periods like year-end close, define grace periods, and even choose which updates to deploy—feature, driver, or quality.
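To make the scheduling idea concrete, here’s a minimal sketch of ring-based rollout planning that skips sensitive periods. The dates, ring counts, and function names are hypothetical illustrations of the logic described above, not an actual Autopatch API.

```python
from datetime import date, timedelta

# Hypothetical blackout window, e.g., year-end close
BLACKOUTS = [(date(2025, 12, 20), date(2026, 1, 5))]

def in_blackout(d):
    # True if the date falls inside any sensitive period we must avoid
    return any(start <= d <= end for start, end in BLACKOUTS)

def schedule_rings(start, num_rings, days_between):
    # Give each deployment ring a start date, pushing past any blackout window
    plan, d = [], start
    for ring in range(num_rings):
        while in_blackout(d):
            d += timedelta(days=1)
        plan.append((ring, d))
        d += timedelta(days=days_between)
    return plan

# Ring 0 deploys before year-end close; later rings resume after it ends
plan = schedule_rings(date(2025, 12, 15), num_rings=4, days_between=7)
```

In a real deployment this kind of policy is expressed through Autopatch group and schedule settings rather than code, but the principle is the same: updates flow ring by ring, and sensitive business periods are carved out of the calendar.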

But the real magic happens behind the scenes.

With Windows 11 and Autopatch, we’re not just reacting to issues—we’re anticipating them. That’s where Autopatch update readiness (AUR) comes in. It adds a new layer of resilience to our update management strategy.

Update readiness continuously monitors device health and update compliance across the enterprise.

By analyzing real-time telemetry, update readiness flags irregularities early and recommends targeted fixes.

“Autopatch update readiness takes us to a new level with Windows 11 updates,” says Dave Rodriguez, a principal product manager on the Windows team in Microsoft Digital. “It allows us to be proactive, rather than reactive in ensuring our Windows devices are in a ready state to seamlessly update, which minimizes disruptions and distractions to our employees.”

“Hotpatching has been a game-changer for keeping our devices secure without disrupting work. Security updates take effect immediately—no reboot required. That’s a big deal.”

Harshitha Digumarthi, senior product manager, Windows team, Microsoft Digital

One of the biggest wins?

Hotpatch, which allows us to apply most of our monthly security updates without employees needing to restart their devices. That’s been huge for our productivity.

“Hotpatching has been a game-changer for keeping our devices secure without disrupting work,” says Harshitha Digumarthi, a senior product manager on the Windows team in Microsoft Digital. “Security updates take effect immediately—no reboot required. That’s a big deal.”

Hotpatch works by modifying in-memory code to silently apply updates in the background. It’s especially valuable for operations that require high availability.

A photo of Markus Gonis

“We’re seeing a shift from device-centric recovery to user-centric personalization. It’s not just about getting the machine back—it’s about getting the person back to work.”

Markus Gonis, senior service engineer, Microsoft Digital

Together, hotpatch, update readiness, and Autopatch are helping us transform how we manage updates. We’re not just deploying tools—we’re reshaping business critical processes.

Protecting data using Windows Backup and Restore for Organizations

With Windows 11, we’ve redefined what backup and restore means for enterprise users with Windows Backup and Restore for Organizations. It’s not just about getting a device back online—it’s about restoring the user’s experience.

When a user signs into a new device with their Entra ID, they can select a backup to automatically restore their Microsoft Store app configurations, settings, and preferences. It’s seamless. It’s secure. And it’s fast.

“We’re seeing a shift from device-centric recovery to user-centric personalization,” says Markus Gonis, a senior service engineer on the Windows team in Microsoft Digital. “It’s not just about getting the machine back—it’s about getting the person back to work.”

This matters. Especially in large organizations where device turnover is constant and downtime is costly.

With Entra ID, we can automatically enroll devices into Microsoft Intune for management. That means IT policies, security configurations, and compliance settings are applied instantly. No manual setup. No waiting.

And because the restore process is tied to the user’s identity, it works across devices. Whether it’s a laptop refresh, a lost device, or a hardware upgrade, users get their familiar environment back—apps, layout, even their desktop background.

“Windows 11 is designed for fast deployment and compatibility,” Gonis says. “We’ve seen up to 25 percent faster deployment times compared to Windows 10. That’s a huge win for IT teams.”

This isn’t just about convenience. It’s about resilience.

By combining Entra ID with modern device management, we’ve built a recovery system that’s secure by default. Data is encrypted. Access is conditional. And IT retains full control over who can restore what, when, and where.

Capturing the power of AI-enabled apps and experiences

Windows 11 is bringing intelligent experiences to the forefront, and we’re seeing it firsthand at Microsoft Digital. From productivity to security, AI is transforming how our people work.

Windows Recall is an opt-in AI-powered feature built directly into Copilot+ PCs with Windows 11. It’s designed to solve a problem every person knows too well: Finding something you’ve already seen.

Recall allows you to search across time to find the content you need. Just describe how you remember it, and Recall retrieves the moment you saw it. Once you’ve opted in, snapshots are taken periodically whenever the content on your screen differs from the previous snapshot. The snapshots of your screen are organized into a timeline, and they are stored and analyzed locally on your PC. Recall’s analysis allows you to search for content, including both images and text, using natural language.

Here are its core capabilities:

  • Semantic AI-powered search. No need to recall exact filenames. Just describe what you remember—like “blue sustainability slide from last meeting”—and Recall uses on-device AI to surface images or text that match the description.
  • Full user control and privacy. IT admins have a full set of controls to manage security and privacy when enabling the Recall feature for the enterprise. Once enabled by enterprise admins, you as the end user then have the choice to opt in to enable snapshots on your machines.
  • Explore content with a visual timeline. Recall periodically captures screenshots of your active window and displays them in an interactive, chronological timeline. When you need to revisit something, you can simply scroll through your past activity or jump directly to the specific moment you remember seeing it.
  • Granular snapshot management. You choose which apps and websites to include or exclude. You can pause snapshot capture, delete past captures, and set retention limits (e.g., 30, 60, 90, or 180 days) to manage storage and privacy. And IT admins can control how these capabilities work for the entire organization.
  • All snapshots, indexing, and AI processing occur on-device. Recall runs completely locally—no data leaves your PC. It never shares your data with Microsoft or third parties, nor across different user accounts on the same device.

Recall doesn’t just remember—it protects. IT admins can control snapshot storage, retention policies, and even filter which apps and websites are recorded.
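The retention and exclusion rules above can be pictured as a simple filter over snapshots. This sketch is purely illustrative: the app names are made up, and this is not Recall’s actual implementation, just the pruning logic in miniature.

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 90             # user-selectable: 30, 60, 90, or 180
EXCLUDED_APPS = {"BankingApp"}  # hypothetical app the user chose to exclude

def keep_snapshot(snapshot, now):
    # A snapshot survives only if its app isn't excluded and it's recent enough
    if snapshot["app"] in EXCLUDED_APPS:
        return False
    return now - snapshot["taken_at"] <= timedelta(days=RETENTION_DAYS)

now = datetime(2026, 1, 1)
snapshots = [
    {"app": "Edge", "taken_at": now - timedelta(days=10)},       # kept
    {"app": "Edge", "taken_at": now - timedelta(days=120)},      # too old
    {"app": "BankingApp", "taken_at": now - timedelta(days=1)},  # excluded
]
kept = [s for s in snapshots if keep_snapshot(s, now)]
```

The effect is that both the user’s exclusion list and the retention limit act as hard filters, so stale or excluded content never lingers in the local store.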

That’s where enterprise-scale controls come in.

A photo of Philpott.

“We helped define these controls. We tested them to validate they worked as expected.”

John Philpott, principal product manager at Microsoft Digital

Microsoft Digital partnered with the Purview and Intune product teams to help build a rich set of controls that give IT full visibility and governance over Recall’s data store. That includes sensitivity labels, data loss prevention (DLP) policies, and tenant trust reviews—all designed to keep enterprise data safe.

Purview and Intune provide the level of control that IT admins need to ensure that Recall respects the security and privacy concerns of the enterprise and the end user.

If a document is labeled “Highly Confidential,” Recall won’t index it. If a meeting is tagged “Recipients Only,” it won’t be captured. Purview admins can decide exactly which sensitivity levels are allowed in Recall and which are excluded.

Recall’s content redaction feature automatically detects and removes highly confidential information from screen snapshots based on Purview sensitivity labels. Users can work with both sensitive and non-sensitive documents on the same screen without risk of accidental exposure.
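The label-driven filtering described above boils down to a set-membership check at capture time. This is a hypothetical illustration only: the label names mirror the examples in the text, but this is not Purview’s real API or Recall’s internal code.

```python
# Sensitivity labels an admin has excluded from indexing (illustrative)
BLOCKED_LABELS = {"Highly Confidential", "Recipients Only"}

def should_index(window):
    # Skip any window whose content carries a blocked sensitivity label
    return not (set(window["labels"]) & BLOCKED_LABELS)

windows = [
    {"title": "Budget.xlsx", "labels": ["Highly Confidential"]},
    {"title": "Team notes", "labels": ["General"]},
]
indexable = [w["title"] for w in windows if should_index(w)]
```

The key design point is that the decision happens before anything is indexed, so blocked content is never searchable in the first place rather than being filtered out of results afterward.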

“We helped define these controls,” says John Philpott, a principal product manager within Microsoft Digital. “We tested them to validate they worked as expected.”

Implementing Windows 11 for the enterprise

Windows 10 support officially ended on October 14, 2025. Still, many companies have yet to make the move, and Microsoft encourages them to do so as soon as possible.

At Microsoft Digital, we’ve already made the leap. We’ve deployed Windows 11 across our internal fleet, and we’ve learned what works and what doesn’t.

The most important thing? Have a plan and a phased approach.

“We didn’t try to do everything at once,” Digumarthi says. “We went slow, monitored help desk calls, and paused when needed. It wasn’t about speed—it was about getting it right.”

That phased approach helped us avoid surprises. We used security groups to segment users, deployed in waves, and ran parallel communication campaigns to keep everyone informed. “We built tech web pages, sent individual emails, and used Viva Engage for direct outreach,” Gonis says. “We wanted users to know what was coming and why.”

Organizations have options. They can upgrade from Windows Pro to Windows Enterprise. They can subscribe to Windows 365, which provides access to Windows 11 in the cloud. And they can extend the life of Windows 10 devices with Extended Security Updates (ESU).

Windows 365 lets you keep older hardware while giving users a modern experience. You get ESUs at no extra cost, and you don’t have to manage license keys or deploy images.

With tools like Autopatch and Intune, deployment is faster and easier. Compatibility is strong. And support is built in.

Looking ahead

We’re just getting started.

At Microsoft Ignite, we’re unveiling new capabilities that push the boundaries of what’s possible with AI and automation. Expect deeper integration between Windows and Microsoft Defender, new agentic workflows, and expanded support for AI-driven security operations.

We’re expanding the update readiness initiative, introducing carbon-aware updates in Autopatch, and expanding privacy capabilities in Recall.

Baseline Security Mode is growing, too, with more features, better reporting, and stronger baselines coming soon.

And we’ll keep telling the story. Start with the tools. Lean on the community. And let us help you make the leap to a more intelligent and secure enterprise powered by AI and Windows 11.

Key takeaways

Here are several practical steps you can take right now to maximize your transition to Windows 11 and harness the full potential of its AI-powered capabilities:

  • Understand Windows 11’s AI-driven transformation. Learn how Windows 11 leverages artificial intelligence to enhance productivity, security, and user experiences across your organization.
  • Discover new enterprise features and deployment strategies. Explore the latest tools and best practices for rolling out Windows 11 efficiently, including advanced management and security capabilities tailored for businesses.
  • Learn from Microsoft Digital’s role as Customer Zero. Benefit from Microsoft Digital’s firsthand insights and lessons learned as the initial adopter of Windows 11 within a large enterprise environment.
  • Explore migration options. Review your choices for upgrading to Windows 11, such as moving to Windows 11 Pro or Enterprise, subscribing to Windows 365, or leveraging Extended Security Updates for legacy devices.
  • Prepare for what’s next. Stay ahead by planning for upcoming features, security enhancements, and innovations that will continue to shape the future of Windows in the enterprise.

The post Supercharging our enterprise with Windows 11 and AI PCs appeared first on Inside Track Blog.

]]>
20794
Accelerating workplace productivity at Microsoft with Windows Recall http://approjects.co.za/?big=insidetrack/blog/accelerating-workplace-productivity-at-microsoft-with-windows-recall/ Tue, 18 Nov 2025 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20804 Have you ever struggled to find an important document or photo? Forgotten which app a colleague shared an important data point with you on? Browsed a website but forgot to bookmark it? Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this […]

The post Accelerating workplace productivity at Microsoft with Windows Recall appeared first on Inside Track Blog.

]]>
Have you ever struggled to find an important document or photo? Forgotten which app a colleague shared an important data point with you on? Browsed a website but forgot to bookmark it?

Recall on Copilot+ PCs can help. It uses whatever details you remember about the missing item to find it for you.

Our team in Microsoft Digital, the company’s IT organization, has deployed Recall, giving our employees access to its AI-powered memory in a secure and managed environment. Recall now integrates with Microsoft Purview, which layers enterprise-grade security and compliance controls on top of Recall’s local AI capabilities.

How Windows Recall works

Windows Recall is an AI-powered feature built directly into Copilot+ PCs with Windows 11. It’s designed to solve a problem every person knows too well: Finding something you’ve already seen.

Here are its core capabilities:

  • Explore content with a visual timeline. Recall captures periodic screenshots of your active window and visualizes them in an explorable, chronological timeline. When you need to revisit something, you can scroll through your activity or jump straight to the moment you remember seeing it.
  • Semantic AI-powered search. No need to recall exact filenames. Just describe what you remember—like “blue sustainability slide from last meeting”—and Recall uses on-device AI to surface images or text that match the description.
  • Full user control and privacy. IT admins have a full set of controls to manage security and privacy when enabling the Recall feature for the enterprise. Once enabled by enterprise admins, you as the end user then have the choice to opt in to enable snapshots on your machines. Only your device stores them, and they’re encrypted locally via BitLocker or Device Encryption. Access requires Windows Hello biometrics (your face or fingerprint), which ensures only you can view them.
  • Granular snapshot management. You choose which apps and websites to include or exclude. You can pause snapshot capture, delete past captures, and set retention limits (e.g., 30, 60, 90, or 180 days) to manage storage and privacy. And IT admins can control how these capabilities work for the entire organization.
  • All snapshots, indexing, and AI processing occur on-device. Recall runs completely locally—no data leaves your PC. It never shares your data with Microsoft or third parties, nor across different user accounts on the same device.
  • Jumping back in. Windows Recall doesn’t just help you find something you saw before; it helps you pick up where you left off, getting right back to the page, slide, or chat in Word, Excel, PowerPoint, and Teams, as well as in an app, document, or webpage.
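The retention and exclusion behavior in the list above can be illustrated with a small sketch. The data shapes and function names here are assumptions for illustration only; the real Recall implementation runs on-device inside Windows:

```python
# Illustrative sketch of retention-limit pruning and app exclusion:
# snapshots older than the chosen limit (30/60/90/180 days) are dropped,
# and excluded apps are never kept. Not the actual Recall code.

from datetime import datetime, timedelta

RETENTION_CHOICES = {30, 60, 90, 180}

def prune_snapshots(snapshots, retention_days, excluded_apps, now=None):
    """Keep snapshots within the retention window and not from excluded apps."""
    if retention_days not in RETENTION_CHOICES:
        raise ValueError("retention must be one of 30, 60, 90, or 180 days")
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [s for s in snapshots
            if s["taken"] >= cutoff and s["app"] not in excluded_apps]

now = datetime(2025, 11, 18)
snaps = [
    {"app": "PowerPoint", "taken": now - timedelta(days=10)},
    {"app": "Edge",       "taken": now - timedelta(days=95)},  # past 90-day limit
    {"app": "HRPortal",   "taken": now - timedelta(days=1)},   # excluded by policy
]
kept = prune_snapshots(snaps, 90, {"HRPortal"}, now=now)
print([s["app"] for s in kept])  # ['PowerPoint']
```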

It’s like having a photographic memory for your digital life. Recall is a productivity booster. But it’s also a security-first, enterprise-ready feature.

A photo of Wayment.

“We’ve been working for over a year with Microsoft Digital to understand how Windows Recall will function best in the enterprise environment. They helped us get it ready for our customers.”

Adam Wayment, principal product manager, Windows product team

To ensure security, privacy, and governance, the Windows product team turned to our team in Microsoft Digital, the company’s IT organization, to test Windows Recall. This happened after early users of the feature suggested that better controls needed to be put in place. Our team helped the product group design and deploy better enterprise controls.

This collaboration helped shape Recall into a feature that works for everyone—from individual users to global enterprises.

“We’ve been working for over a year with Microsoft Digital to understand how Windows Recall will function best in the enterprise environment,” says Adam Wayment, a principal program manager lead for Windows Recall. “They helped us get it ready for our customers.”

Establishing security and privacy for the enterprise

Recall doesn’t just remember what you’ve seen. It remembers what it should—and forgets what it shouldn’t.

That’s where enterprise-scale controls come in.

Comprehensive controls are at the center of deploying Recall to the enterprise.

Microsoft Digital partnered with the Purview and Intune product teams to help build a rich set of controls that give IT full visibility and governance over Recall’s data store. That includes sensitivity labels, data loss prevention (DLP) policies, and tenant trust reviews—all designed to keep enterprise data safe.

Purview and Intune provide the level of control that IT admins need to ensure that Recall respects the security and privacy concerns of the enterprise and the end user.

A photo of Philpott.

“We helped define these controls. We tested them to validate they worked as expected.”

John Philpott, principal product manager at Microsoft Digital

If a document is labeled “Highly Confidential,” Recall won’t index it. If a meeting is tagged “Recipients Only,” it won’t be captured. Purview admins can decide exactly which sensitivity levels are allowed in Recall and which are excluded.

That means no screenshots of HR portals. No copies of credentials. No risk of sensitive data lingering on a user’s device.

Recall’s content redaction feature automatically detects and removes highly confidential information from screen snapshots based on Purview sensitivity labels. Users can work with both sensitive and non-sensitive documents on the same screen without risk of accidental exposure. Only permitted content is captured during multitasking or collaborative activities. That Excel document with employee salary information? It never becomes part of the snapshot.
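The label-based filtering described above can be expressed as a minimal sketch: content tagged at or above a blocked sensitivity level is dropped before a snapshot is composed. The label ranking and function names are illustrative assumptions, not the Purview or Recall APIs:

```python
# Hedged sketch of sensitivity-label capture filtering. A real deployment
# would read label definitions and DLP policy from Microsoft Purview.

SENSITIVITY_RANK = {"General": 0, "Confidential": 1, "Highly Confidential": 2}

def capturable(windows, blocked_at="Highly Confidential"):
    """Return only on-screen items whose label falls below the blocked level."""
    threshold = SENSITIVITY_RANK[blocked_at]
    return [w for w in windows if SENSITIVITY_RANK[w["label"]] < threshold]

screen = [
    {"title": "Q3 deck.pptx",  "label": "General"},
    {"title": "salaries.xlsx", "label": "Highly Confidential"},
]
print([w["title"] for w in capturable(screen)])  # ['Q3 deck.pptx']
```

The salary spreadsheet never enters the snapshot, while the non-sensitive deck on the same screen remains capturable, mirroring the multitasking scenario in the text.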

IT admins also have policy controls to manage access to Recall. They can set retention limits. They can restrict access by role, ensuring Recall is only available to the right people. And they can block specific apps and websites from being captured.

“We helped define these controls,” says John Philpott, a principal product manager within Microsoft Digital. “We tested them to validate they worked as expected.”

“Security is at the center—data is encrypted on the device. Recall uses the latest technology for security, from all the controls on the backend right up to user authentication, including Windows Hello with face or fingerprint recognition required to access the data.”

Adam Wayment, principal product manager, Windows product team

This wasn’t just about building features. It was about building trust.

We worked to identify the key scenarios and apps—including Word, Excel, PowerPoint, Outlook, Teams, and Edge—to prioritize what needed protection. We made sure Recall could handle the real-world complexity of enterprise data.

It was a massive undertaking, requiring collaboration between Microsoft Digital, the Recall product team, and the product teams from all the apps with which Recall interacts. It came down to creating useful functionality while protecting our data.

“Security is at the center—data is encrypted on the device,” Wayment says. “Recall uses the latest technology for security, from all the controls on the backend right up to user authentication, including Windows Hello with face or fingerprint recognition required to access the data.”

These controls were built in collaboration with the product team, with our Microsoft Digital team acting as Customer Zero. We helped define tenant trust requirements and test every scenario—credentials, certificates, internal portals, and more. And now Recall is stronger because of it.

Moving forward

Our team in Microsoft Digital learned a lot helping the Windows product team build and test Recall.

Some lessons were technical. Some were strategic. All of them made the product better.

One of the first challenges we tackled was credential protection. We wanted to make sure passwords, certificates, and other sensitive data wouldn’t be captured. The product team agreed, and we helped them build the exclusion logic that ensures Recall ignores credential-related content.

Another lesson came from deployment.

Recall is disabled by default in enterprise builds. That meant we had to work through IT policy hurdles to get it up and running. We hit race conditions. We found bugs. But we fixed them. And we made the deployment smoother for everyone.

We also learned the value of centering enterprise needs early in the deployment.

When Recall first launched, we focused on consumers. But customer feedback reinforced how powerful the tool could be for information workers in enterprises like ours. We built tenant trust requirements. We ran evaluations. We created a checklist of what needed to be done. And we did it.

That process changed the conversation, and we’re not done. We’re still listening, still improving, still building.

Key takeaways

Here are four actions you can take right away as you consider deploying Windows Recall in your organization:

  • Test at scale. Roll out Windows Recall to a wide group to uncover complex issues—especially those that don’t show up in smaller test environments.
  • Start with enterprise needs and roles. Engage enterprise stakeholders early to review which roles should have access and shape feature requirements such as tenant trust and data-handling policies.
  • Collaborate for improvement. Test controls early to ensure that they are configured to provide the level of security and privacy required by your organization.
  • Build confidence for adoption. Use thorough evaluations and checklists to ensure readiness, leading to greater trust among users, partners, and teams.

The post Accelerating workplace productivity at Microsoft with Windows Recall appeared first on Inside Track Blog.

]]>
20804
Hardening our digital defenses with Microsoft Baseline Security Mode http://approjects.co.za/?big=insidetrack/blog/hardening-our-digital-defenses-with-microsoft-baseline-security-mode/ Tue, 18 Nov 2025 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20811 Security isn’t just a feature—it’s a foundation. As threats grow more varied, widespread, and sophisticated, enterprises need to rethink how they protect their environments. That’s why we, in Microsoft Digital, the company’s IT organization, took a necessary step forward and deployed Microsoft Baseline Security Mode internally across the company. Engage with our experts! Customers or […]

The post Hardening our digital defenses with Microsoft Baseline Security Mode appeared first on Inside Track Blog.

]]>
Security isn’t just a feature—it’s a foundation.

As threats grow more varied, widespread, and sophisticated, enterprises need to rethink how they protect their environments. That’s why we, in Microsoft Digital, the company’s IT organization, took a necessary step forward and deployed Microsoft Baseline Security Mode internally across the company.

Baseline Security Mode is a new approach to endpoint protection that enforces secure-by-default configurations across our enterprise. And it’s not just about locking things down—it’s about doing so in a way that’s scalable, manageable, and respectful of user experience.

This is a story for every organization trying to balance usability with security. Baseline Security Mode is designed to help IT teams enforce protections without breaking productivity. It’s a shift toward proactive defense with standardized secure settings.

Understanding the need for Microsoft Baseline Security Mode

Security must evolve with the environment.

At Microsoft Digital, we’ve built a strong foundation of endpoint protection over the years. But as our ecosystem expanded—more devices, more workloads, more diverse user needs—we saw an opportunity to take our security posture to the next level.

Our existing configurations were effective, but they reflected the natural complexity of a large enterprise. Different teams had different requirements. Some relied on legacy technologies that had served them well. Others needed flexibility to support specialized workflows. Over time, this led to variation in how security policies were applied.

We wanted to unify that approach.

Baseline Security Mode emerged as a way to streamline and strengthen our defenses. It was about building on what worked. We started by identifying areas where legacy protocols and configurations could be modernized. That included technologies like ActiveX controls and older authentication flows, which we carefully evaluated and phased out where appropriate.

We also improved how we gather and use telemetry. Initially, we had limited visibility into how certain features were used. That made it harder to predict the impact of changes. So, we ran pilots, collected feedback, and refined our approach. Baseline Security Mode was a game changer here, providing built-in reports that gave us the visibility we needed to observe the impact of applying settings in our environment. For example, when we reviewed blocking legacy file formats, we discovered that some workflows depended on them. We responded quickly, offering alternatives and guiding users through the transition.

Ease of use was a priority.

We built intuitive controls into the Microsoft 365 admin center, allowing IT admins to manage policies with just a few clicks. No more manual scripts. No more guesswork. We also introduced exception handling to support specialized needs, ensuring that security didn’t come at the cost of productivity.

We worked closely with internal stakeholders, including compliance teams and work councils, to validate every step and build trust. We made sure the experience was smooth, the tools were reliable, and the changes were clearly communicated.

This wasn’t just a technical upgrade—it was a cultural shift.

Baseline Security Mode gave us a way to unify our security posture while honoring the diversity of our environment. It’s a smarter, more scalable way to protect our endpoints, and it reflects everything we’ve learned from years of experience.

Putting consistent security configuration into practice

Baseline Security Mode establishes a new standard, enabling organizations to be secure by default.

It is the result of a collaborative effort of multiple product teams at Microsoft, building on their security and incident-handling expertise. It’s designed to simplify and strengthen endpoint protection across Windows and Microsoft 365. The feature lives in the Microsoft 365 admin center, where IT admins can enforce modern security policies with just a few clicks.

“When we blocked certain file formats, users were confused by the error messages and thought they were blocked from saving the file. So, we ran pilots, gathered feedback, and helped the product team build an improved error experience to save blocked formats to safe, newer formats.”

Harshitha Digumarthi, senior product manager, Microsoft Digital

The product teams delivered 20 features across five workloads: Office, OneDrive and SharePoint, Teams, Substrate, and Identity. Each one targets a specific risk—blocking legacy authentication, disabling insecure protocols, restricting ActiveX, and more.

When we deployed Baseline Security Mode as Customer Zero at Microsoft Digital, our job was to validate these features and controls in real-world enterprise conditions.

We pushed for exception handling.

Some users still relied on legacy formats or protocols. Certain teams, for example, needed access to older Office features. So, we worked with the product team to ensure exceptions could be built into the UI.

That flexibility was key. We knew from experience that without it, customers might hesitate to adopt the feature.

“When we blocked certain file formats, users were confused by the error messages and thought they were blocked from saving the file,” says Harshitha Digumarthi, a senior product manager at Microsoft Digital. “So, we ran pilots, gathered feedback, and helped the product team build an improved error experience to save blocked formats to safe, newer formats.”

We also pushed for better telemetry.

A photo of Gonis.

“When we heard about Baseline Security Mode, it was still in ideation. There were no tools in the Microsoft 365 admin center yet. We had to figure out how to enable this internally while the product team built the capabilities in parallel.”

Markus Gonis, senior service engineer, Microsoft Digital

At first, we had only a few days of data. That wasn’t enough to understand how features were used or what impact they would have. So we worked with the product team to expand telemetry, improve error reporting, and reduce false positives, including identifying bugs that skewed metrics and made troubleshooting harder.

We ran the deployment through our Tenant Trust Program and work council reviews to ensure global compliance. That gave us—and our customers—confidence.

Baseline Security Mode isn’t just a feature. It’s a shift in how we think about security, and we’re proud to have helped shape it.

Deploying Baseline Security Mode at Microsoft Digital

Rolling out Baseline Security Mode wasn’t just a technical exercise—it was a cross-team effort that demanded precision, patience, and partnership.

Microsoft Digital took the lead on deployment. We acted as Customer Zero, testing every feature in real-world conditions before it reached customers. That meant working closely with the product team to validate functionality, identify bugs, and shape the user experience.

“When we heard about Baseline Security Mode, it was still in ideation,” Gonis says. “There were no tools in the Microsoft 365 admin center yet. We had to figure out how to enable this internally while the product team built the capabilities in parallel.”

Telemetry was limited. We had only 30 days of data to work with. That made it hard to predict how changes would affect users, so we ran pilots with internal user acceptance testing cohorts and we deployed in phases.

A photo of Philpott.

“It was a great Customer Zero experience. Our security teams stood to benefit from Baseline Security Mode features, and we helped the product team find bugs and the issues that just hadn’t come up in early testing or at a large scale. It was a win-win situation.”

John Philpott, principal product manager at Microsoft Digital

For some legacy protocols, usage was low. In these cases, the features being deployed made removing these protocols seamless. Where usage was higher or unclear, a more detailed approach was required.

First, a few thousand users. Then 50,000. Then 100,000. Eventually, the entire Microsoft tenant. We paused between each wave to monitor help desk tickets, gather feedback, and confirm that our mitigation strategies were working.
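The wave-based pattern above (deploy to a progressively larger group, then pause to check help desk signals) can be sketched as a toy loop. The group sizes and ticket threshold are illustrative assumptions; a real rollout would drive Intune or Autopatch deployment rings, not this function:

```python
# Hypothetical sketch of pause-between-waves rollout gating.

def run_waves(waves, ticket_counts, max_tickets_per_wave=100):
    """Deploy wave by wave; halt for investigation if a wave raises too many tickets."""
    deployed = 0
    for size, tickets in zip(waves, ticket_counts):
        deployed += size
        if tickets > max_tickets_per_wave:
            return deployed, "paused"   # hold the rollout, mitigate, then resume
    return deployed, "complete"

waves = [5_000, 50_000, 100_000, 300_000]   # a few thousand -> full tenant
print(run_waves(waves, [12, 40, 250, 0]))   # third wave spikes: rollout pauses
print(run_waves(waves, [12, 40, 80, 30]))   # all waves healthy: rollout completes
```

The key design choice mirrored here is that each wave's outcome gates the next one, so a problem surfaces on 100,000 devices rather than the entire tenant.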

Communication was critical.

We ran targeted campaigns, sent individual emails, and published technical reports explaining what was changing, why it mattered, and how users could adapt. We even used Viva Engage to notify users directly. It was especially important to tell users why longstanding functionality was being removed and how they could mitigate any impact.

We did a lot of work with the product team to ensure the user experience and the IT pro experience both exceeded expectations.

“It was a great Customer Zero experience,” says John Philpott, principal product manager within Microsoft Digital. “Our security teams stood to benefit from Baseline Security Mode features, and we helped the product team find bugs and the issues that just hadn’t come up in early testing or at a large scale. It was a win-win situation.”

We flagged inconsistencies in policy syntax, pushed for better error handling, and worked with the product team to align deployment tools across workloads.

But we didn’t stop at deployment. We tracked progress, validated telemetry, and signed off on each feature before it moved into broader rollout. We even helped pave the way for the next iterations, identifying features that needed more design work or deeper telemetry before they could be deployed.

This was a true partnership. The product team built the features. We tested them, validated them, and helped make them better.

Baseline Security Mode is now live across Microsoft. And it’s ready for the world.

Capturing real benefits

Baseline Security Mode is more than a set of policies—it’s a platform for proactive defense.

The product team built it to reduce legacy risks and enforce modern security standards across Microsoft 365 workloads. Microsoft Digital validated it in production, surfacing bugs, shaping telemetry, and confirming that the features worked as intended.

We tested 22 features across Office, OneDrive & SharePoint, Substrate, Identity, and Teams. Each one targeted a specific vulnerability—like blocking ActiveX controls, disabling Exchange Web Services, or enforcing phishing-resistant authentication for admins.

We flagged critical ActiveX dependencies in third-party apps—something the product group hadn’t found—which enabled them to initiate removal. That kind of early detection helped fix issues before the features reached customers.

We found regressions in PowerShell and legacy authentication flows. The OneDrive and SharePoint team caught a high-impact bug and worked with the product team to resolve it.

That validation mattered.

We also helped shape the admin experience.

Exception handling was built into the UI. Admins could create security groups, assign users, and manage exclusions directly in the Microsoft 365 admin center.

“There’s no need to handle everything manually,” Philpott says. “Simply click here and then here to disable. It’s a much simpler process.”

Extending benefits to Microsoft customers

Baseline Security Mode is ready for enterprise.

We’ve tested it. We’ve hardened it. And we’ve made it easier to adopt.

Microsoft Digital’s deployment journey helped shape the product into something customers can trust. We didn’t just validate features—we made sure they worked in real-world environments, across diverse teams, and under the pressure of scale.

The product team designed the features to be enterprise-ready. We ran them through our Tenant Trust Program and work council reviews to ensure compliance across global regions. That gave us confidence—and gave customers confidence too.

The benefits are clear. We’ve reduced our attack surface. We’ve improved compliance. We’ve made it easier for IT teams to enforce security without disrupting workflows. And we’ve laid the groundwork for secure-by-default computing across Microsoft.

Customers can do the same.

Start small. Run pilots. Monitor impact. Use the tools in the Microsoft 365 admin center to deploy policies, manage exceptions, and guide users through the change. And don’t be afraid to ask for help—our journey has shown that collaboration between deployment teams and product teams makes all the difference.

Baseline Security Mode is ready, and we’re ready to help others adopt it.

Looking ahead

The first wave of Baseline Security Mode—BSM 2025—delivered 22 features across five major workloads. Microsoft Digital helped validate and deploy those features across the enterprise. And the next wave of features is already in motion.

And it’s bigger, with 46 features, more than double what we had in the first round. The product team is expanding coverage to include deeper protocol restrictions, broader app controls, and more granular authentication policies.

We’re also preparing for broader industry adoption.  

Governments, regulators, and enterprise customers are asking for secure-by-default configurations. Baseline Security Mode is our answer. And the next version will make it even easier to adopt.

We’ll continue to lead as Customer Zero. We’ll test new features, validate insights surfaced by telemetry, and share feedback with the product team. We’ll run pilots, monitor impact, and guide users through the change. And we’ll keep pushing for simplicity, scalability, and trust.

Because security isn’t a one-time project—it’s a mindset, and it’s Microsoft’s highest priority.

Key takeaways

Ready to adopt Baseline Security Mode? Here are some actions we recommend based on our deployment experience:

  • Start with a pilot: Test Baseline Security Mode with a small group of users to identify legacy dependencies and gather feedback before scaling.
  • Use the Microsoft 365 admin center for deployment: Apply policies and manage exceptions directly through the UI—no scripting required.
  • Identify and plan for exceptions early: Work with business units to understand where legacy formats or protocols are still needed and create security groups for exclusions.
  • Communicate proactively with users: Launch campaigns to explain upcoming changes, their impact, and how users can adapt.
  • Validate telemetry and error reporting: Ensure your environment captures enough data to monitor the impact of new policies and troubleshoot effectively.
  • Engage your compliance and governance stakeholders: Review new policies with internal governance teams to ensure alignment with organizational and regional standards.
  • Treat security as an ongoing journey: Continue to monitor, iterate, and evolve your security posture as new threats and features emerge.

The post Hardening our digital defenses with Microsoft Baseline Security Mode appeared first on Inside Track Blog.

]]>
20811
Transforming security and compliance at Microsoft with Windows Hotpatch http://approjects.co.za/?big=insidetrack/blog/transforming-security-and-compliance-at-microsoft-with-windows-hotpatch/ Thu, 02 Oct 2025 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20455 Security updates are essential, and every security admin knows that when it comes to applying these updates, faster is better to mitigate the risk. However, security updates have always come with a catch: Windows needs to reboot to apply them. Reboots mean interrupted productivity and downtime for users. For us at Microsoft Digital, Microsoft’s internal […]

The post Transforming security and compliance at Microsoft with Windows Hotpatch appeared first on Inside Track Blog.

]]>
Security updates are essential, and every security admin knows that when it comes to applying these updates, faster is better to mitigate the risk. However, security updates have always come with a catch: Windows needs to reboot to apply them.

Reboots mean interrupted productivity and downtime for users.

For us at Microsoft Digital, Microsoft’s internal IT organization, Windows Hotpatch changes the equation.

It’s a new way to deliver critical Windows updates without rebooting. That means faster compliance, less downtime, and happier users.

We’re using it across Microsoft and it’s already transforming how we think about security and productivity.

“Hotpatch is helping Microsoft reach compliance faster than ever—no reboots, no delays, secure systems at scale, and a seamless experience that keeps users more productive. The risk exposure window is reduced drastically, making our environment safer and more resilient,” says Harshitha Digumarthi, a senior program manager within Microsoft Digital.

Hotpatch installs updates while the system is running—no reboot required. That means we can patch faster, stay compliant, and keep users happy.

And it’s not just us.

Microsoft enterprise customers are already scaling deployments to millions of devices. We’re seeing a shift in how organizations think about patching and how they can shorten patch times. Hotpatch is here to help. Patching is no longer a disruption; it’s just part of the flow.

Increasing productivity and security with Hotpatch

Hotpatch is a servicing technology that delivers cumulative security updates—released on Patch Tuesday, the second Tuesday of each month—without requiring a system reboot. Instead of replacing binaries on disk and restarting the system, Hotpatch modifies in-memory code while the system is running.

This means updates take effect immediately, with no downtime, no maintenance windows, and no disruption to users.

Hotpatch payloads are small by design. Smaller updates mean faster downloads, quicker installs, and minimal impact on performance. CPU usage stays low. No spikes. No slowdowns. Just updates that run in the background and finish silently.
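As a loose analogy for the in-memory patching idea described above, consider redirecting callers to a fixed implementation while a program keeps running, instead of replacing the binary and restarting. This Python sketch is purely illustrative; Windows Hotpatch operates at the OS and binary level, and nothing here reflects its actual mechanism:

```python
# Analogy only: swap code in place while the process keeps running.

class Service:
    def handle(self):
        return "vulnerable response"

svc = Service()          # long-running instance; a "reboot" would drop it
print(svc.handle())      # still the vulnerable code path

def patched_handle(self):
    return "patched response"

# Apply the fix in memory; existing objects pick it up immediately,
# with no restart and no lost state.
Service.handle = patched_handle
print(svc.handle())      # patched response, same live instance
```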

“The experience is so seamless you don’t even know what happened,” says Nevine Geissa, a partner group program manager within the Windows product team. “There are no process restarts, no logging out, no performance impact. No glitch in the video playing or transaction dropping. Everything just works as if nothing has happened.”

Because Hotpatch updates happen so painlessly in the background, IT administrators may want to understand how the process works and what validation steps are involved. That’s why we test Hotpatch updates with the same rigorous standards we apply to all our security updates.

A photo of Geissa.

“Hotpatch updates go through the exact same validation and rigor that a standard security update goes through. There is no compromise on quality whatsoever. Your device is always as secure as your non-hotpatch device.”

Nevine Geissa, partner group program manager, Windows Servicing and Delivery

Even in cases of zero-day vulnerabilities, Hotpatch can deliver out-of-band updates to enrolled devices without requiring a reboot.

Hotpatch is available for Windows 11 version 24H2 or later, Windows 365, Azure Virtual Desktop, Windows Server 2022/2025 Azure Edition, and Azure Arc-connected Windows Server 2025 Datacenter and Standard editions.

The technology has matured over years of internal development.

“Hotpatch updates go through the exact same validation and rigor that a standard security update goes through,” Geissa says. “There is no compromise on quality whatsoever. You will always be at the exact same level of security.”

Hotpatch has evolved and grown.

“It started as internal server capability in Azure and then expanded to our Windows Server 2022 customers,” says Nikita Deshpande, a senior customer experience program manager within the Windows Servicing and Delivery product team at Microsoft. “The tooling and OS support have matured such that we can now offer Hotpatch to AMD64 and Arm64 client machines too.”

Hotpatch integrates seamlessly with Autopatch, a cloud-based service from Microsoft that automates the process of keeping Windows devices up to date. Designed for enterprise environments, and powered by Microsoft Intune, Autopatch manages updates for Windows, Microsoft 365 Apps for enterprise, Microsoft Edge, and Microsoft Teams, reducing the manual effort required by IT administrators.

Any new policy in our environment created with Autopatch automatically enables Hotpatch—if the device meets requirements. Admins can set up rings, monitor compliance, and roll out updates with just a few clicks.

“It’s the better together story,” Deshpande says. “Autopatch streamlines everything. Add Hotpatch, and it takes Windows Update to a whole new level.”

Implementing Windows Hotpatch internally at Microsoft

The implementation of Hotpatch at Microsoft Digital involved more than developing and deploying a feature; it also meant establishing trust with customers.

The journey started years ago with virtual machines in Azure, then expanded to Windows Server across physical and virtual instances. Now it’s on Windows 11 clients and scaling fast, but getting here took deep collaboration.

Our team in Microsoft Digital partnered with the product team from the start. We were co-designers with experience in this space. We helped shape the rollout, validate the experience, and make sure Hotpatch was ready for enterprise scale.

Then we scaled. We expanded to 40,000, then 80,000, then 120,000 devices. We’re on track to reach 450,000 devices at Microsoft in the next four months.

We also wanted to enable a great admin experience for the product. Its features support a smooth rollout, and its visibility helps admins monitor deployments and measure impact. We’re continually collaborating with the Windows product team to equip administrators with comprehensive insights and actionable recommendations for Hotpatch.

“We worked closely with the product team to make sure admins had the right metrics to measure success,” says Harshitha Digumarthi, a senior program manager with Microsoft Digital. “It’s not just about implementation—it’s about knowing it worked.”

We ran early adopter programs and insider rings to gather feedback from across Microsoft. That feedback loop helped refine the experience, improve reporting, and ensure the rollout was smooth.

Achieving security without compromising on productivity

Hotpatching is changing how we think about security.

“With Hotpatch, we’re seeing 81% of Microsoft’s enrolled devices become compliant within 24 hours of Patch Tuesday and 90% of enrolled devices are patched within five days.”

Harshitha Digumarthi, senior program manager, Microsoft Digital

Before, it took our team up to nine months to reach 95% compliance for security patching.

That’s nine months of exposure and nine months of risk.

With Hotpatch, we’re achieving 95% compliance in less than three weeks.

“With Hotpatch, we’re seeing 81% of Microsoft’s enrolled devices become compliant within 24 hours of Patch Tuesday, and 90% of enrolled devices are compliant within five days,” Digumarthi says.

That’s not just faster. It’s safer.

“We’re reducing the risk window,” Digumarthi says. “From vulnerability discovery to patch deployment, we’re closing the gap—without disrupting users.”
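To make compliance metrics like these concrete, here is a hypothetical Python sketch of how compliance-within-a-window can be computed from per-device patch timestamps. The data, device IDs, and field names are invented for illustration; real reporting would pull this data from a tool such as Intune.

```python
from datetime import datetime, timedelta

def compliance_within(devices, patch_tuesday, window_days):
    """Fraction of enrolled devices patched within `window_days` of release.

    `devices` maps a device ID to the datetime its update finished,
    or None if the device hasn't been patched yet.
    """
    deadline = patch_tuesday + timedelta(days=window_days)
    patched = sum(
        1 for finished in devices.values()
        if finished is not None and finished <= deadline
    )
    return patched / len(devices)

# Invented example data: three devices patched at different times.
patch_tuesday = datetime(2025, 6, 10)
fleet = {
    "dev-001": datetime(2025, 6, 10, 14, 0),   # same day
    "dev-002": datetime(2025, 6, 13, 9, 0),    # three days later
    "dev-003": None,                           # not yet patched
}
```

With this sample fleet, `compliance_within(fleet, patch_tuesday, 1)` reports one of three devices compliant within 24 hours, and `compliance_within(fleet, patch_tuesday, 5)` reports two of three within five days.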

And it’s not just internal. Since general availability in April, Hotpatch has scaled to over 4.5 million devices globally. That growth shows trust and momentum.

It also shows value. Admins spend less time chasing updates. End users stay productive. And security teams get the compliance they need—without the friction.

“Hotpatching eliminates the trade-off between security and productivity,” Deshpande says. “You don’t have to choose anymore.”

Improving the user experience

Hotpatching doesn’t just improve security—it transforms the user experience.

For end users, it’s invisible.

Updates happen in the background.

No pop-ups. No restarts. No performance hits.

“It’s so seamless,” Geissa says. “There’s no bubble. No prompt. It just works.”

The first few times, users might see a green banner letting them know they’ve been hotpatched.

A photo of Selvaraj.

“It’s really helpful as an end user; I feel more secure. I don’t need to keep checking and making sure my device is up to date. It just is.”

Senthil Selvaraj, principal group product manager, Microsoft Digital

It’s subtle. It’s clean.

It’s so effective that it’s become a kind of badge among Microsoft insiders.

“It’s really helpful as an end user—I feel more secure,” says Senthil Selvaraj, a principal group product manager at Microsoft Digital. “I don’t need to keep checking and making sure my device is up to date. It just is.”

That’s the magic.

Hotpatching doesn’t interrupt work—it protects it.

It helps other systems stay current too. When the OS is secure, dependent apps and services can update more reliably. That ripple effect improves the overall health of the device.

Admins also see the benefits. Intune reporting shows which devices are ready, which have updated, and which need attention. That visibility helps IT teams track compliance without chasing down machines or relying on manual checks.

For enterprises, it means fewer help desk calls. Fewer complaints. Fewer delays.

Looking forward

Hotpatching is just getting started.

At Microsoft Digital, we’re expanding from 100K to 450K devices in the next four months. That’s nearly every eligible device in our fleet.

Externally, adoption is accelerating. We’ve gone from zero to almost 4.5 million devices since private preview in November 2024. That includes Microsoft and customer fleets, and the number keeps growing.

But scale is just the beginning.

The product team is exploring ways to improve compliance visibility—giving admins deeper insights into patch status, readiness, and impact. That means better reporting, smarter dashboards, and tighter integration with compliance tools.

We’re also working to make adoption easier.

Documentation is improving, Intune reporting is evolving, and we’re building clearer guidance for customers to validate their environments, understand their risk posture, and deploy Hotpatch confidently.

The vision is simple: secure every device, without disruption.

Key takeaways

Here are several key actions you can take to successfully implement Windows Hotpatch in your organization:

  • Check your eligibility and prerequisites. Confirm which of your devices are eligible and set up the prerequisites in your environment to be hotpatch-capable.
  • Monitor devices and report compliance. Use Intune and other reporting tools to track device readiness, update status, and compliance, even for unmanaged environments.
  • Communicate the benefits to users. Inform users that hotpatching enhances device security with minimal disruption and no required restarts.
  • Deliver a seamless update experience. Emphasize the uninterrupted, restart-free, and performance-neutral nature of updates for users.
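For the first takeaway, a rough eligibility check for client devices might look like this hypothetical, simplified Python sketch. The build threshold (26100 for Windows 11 version 24H2) and the Virtualization-Based Security (VBS) requirement reflect public guidance, but consult the official documentation for the authoritative, current prerequisites.

```python
# Hypothetical, simplified Hotpatch eligibility check for Windows 11 clients.
# Real eligibility depends on additional factors (licensing, management
# channel, enrollment policy) not modeled here.

WINDOWS_11_24H2_BUILD = 26100  # first build of Windows 11 version 24H2

def hotpatch_eligible_client(os_build: int, vbs_enabled: bool) -> bool:
    """Return True when a device meets these two baseline requirements:
    Windows 11 24H2 or later, with Virtualization-Based Security enabled."""
    return os_build >= WINDOWS_11_24H2_BUILD and vbs_enabled
```

For example, a 23H2 device (build 22631) fails the check regardless of its VBS state, while a 24H2 device passes only once VBS is enabled.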

The post Transforming security and compliance at Microsoft with Windows Hotpatch appeared first on Inside Track Blog.

]]>
20455
Closing the deal with Microsoft 365 Copilot for Sales at Microsoft http://approjects.co.za/?big=insidetrack/blog/closing-the-deal-with-microsoft-365-copilot-for-sales-at-microsoft/ Thu, 28 Aug 2025 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20051 Microsoft Digital stories We didn’t just hope Microsoft 365 Copilot for Sales would make a difference for our sales team here at Microsoft—we measured it. The results were outstanding. In the first few months after adopting Copilot for Sales, our sellers experienced the following: These aren’t projections or pilot estimates. They’re real results from real […]

The post Closing the deal with Microsoft 365 Copilot for Sales at Microsoft appeared first on Inside Track Blog.

]]>

Microsoft Digital stories

We didn’t just hope Microsoft 365 Copilot for Sales would make a difference for our sales team here at Microsoft—we measured it.

The results were outstanding.

In the first few months after adopting Copilot for Sales, our sellers experienced the following:

  • Per-seller revenue went up 9.4 percent
  • Opportunity per seller grew by 5 percent
  • Individual win rates jumped by 20 percent

These aren’t projections or pilot estimates. They’re real results from real usage across our sales teams.

These numbers reflect more than just efficiency. They show that when AI is embedded in a sales workflow, it doesn’t just save time—it drives outcomes. Our sellers aren’t hunting for data, they’re capitalizing on leads and closing deals.

The impact is clear.

Copilot for Sales is helping Microsoft sellers focus on what matters most: building relationships, understanding customer needs, and winning business.

Understanding the need for AI-driven transformation

For years, our sellers have worked with a patchwork of tools—MSX (our internal sales platform), Dynamics 365, Outlook, Teams, pipeline trackers, and more.

Each serves a purpose, but together they created friction.

“We have great data across our tools. But it’s a ton of information to drill into while you’re going from call to call and you really just need to have a quick answer on something. That hasn’t been easy to do in the past.”

A photo of Dollar.
Jeremy Dollar, principal technology specialist, Microsoft Customer and Partner Solutions

In the past, sellers had to jump between systems to update forecasts, respond to customers, and satisfy internal reporting. The result was a fragmented experience that slowed them down.

Sellers often felt they spent more time updating systems than engaging with customers.

“We have great data across our tools,” says Jeremy Dollar, a principal technology specialist with Microsoft Customer and Partner Solutions (MCAPS). “But it’s a ton of information to drill into while you’re trying to work with the customer. You’re going from call to call and you really just need to have a quick answer on something to know in the moment what’s going on with that customer. That hasn’t been easy to do in the past.”

Dollar spends most of his time working in the Microsoft sales pipeline, helping Microsoft customers understand how they can benefit from Microsoft products and finding solutions to match their needs.

Our sellers also have the responsibility of updating sales data and maintaining CRM records.

It isn’t anyone’s favorite task. Whether it’s MSX, Dynamics, Salesforce, or a myriad of other systems, manual data entry feels like a tax on their time. Forecasts lag. Pipelines go stale. And the tools meant to help end up getting in the way.

This complexity creates friction. Sellers have to jump between systems to find customer data, update opportunities, or prepare for meetings. The lack of integration can slow them down and pull focus away from customers.

“It’s the mundane work that nobody likes to do, but we all have to do it. It’s always been a sort of ‘sales tax’ that we all associate with the CRM,” says Bob Lincavicks, a principal technical specialist with Microsoft Customer and Partner Solutions.

The challenge isn’t just technical—it’s behavioral. Many sellers rely on personal systems like notebooks or spreadsheets to track contacts and deals. CRM becomes inconsistent, and data quality suffers as a result. Without a unified experience, sellers are left to stitch together their own workflows.

Moving into modern sales management with Copilot for Sales

Enter Copilot for Sales.

Copilot for Sales is our AI-powered assistant built specifically for sellers. It brings CRM data, productivity tools, and generative AI together—right inside the Microsoft 365 apps sellers already use every day, like Outlook, Teams, Word, and Excel.

It’s not just another chatbot. It’s a deeply integrated experience that helps sellers:

  • Prepare for meetings with AI-generated summaries, recent email threads, and opportunity insights.
  • Draft emails with CRM context and BANT (Budget, Authority, Need, Timeline) analysis.
  • Update CRM records directly from Outlook or Teams—no more switching tabs or searching through systems.
  • Create tasks and follow-ups from meeting recaps with just a couple of clicks.
  • Collaborate in real time using adaptive cards, deal rooms, and shared spaces in Teams.

Copilot for Sales is designed to proactively surface the information and capabilities that sellers need when they need it, in the flow of their work.

It’s built on the same infrastructure as Microsoft 365 Copilot, meaning it benefits from enterprise-grade security, Microsoft Graph data, and seamless integration across the Microsoft ecosystem. And it works with multiple CRM systems like Dynamics 365 and Salesforce, so sellers can stay in the flow of work—regardless of their CRM.

The result is less time spent on admin tasks and more time selling.

Keeping sellers in the flow

Copilot for Sales isn’t just a new tool. It’s a new way of working—one that’s built around the seller, not the system.

Copilot for Sales transforms how Microsoft sellers work by delivering high-impact features directly into their daily flow. These aren’t just enhancements—they’re game changers.

Copilot for Sales is there for sellers in the tools they already use—Outlook and Teams. Instead of toggling between systems, they’re seeing customer emails, account insights, opportunity data, and meeting history all in one place.

When a seller opens an email, Copilot surfaces the full conversation thread, shows related opportunities, and highlights recent meetings. During calls, it captures insights and follow-ups. Afterward, it prompts sellers to log notes, create opportunities, and assign tasks—without ever opening the CRM.

This integration saves time and reduces friction. Sellers no longer jump between apps or lose momentum. They work where the action is.

“Copilot for Sales connects all the disparate tools our sellers use. Before, all of that information was spread out and tedious to find. Now it’s readily accessible in the context of where a seller is working, whether in Outlook, Teams, Word, or otherwise.”

Mills poses in a photo.
Denise Mills, senior business program manager, Microsoft Digital

Our sellers use natural language to interact with CRM data. They can ask questions like “What’s the latest with this opportunity?” or “Summarize my last meeting,” and get instant, contextual answers. This cuts down on prep time and helps sellers stay focused.

They can also summarize long email threads with a single click, capture leads from Outlook, and draft replies using CRM insights. Meeting prep is faster, follow-ups are automated, and CRM updates happen in the background.

One standout feature is the ability to create and edit CRM opportunities directly from Outlook. Sellers no longer need to switch tools or re-enter data—they do it all in the moment, in the app they’re currently using.

“Copilot for Sales connects all the disparate tools our sellers use,” says Denise Mills, a senior business program manager with Microsoft Digital, the company’s IT organization. “Before, all of that information was spread out and tedious to find. Now it’s readily accessible in the context of where a seller is working, whether in Outlook, Teams, Word, or otherwise.”

Copilot for Sales helps sellers quickly get ready for an upcoming customer engagement. It captures and summarizes the main points from recent interactions, helping sellers focus on what matters and make the meeting meaningful.

These features don’t just make work easier—they make sellers more effective.

Copilot for Sales: a day in the life

Microsoft sellers are using Copilot for Sales to streamline their day from start to finish—and the impact is showing up in every part of their workflow.

When preparing for meetings, sellers are reviewing summaries of past interactions, opportunity history, and forecast comments—all surfaced automatically in Outlook or Teams. They’re walking into customer conversations with full context, without having to dig through CRM records or email threads.

During meetings, Copilot is capturing action items, identifying sentiment, and tagging key participants. Sellers are staying focused on the conversation instead of scrambling to take notes.

After meetings, they’re turning those insights into tasks and CRM updates with just a few clicks. Follow-up emails are being drafted automatically. Opportunities are being created or updated in real time. Sellers are staying in the flow, and nothing is falling through the cracks.

“I don’t feel like I have to spend 90 percent of my call taking notes,” Lincavicks says. “I can be much more present and involved in the call itself.”

Deploying Copilot for Sales at Microsoft

Our journey began with a pilot. Microsoft launched Copilot for Sales inside our small, medium, and corporate (SMC) sales organization, where sellers were managing high account volumes and needed faster, more efficient ways to stay on top of their work. The pilot wasn’t small—eventually, it included the entire SMC group.

From the start, the focus was on learning. The team ran learning days, hosted academies, and delivered demos to help sellers understand how to use Copilot for Sales in their daily flow. These sessions weren’t just about training—they were about listening. Feedback from SMC sellers helped shape the product and guide future development.

That early momentum laid the foundation for broader adoption. It provided real-world insights, real usage data, and a clear sense of what sellers needed most.

Moving to enterprise scale

After proving success in SMC, we began expanding Copilot for Sales across the enterprise. The early pilot gave the team a head start—an audience that was already engaged, trained, and providing feedback. That momentum made it easier to scale.

The rollout wasn’t just technical—it was strategic. We in Microsoft Digital, the company’s IT organization, partnered with Microsoft Customer and Partner Solutions (MCAPS), a global organization within Microsoft that brings together customer-facing teams and partner-facing teams to drive customer success and grow the business.

The MCAPS team worked with our IT change management teams to build a global adoption plan. They ran learning days, hosted demos, and worked closely with adoption leads in every region. The goal was to meet sellers where they were and show them how Copilot could fit into their daily workflows.

Creating real change and improvement as Customer Zero

As the first and most active adopters of Copilot for Sales, our internal sales teams are acting as Customer Zero. By putting Copilot into the hands of our sellers first, we can collect honest, in-depth feedback that the Copilot for Sales product team uses to rapidly iterate on new features based on genuine user needs.

Customer Zero means that the organization is not only adopting the technology but also shaping it, stress-testing workflows, and surfacing opportunities for improvement before sharing with the broader market.

“Our sellers are co-creators, actively influencing the future of Copilot for Sales,” Mills says. “The Customer Zero approach ensures that by the time Copilot for Sales reaches external customers, it’s already been refined by some of the most demanding and insightful users in the industry.”

Mills and her team work as a bridge between Microsoft sellers and the Copilot for Sales product team. They drive feedback loops, gathering insights from internal users—primarily sellers—and deliver that feedback directly to the product teams. This includes organizing research sessions, facilitating direct interactions between sellers and product managers, and ensuring feedback reflects global perspectives. Feedback from regions such as Japan, China, and Europe was crucial, especially in contexts involving language challenges or compliance with local regulations.

One of the primary ways Mills and her team capture feedback is through structured, role-specific listening circles. These sessions bring together our sellers, product specialists, and managers to share what’s working, what’s not, and what they need next from Copilot for Sales.

Each session is guided by a whiteboard framework and includes interactive polls, open discussions, and targeted prompts. We’re asking sellers about their workflows, their friction points, and how Copilot fits—or doesn’t fit—into their day. The goal is to surface barriers to adoption, identify unmet needs, and gather ideas for improvement.

Nurturing an active user community

We’re driving Copilot for Sales adoption through hands-on, role-specific enablement. It started with learning days and live demos across SMC and enterprise teams. These sessions are continuing today, helping sellers see how Copilot fits into their daily flow.

In one session, sellers walk through how to use Copilot to prep for a meeting, capture notes, and update CRM—all without leaving Outlook. In another, they practice using natural language prompts to summarize email threads or generate follow-up tasks. These aren’t just demos—they’re working sessions that show real value.

Champions are playing a key role. Microsoft is identifying early adopters who are enthusiastic about Copilot and equipping them with deep-dive resources. These champions are hosting office hours, answering questions, and sharing tips with their teams.

This grassroots approach is building momentum. Sellers aren’t just learning Copilot for Sales—they’re teaching it, sharing it, and embedding it into how they work.

Creating seller-focused benefits

The result is that Microsoft sellers are going beyond using Copilot for Sales—they’re relying on it.

“The CRM used to feel like a tax. Now it delivers value,” Lincavicks says. “Copilot for Sales has made CRM data feel alive. For the first time, I feel like there’s the ability for my data to meet me wherever I want to do my work.”

It’s a recurring sentiment across our sellers, and it’s a signal that the relationship between sellers and their tools is changing.

We’ve reached a turning point. When we first introduced Copilot for Sales, we were optimistic. We had a vision of what AI could do for sellers, but we didn’t yet have the proof. Now we do.

This isn’t just a story about new features. It’s about measurable impact. Sellers are seeing real gains in productivity, pipeline, and win rates. And they’re doing it without having to change how they work.

While Copilot for Sales certainly provides convenience to sellers, the biggest benefit is confidence.

“Copilot for Sales is incredibly helpful for getting ready for any type of customer engagement,” Dollar says. “It’s incredibly valuable to have that information proactively brought to your attention. It’s not about saving time at the end of the day, it’s about being able to be better and do more with the time that I have.”

The feedback is clear: Copilot for Sales isn’t just helping sellers work faster; it’s helping them work smarter, with more focus, more clarity, and more impact.

“It’s not just about making sellers more impactful—it’s about simplifying their world. They can create opportunities, update contacts, and capture meeting insights—all without leaving their flow of work. Simplification and ease of use drive impact.”

Wooldridge poses in a photo.
Kevin Wooldridge, senior director of business programs, Microsoft Digital

That’s a big deal. Especially in a company like Microsoft, where the sales ecosystem is notoriously complex. If we can simplify the experience here, we can do it anywhere.

This moment matters because we’ve moved from promise to performance. From pilots to platform. From hoping AI would help to knowing it does.

“It’s not just about making sellers more impactful—it’s about simplifying their world,” says Kevin Wooldridge, a senior director of business programs within Microsoft Digital. “They can create opportunities, update contacts, and capture meeting insights—all without leaving their flow of work. Simplification and ease of use drive impact.”

Looking forward

We’ve moved from promise to proof with Copilot for Sales. Now we’re building what’s next.

We believe the future of Copilot for Sales is agentic. We’re evolving from a conversational assistant to a proactive partner that can act on behalf of the seller. The approach is less “tell me what to do” and more “I’ve already done it.”

We’re piloting Lead Intelligence Agents that research prospects, draft outreach, and follow up automatically. Opportunity Intelligence Agents summarize recent interactions, flag risks, and suggest next steps. And Meeting Prep Agents surface possible objections, key stakeholders, and action items before a seller even joins the call.

We’re also integrating Copilot more fully into the tools sellers already use. In Outlook, sellers can now update CRM records directly from email banners, share CRM data with a simple @mention, and generate follow-ups with context-aware prompts. In Teams, meeting recaps now trigger opportunity creation and task assignments—even less jumping between systems.

The next generation of sellers won’t just use AI—they’ll rely on it. And they’ll outperform those who don’t. Sellers using Microsoft 365 Copilot for Sales are already closing more deals, faster. They’re saving time, staying in the flow, and focusing on what matters.

Key takeaways

If you’re looking to successfully adopt Copilot for Sales, here are some practical actions you can take to maximize its advantages:

  • Integrate Copilot for Sales into your sellers’ daily workflow. Encourage your team to start using Copilot for Sales within Outlook and Teams to seamlessly manage CRM updates, generate follow-ups, and complete tasks without leaving your core platforms.
  • Leverage productivity-boosting features. Utilize the automation and smart suggestion features of Copilot for Sales to streamline sales activities, reduce manual effort, and dedicate more time to building strong connections with customers.
  • Hold enablement sessions. Participate in live demos and training events and explore role-specific resources to quickly develop expertise and unlock the full value of Copilot for your team.
  • Prepare for future capabilities. Stay updated on new innovations like Lead Intelligence Agents and Meeting Prep Agents so you can adopt emerging features that will further enhance your sales strategy.

The post Closing the deal with Microsoft 365 Copilot for Sales at Microsoft appeared first on Inside Track Blog.

]]>
20051
Securing the borderless enterprise: How we’re using AI to reinvent our network security http://approjects.co.za/?big=insidetrack/blog/securing-the-borderless-enterprise-how-were-using-ai-to-reinvent-our-network-security/ Thu, 10 Jul 2025 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=19504 The modern enterprise network is complex, to say the least. Enterprises like ours are increasingly adopting hybrid infrastructures that span on-premises data centers, multiple cloud environments, and a diverse array of remote users. In this context, traditional security tools are still playing checkers while the malicious actors are playing chess. To make matters worse, attacks […]

The post Securing the borderless enterprise: How we’re using AI to reinvent our network security appeared first on Inside Track Blog.

]]>
The modern enterprise network is complex, to say the least.

Enterprises like ours are increasingly adopting hybrid infrastructures that span on-premises data centers, multiple cloud environments, and a diverse array of remote users. In this context, traditional security tools are still playing checkers while the malicious actors are playing chess. To make matters worse, attacks are increasingly enabled by AI tools.

That’s why here in Microsoft Digital, the company’s IT organization, we’re using a modern approach and toolset—including AI—to secure our network environment, turning complexity into clarity, one approach, tool, and insight at a time.

Leaving traditional network security behind

For years, traditional network security relied on a simple but increasingly outdated assumption: everything inside the corporate perimeter can be trusted. This model made sense when networks were static, users were on-premises, and applications lived in a centralized data center.

But that world is gone.

A photo of Venkatraman.

“Implicit trust must be replaced with explicit verification. That means rethinking how we monitor, how we respond, and how we design for resilience from the start.”

Raghavendran Venkatraman, principal cloud network engineering manager, Microsoft Digital

Today’s enterprise is dynamic, decentralized, and borderless. Hybrid work has become the norm. Cloud adoption is accelerating. Teams are globally distributed. Devices and data move constantly across environments. In this new reality, the network perimeter hasn’t just shifted—it has effectively vanished.

That’s where the cracks in legacy security models become impossible to ignore.

Visibility becomes fragmented. Security teams struggle to track what’s happening across a sprawling digital estate. Traditional monitoring tools focus on infrastructure uptime or device health—not on the actual experience of the people using the network. That disconnect creates blind spots, and blind spots create risk.

We know that this model no longer meets the needs of a modern, AI-powered enterprise. Every enterprise needs a new approach—one that assumes breach, enforces least-privilege access, and continuously verifies trust.

“Implicit trust must be replaced with explicit verification,” says Raghavendran Venkatraman, a principal cloud network engineering manager in Microsoft Digital. “That means rethinking how we monitor, how we respond, and how we design for resilience from the start.”

This shift is foundational to our security strategy. It’s not just about securing infrastructure—it’s about securing the experience. Because in a world where users, data, and threats are everywhere, trust has to be proved, not assumed.

Building a resilient and adaptive security strategy

To secure hybrid corporate networks effectively, organizations must go beyond traditional perimeter defenses. They need a comprehensive and adaptive security strategy—one that evolves with the threat landscape and aligns with the complexity of modern enterprise environments. The diversity of hybrid networks introduces new vulnerabilities and expands the attack surface. A static, one-size-fits-all approach simply doesn’t work anymore.

At Microsoft Digital, we’ve embraced a layered, cloud-first security model that integrates identity, access, encryption, and monitoring across every layer of the network. It’s embedded in everything we do. This model includes these key strategies, which we’ll expand upon in the following sections:

  • Adopting Zero Trust principles
  • Establishing identity as the new perimeter 
  • Integrating AI and machine learning
  • Enforcing network segmentation
  • Embracing continuous monitoring

Adopting Zero Trust principles

Zero Trust Architecture (ZTA) operates on a strict principle: “never trust, always verify.” That means no user, device, or application—whether it’s inside or outside the corporate network—is inherently trusted as they are in the traditional network security model.

A photo of McCleery.

“Zero Trust isn’t a product—it’s a mindset. It’s about assuming breach and designing defenses that minimize impact and maximize resilience.”

Tom McCleery, principal group cloud network engineer, Microsoft Digital

Every access request is evaluated against dynamic policies. These policies consider several factors—like user identity, device health, location, and how sensitive the data being accessed is. For example, if an employee tries to access a financial report from a corporate laptop at the office, they might get in, no problem. But that same request from a personal device in another country could get blocked or trigger extra authentication steps.

At the heart of ZTA are policy enforcement points that authorize every data flow. These checkpoints only grant access when all conditions are met, and they log every interaction for auditing and threat detection. This kind of granular control reduces the attack surface and limits lateral movement if there is a breach.
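To make the decision flow concrete, here’s a minimal sketch of a policy enforcement point. It’s illustrative only: the names (`AccessRequest`, `evaluate_request`) and the specific rules are hypothetical stand-ins, not our production policy engine, but they show the pattern of evaluating every request against dynamic conditions and logging every decision.

```python
from dataclasses import dataclass

# Hypothetical model of a Zero Trust policy enforcement point (PEP).
# Nothing is trusted by default; every request is evaluated against
# dynamic signals, and every decision is logged for auditing.

@dataclass
class AccessRequest:
    user: str
    device_managed: bool       # corporate-managed device?
    device_healthy: bool       # meets compliance baselines?
    location: str              # coarse signal, e.g. "office" or "foreign"
    resource_sensitivity: str  # "low" or "high"

audit_log: list[dict] = []

def evaluate_request(req: AccessRequest) -> str:
    """Return "allow", "mfa" (step-up auth), or "deny", and log the decision."""
    if not req.device_healthy:
        decision = "deny"
    elif req.resource_sensitivity == "high" and not req.device_managed:
        # Sensitive data from an unmanaged device: block outright.
        decision = "deny"
    elif req.location != "office":
        # Unusual location: require extra authentication.
        decision = "mfa"
    else:
        decision = "allow"
    audit_log.append({"user": req.user, "decision": decision})
    return decision
```

With rules like these, the employee on a healthy corporate laptop at the office gets in, while the same sensitive request from a personal device in another country is blocked, matching the scenario described above.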

Adopting Zero Trust isn’t just a technical upgrade—it’s a strategic must. It boosts an organization’s ability to defend against modern threats like ransomware, insider attacks, and supply chain compromises.

“Zero Trust isn’t a product—it’s a mindset,” says Tom McCleery, a principal group cloud network engineer in Microsoft Digital. “It’s about assuming breach and designing defenses that minimize impact and maximize resilience.”

By embracing Zero Trust, we strengthen our security posture, lower the risk of data breaches, and respond more effectively to emerging threats.

Establishing identity as the new perimeter

Identity is no longer just a component of security—it has become the new perimeter. Traditional security models focused on defending the network edge, assuming that everything inside the perimeter could be trusted. But in today’s hybrid and cloud-first environments, the perimeter has dissolved, and that assumption is outdated and dangerous. Users, devices, and applications now operate across diverse locations and platforms, making perimeter-based defenses insufficient.

Identity-first security shifts the focus from securing the physical network to securing the identities—both human and machine—that interact with the network. This means every access request is treated as though it originates from an untrusted source, regardless of where it comes from. Whether it’s a remote employee logging in from a personal device or an automated workload accessing cloud resources, the system must verify who or what is making the request, assess the risk, and enforce least-privilege access across the user experience.

This approach enables organizations to implement more granular access controls. For example, a developer might be allowed to access a code repository but not production systems, and only during business hours from a managed device. Similarly, a service account used by a continuous integration and continuous deployment (CI/CD) pipeline might be restricted to specific APIs and monitored for anomalous behavior. A CI/CD pipeline is an automated workflow that takes code from development through testing and into production.
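The developer and service-account examples above can be sketched as a single least-privilege check. The role table, resource names, and `is_allowed` helper here are hypothetical examples for illustration; the point is that human and machine identities are evaluated by the same mechanism, and anything not explicitly granted is denied.

```python
# Hypothetical least-privilege policies for human and machine identities.
# The roles, resources, and time windows are illustrative only.

ROLE_POLICIES = {
    "developer": {
        "resources": {"code-repo"},              # not production systems
        "hours": range(9, 18),                   # business hours only
        "managed_device_required": True,
    },
    "ci-pipeline": {                             # a machine (service) identity
        "resources": {"build-api", "test-api"},  # specific APIs only
        "hours": range(0, 24),
        "managed_device_required": False,
    },
}

def is_allowed(identity: str, resource: str, hour: int, managed_device: bool) -> bool:
    """Grant access only when every least-privilege condition holds."""
    policy = ROLE_POLICIES.get(identity)
    if policy is None:
        return False  # unknown identity: never trusted by default
    return (
        resource in policy["resources"]
        and hour in policy["hours"]
        and (managed_device or not policy["managed_device_required"])
    )
```

The developer gets the repository at 10 a.m. from a managed device, but not production systems and not after hours, while the pipeline identity reaches only its allowlisted APIs.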

By anchoring network security around verified identities, organizations reduce their attack surface and improve their ability to detect and respond to threats. This identity-centric model is not just a security enhancement—it’s a strategic shift that aligns with how modern enterprises operate.

Integrating AI and machine learning 

AI and machine learning (ML) are foundational pillars in our network security strategy. Intelligent automation and advanced analytics help us not only detect and respond to threats, but also continuously improve our security posture in an ever-changing landscape. Here’s how we’re using AI and ML in some critical aspects of our approach to modern network security:

  • Threat detection and intelligence. We deploy AI-powered monitoring tools that sift through billions of network signals and logs across our hybrid infrastructure. By applying sophisticated ML algorithms, we can identify abnormal behaviors such as unusual login attempts or unexpected data transfers that could indicate a potential breach. These insights allow our security teams to focus on the most critical alerts, reducing noise and accelerating incident investigation.
  • Automated response and containment. Through automation, our security systems can respond to threats in real time. For example, if our AI models detect suspicious activity on a device, automated workflows can immediately isolate the affected endpoint, block malicious traffic, or revoke access privileges, all without waiting for manual intervention. This rapid response capability is essential for minimizing the potential impact of attacks and protecting our critical assets.
  • Predictive analysis and proactive defense. We use predictive analytics to forecast emerging vulnerabilities before they can be exploited. By continuously training our models on the latest threat intelligence and attack patterns, we can anticipate risks and strengthen our defenses proactively—whether that means patching vulnerable systems, adjusting access controls, or updating our security policies.
  • User experience monitoring. We use AI to assess the real experience of our users, a critical measurement in a network environment where identity is the perimeter. By correlating performance metrics with security signals, we ensure that our security mechanisms don’t degrade productivity and that any anomalies impacting user experience are promptly addressed.
  • Continuous learning and improvement. Our AI and ML systems are designed to learn from every incident, adapt to new attack techniques, and evolve with the threat landscape. This continuous improvement loop enables our teams to stay ahead of sophisticated adversaries and maintain robust, resilient network security.
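The threat-detection bullet above hinges on spotting abnormal behavior against a learned baseline. The real systems use sophisticated ML models; the sketch below is a deliberately simple stand-in (a standard-deviation test over a baseline window, with an invented `flag_anomalies` helper) that shows the shape of the idea: unusual values, such as a spike in login attempts, get surfaced for investigation.

```python
import statistics

# Illustrative anomaly flagging: score new observations against a
# baseline window and flag values more than `threshold` standard
# deviations from the baseline mean.

def flag_anomalies(baseline: list[float], new_values: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Return the values in new_values that deviate abnormally from baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [v for v in new_values if abs(v - mean) > threshold * stdev]
```

Given a baseline of roughly 100 daily login attempts, a day with 104 passes quietly while a day with 500 is flagged, which is the kind of signal that lets security teams focus on the most critical alerts.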

Advanced threats require advanced responses. By integrating AI and ML into our network security strategies, we’re enhancing our ability to detect and respond to threats swiftly, minimize potential damage, and foster a secure environment for innovation and collaboration across our global hybrid infrastructure.

Isolating networks to minimize risk

In a hybrid infrastructure, isolating network segments is a foundational security principle. By segmenting networks, we limit the scope of potential breaches and reduce the risk of lateral movement by attackers. For example, separating employee productivity networks from customer-facing systems ensures that if a vulnerability is exploited in one area, it doesn’t cascade across the entire environment.

This is especially critical in environments where sensitive customer data and internal development systems coexist. Our testing and development environments must remain completely isolated—not only from customer-facing services but also from internal productivity tools like email, collaboration platforms, and identity systems. This prevents test code or experimental configurations from inadvertently exposing production systems to risk.

We also establish policy enforcement points (PEPs) within each network segment. These act as control gates, inspecting and filtering traffic between zones. By placing PEPs at strategic boundaries, we can tightly control what moves between segments and detect anomalies early. This architecture ensures that, if a breach occurs, the “blast radius”—the scope of impact—is minimal and contained.
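A PEP at a segment boundary can be thought of as an allowlist over segment pairs. The segment names and `pep_filter` function below are hypothetical, but they illustrate the containment property described above: because no flow connects the test environment to anything else, a compromise there has nowhere to move.

```python
# Hypothetical segment-to-segment allowlist enforced at policy
# enforcement points (PEPs). Any flow not explicitly allowed between
# zones is dropped, keeping the blast radius of a breach contained.

ALLOWED_FLOWS = {
    ("employee-productivity", "internet"),
    ("customer-facing", "internet"),
    # Deliberately no entries for "test-dev": it stays fully isolated.
}

def pep_filter(src_segment: str, dst_segment: str) -> bool:
    """Allow traffic only for explicitly permitted segment pairs."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS
```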

This layered approach to segmentation and isolation is essential for maintaining the integrity of our production systems, minimizing risk, and ensuring that our hybrid infrastructure remains resilient in the face of evolving threats.

Embracing continuous monitoring 

We’ve stopped thinking of monitoring as a one-time check. Now, it’s a continuous conversation with our network.

A photo of Singh.

“Conventional network performance monitoring—monitoring the systems and infrastructure that support our network—can only tell part of the story. To truly understand and meet our requirements, we must monitor user experiences directly.”

Ragini Singh, partner group engineering manager, Microsoft Digital

Continuous monitoring is how we stay ahead of issues before they impact our people. It’s how we keep our hybrid infrastructure resilient, performant, and secure—every second of every day.

We’ve built a monitoring ecosystem that spans our entire global network, from on-premises offices to cloud-based services in Azure and software-as-a-service (SaaS) platforms. With the mindset that identity is the new perimeter, we’re using signals from all aspects of our environment and focusing on the user experience.

“Conventional network performance monitoring—monitoring the systems and infrastructure that support our network—can only tell part of the story,” says Ragini Singh, a partner group engineering manager in Microsoft Digital. “To truly understand and meet our requirements, we must monitor user experiences directly.”

This isn’t just about tools and dashboards. It’s about insight. We’re using synthetic and native metrics to build a hop-by-hop view of the user experience. That lets us pinpoint where things go wrong—and fix them fast. We’re even layering in automation to enable self-healing responses when thresholds are breached.
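A minimal version of that hop-by-hop, threshold-driven self-healing loop might look like the sketch below. The hop names, thresholds, and the `self_heal` action are hypothetical placeholders; the real automation is far richer, but the pattern is the same: measure each hop, find the one that breached its threshold, and trigger a targeted remediation.

```python
# Illustrative hop-by-hop latency check with a stubbed self-healing
# action. Thresholds and hop names are hypothetical examples.

def find_degraded_hop(hop_latencies_ms: dict, threshold_ms: float = 100.0):
    """Return the name of the first hop over the latency threshold, or None."""
    for hop, latency in hop_latencies_ms.items():
        if latency > threshold_ms:
            return hop
    return None

def self_heal(hop_latencies_ms: dict) -> str:
    """Trigger a (stubbed) remediation when a threshold is breached."""
    hop = find_degraded_hop(hop_latencies_ms)
    if hop is None:
        return "healthy"
    return f"rerouting around {hop}"  # stand-in for an automated action
```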

Continuous monitoring is a strategic shift that helps us protect our people, power our services, and deliver the seamless experience our employees expect.

Looking to the future

As enterprises continue to navigate the complexities of hybrid infrastructures, securing enterprise networks requires an agile, multifaceted approach that integrates Zero Trust principles, identity-first security, and advanced technologies like AI and ML. By shifting the focus from traditional perimeter defenses to a more holistic and adaptive security model, organizations can better protect their assets, maintain operational continuity, and foster innovation in an increasingly interconnected world.

Implementing these strategies not only enhances security but also positions organizations to leverage the full potential of their hybrid infrastructures, driving growth and success in the digital age.

Key takeaways

Here are five key actions you can take to strengthen your organization’s defenses and embrace a modern approach to network security:

  • Adopt an identity-first security model. Shift your focus from traditional perimeter-based defenses to verifying and securing every user and device identity—regardless of location or network.
  • Integrate AI and machine learning into your security strategy. Continuously improve your security posture by using intelligent automation and analytics to detect, respond to, and predict threats more effectively.
  • Isolate network segments to minimize risk. Separate critical business functions, customer-facing services, and development environments to contain threats and ensure that any potential breach remains limited in scope.
  • Implement continuous monitoring across your hybrid infrastructure. Move beyond periodic checks by establishing real-time, user-centric monitoring to maintain resilience, performance, and rapid incident response.
  • Embrace a proactive, adaptive mindset. Regularly update your security policies, train your teams, and stay agile to address emerging threats and support innovation as your organization evolves.

The post Securing the borderless enterprise: How we’re using AI to reinvent our network security appeared first on Inside Track Blog.

Enabling modern support at Microsoft with AI http://approjects.co.za/?big=insidetrack/blog/enabling-modern-support-at-microsoft-with-ai/ Thu, 26 Jun 2025 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=19417 We’re using agent-based AI capabilities supported by a continuous improvement (CI) mindset and a comprehensive approach to security to redefine how we meet the support needs of our employees here at Microsoft. How are we doing this? We—Microsoft Digital, the company’s IT organization—are using autonomous AI agents built on platforms such as Microsoft Copilot Studio […]

The post Enabling modern support at Microsoft with AI appeared first on Inside Track Blog.

We’re using agent-based AI capabilities supported by a continuous improvement (CI) mindset and a comprehensive approach to security to redefine how we meet the support needs of our employees here at Microsoft.

How are we doing this?

We—Microsoft Digital, the company’s IT organization—are using autonomous AI agents built on platforms such as Microsoft Copilot Studio to enhance the experiences our employees have at work, to improve their productivity, and to drive innovation across Microsoft.

A photo of Berghofer.

“Autonomous agents are enabling us to transform how we provide support to our employees while also giving our support agents new tools and capabilities that they’re using to be more strategic and to uplevel the quality of their work.”

Trent Berghofer, general manager, Microsoft Digital Modern Support

Modernizing the support experience with AI-driven agents

We’re pivoting to using AI-driven agents as the primary way we deliver support to our employees. We’re using advances in large language models (LLMs) and generative AI to create an intelligent support layer that can perceive context, understand intention, and take action to achieve goals. AI’s ability to make decisions in the early stages of the user support journey is reducing our resolution time by streamlining support ticket intake, routing, and intent-gathering tasks.
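The intake-and-routing step can be sketched in a few lines. Our production system uses LLMs to infer intent; the keyword lookup below, along with the queue names and the `route_ticket` helper, is a hypothetical simplification that shows where the automated decision sits: before a ticket ever reaches a human.

```python
# Hypothetical intent-to-queue routing for support ticket intake.
# A real system would use an LLM; keywords are enough to show the step.

INTENT_QUEUES = {
    "password": "identity-support",
    "vpn": "network-support",
    "laptop": "device-support",
}

def route_ticket(description: str) -> str:
    """Map a free-text description to a support queue, with a fallback."""
    text = description.lower()
    for keyword, queue in INTENT_QUEUES.items():
        if keyword in text:
            return queue
    return "general-triage"  # no intent detected: route to human triage
```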

“Autonomous agents are enabling us to transform how we provide support to our employees while also giving our support agents new tools and capabilities that they’re using to be more strategic and to uplevel the quality of their work,” says Trent Berghofer, general manager of the Microsoft Digital Modern Support team. “We think that this is a blueprint our customers and partners can follow and learn from.”

Our agent-based design capitalizes on the strengths of AI orchestration to deliver streamlined, intelligent support experiences that enhance productivity for both employees and IT professionals. Specifically:

  • Contextual understanding: AI agents retain and interpret prior interactions to deliver more relevant and accurate support. This memory-driven context allows them to anticipate needs and tailor responses, improving the quality of assistance over time.
  • Workflow coordination: Agents autonomously plan and execute multi-step tasks, dynamically adjusting their approach based on real-time inputs. This enables seamless handling of complex support workflows with minimal human intervention.
  • Authoritative sources: By integrating with trusted enterprise systems and APIs, agents ensure that actions and recommendations are grounded in reliable, up-to-date information, reducing errors and increasing confidence in outcomes.

“For an employee to stay productive, they need a simple, accessible, and transparent support experience,” says Silvina Olkies, a senior director of Global End User Support Services and Employee Experience in Microsoft Digital.

AI-based autonomous agents serve as the foundation for our innovative self-service solutions and automation. Olkies’ team has designed modern support services to meet our employees in the flow of their work with simple, accessible, and transparent experiences.

“Our vision is for support to start and stay in Copilot. Employees should be able to resolve issues through AI-driven self-service or transition effortlessly to assisted support—without leaving the streamlined interface.”

Silvina Olkies, senior director of Global End User Support Services and Employee Experience, Microsoft Digital

Microsoft 365 Copilot has quickly become our tool of choice to enhance modern work practices at Microsoft. Our employees use Copilot to find answers, work faster, communicate more effectively, and boost creativity.

Copilot is a true personal AI assistant, demonstrating how AI can transform the way people work.

“Our vision is for support to start and stay in Copilot,” Olkies says. “Employees should be able to resolve issues through AI-driven self-service or transition effortlessly to assisted support—without leaving the streamlined interface.”

Copilot is a key enabler of agent-based services that make up our modern support model here at Microsoft Digital. We’re using it to deliver a more efficient, personalized, and proactive support experience while ensuring security and compliance across two major areas:

  • Our Employee Self-Service (ESS) Agent powers the user support experience, enabling a seamless, transparent, and digital-first support model. AI-driven analytics identify patterns, anticipate needs, and optimize support operations proactively, all while maintaining security and protecting user data.
  • AI and CI power IT operations and infrastructure management, equipping us with intelligent insights and automated workflows to enhance project management efficiency, streamline operational issue resolution, and ensure secure and compliant engagements with stakeholders.

CI is a foundational element of our support and operations strategy. With CI, we drive iterative enhancements in our tools and services through structured review cycles and embedded feedback mechanisms. The pace of our business and the technology we use to support it demands this level of agility in correcting issues, adding features, and evolving the way we operate as an organization.

We apply CI methodologies such as root cause analysis, process mapping, and trend monitoring to observe data from ESS Agent interactions, Copilot usage patterns, and survey sentiment. We use the insights from this analysis during sprint retrospectives and operational syncs, where corrective actions are defined, prioritized, and tracked.

Root cause analysis

In one of our tenants, we saw a high number of reopened and escalated tickets related to virtual machine (VM) initiation. After reviewing the tickets, we found that users were requesting VMs before their accounts were created, which prevented successful provisioning. We updated the onboarding process to ensure that account creation happens first. As a result, we reduced escalations and productivity churn by 54 percent and improved the overall support experience.
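The fix amounts to enforcing an ordering dependency in the onboarding workflow. This sketch is illustrative (the account store and `request_vm` function are invented for the example), but it captures the guard that prevents the failure mode: a VM request is rejected up front unless the requesting account already exists.

```python
# Illustrative onboarding guard: provisioning depends on the account
# existing first, so the request fails fast with a clear reason instead
# of failing silently downstream. Names are hypothetical.

existing_accounts = {"alice@contoso.example"}

def request_vm(account: str) -> str:
    """Start VM provisioning only for accounts that already exist."""
    if account not in existing_accounts:
        return "blocked: create the account first"
    return "vm-provisioning-started"
```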

Process mapping

We continuously document processes to ensure a seamless onboarding experience for new services. Traditionally, product managers conduct brownbag sessions to educate support agents on tools and procedures. However, capturing and formalizing this knowledge into Standard Operating Procedures (SOPs) is a time-intensive task, often requiring 6 to 8 hours of effort from our subject matter experts (SMEs).

To enhance efficiency and accuracy, we’re implementing an AI-powered solution that leverages generative AI to transform brownbag discussions into draft SOPs, reducing documentation time to approximately 2 hours or less, while maintaining high-quality standards. With AI-generated drafts supplemented by expert review, we anticipate improvements in both speed and precision, ensuring robust SOPs for product support.

Trend monitoring

To become more proactive in how we support services, we’ve introduced a demand management process that helps us identify and analyze trends in ticket volume and user requests. By using data to spot recurring issues and patterns, we provide structured feedback to Service Owners and Product Groups about defects, onboarding challenges, and service changes. This approach allows us to adapt quickly, reduce future volume, and influence improvements upstream, moving from reactive support to a more predictive and agile model.
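At its core, the trend-monitoring step counts recurring issue categories over a window and surfaces the ones that cross a review threshold. The sketch below is a minimal stand-in for that analysis; the `recurring_issues` helper and the threshold value are illustrative, not our actual demand-management tooling.

```python
from collections import Counter

# Minimal sketch of trend monitoring: count ticket categories over a
# window and surface any category whose volume crosses a threshold.

def recurring_issues(ticket_categories: list, min_count: int = 3) -> list:
    """Return categories seen at least min_count times, most frequent first."""
    counts = Counter(ticket_categories)
    return [cat for cat, n in counts.most_common() if n >= min_count]
```

Feeding a week of categorized tickets through a function like this highlights the recurring issue (say, VPN problems) that warrants structured feedback to the owning service team.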

This Kaizen-style approach to CI enables us to iterate quickly, using rapid experimentation and a fail-fast mindset. As a result, we’re optimizing support workflows and creating sustained reductions in incident recurrence. Incorporating CI practices into our delivery rhythms helps us ensure that every AI-driven improvement is measurable, intentional, and aligned with evolving user needs.

Our combination of AI and CI is built on a foundation of security, in line with the Microsoft Secure Future Initiative (SFI). SFI ensures that three core security principles are built into everything we do at Microsoft:

  • Secure by design. Security comes first when designing any product or service.
  • Secure by default. Security protections are enabled and enforced by default, require no extra effort, and are not optional.
  • Secure operations. Security controls and monitoring will continuously be improved to meet current and future threats.

Within our modern support model, we use AI to deliver intelligent insights, automation, and rapid impact. Embracing CI enables us to make disciplined and continuous advancements that drive long-term efficiency. Security remains our foundation, making sure we safeguard our data, maintain compliance, and ensure trust. Weaving AI, CI, and security together helps us streamline our workflows, reduce our operational burden, and enables us to proactively resolve issues while protecting our users and systems.

The modern support model

The modern support model represented by its key elements: AI capabilities in the user support experience with Autonomous AI Agents and continuous improvement in IT site operations supported by a foundation of security.
AI, CI, and a foundation of security power the modern support experience at Microsoft Digital.

Providing transparent self-service with the Employee Self-Service Agent

Our Employee Self-Service Agent is designed to improve the IT support experience. Driven by the power of Microsoft Copilot Studio, the tool has become the gateway to self-service support for Microsoft employees, and a key part of our agentic AI approach.

The agent consolidates multiple support channels into a single intelligent entry point that facilitates both self-service and seamless escalation when required. It empowers our employees with self-service capabilities for common IT needs, including:

  • Quick answers to common technical issues, including password resets, device setup, connectivity troubleshooting, software updates, and installations.
  • Ticket management features that allow employees to create requests, track status updates, and add comments or files to open tickets.
  • Proactive self-help prompts for top IT actions, such as outage alerts, device compliance status, active ticket updates, connectivity troubleshooting, authentication support, and domain join assistance.

Reimagining support with Copilot

User support is initiated in the Employee Self-Service Agent, supported by knowledge and workflows. Escalation to voice calls, live agent chat, and on-site support is provided through integration with ServiceNow.
Envisioning the user support experience with Copilot and the Employee Self-Serve Agent as the gateway to self-service support at Microsoft.

Our team is constantly improving the AI capabilities in the Employee Self-Service Agent to make support suggestions better, boost automation, and simplify workflows so that employees can access accurate solutions faster. Our Customer Zero approach uses feedback from Microsoft employees to improve how this agentic tool predicts user intent and helps resolve issues.

If the AI-powered capabilities of this tool don’t provide a complete solution to an employee’s support issue, human assistance is instantly available. The goal is that only the most critical and complex issues reach a support professional.

The results so far are encouraging, and the numbers indicate that the agent is creating significant benefits for us, including:

  • Self-service success rates have increased by 36%.
  • Successful self-service information discovery has increased by 34%.
  • User satisfaction ratings for IT support have increased by 18%.

With employees using the Employee Self-Service Agent as the entry point and 95% of interactions starting in Copilot-based interfaces, our focus is on improving Copilot features to create an even better user experience.

Improving IT site operations and infrastructure management with AI

We’re combining AI-driven capabilities, continuous improvement, and a comprehensive security framework to build effective IT operations and infrastructure management solutions that our modern support model revolves around.

IT Operations uses a range of internally developed AI agents tailored to specific scenarios. These agents enhance operational decision-making, streamline communications, and strengthen the security and efficiency of IT service delivery at Microsoft.

AI-driven capabilities for IT professionals

Device and infrastructure management, insights for decision-making and streamlined stakeholder management are supported by AI agents.
We’re supporting IT professionals with AI-driven infrastructure management, insights for decision-making, and streamlined engagement with stakeholders.

Here are some key scenarios where AI is revolutionizing IT site operations and infrastructure management:

  • Device and infrastructure management. We’re using agents to proactively manage endpoint devices, network infrastructure, and security compliance. For example, our engineering teams have developed three custom-built agents: Network Data Infrastructure, Device Care, and Network Spare Equipment Inventory. These agents optimize the processes for infrastructure deployments, resolving connectivity issues and service disruptions throughout our global infrastructure, while also helping to keep our devices healthy and reliable.
  • Insights for IT operations and strategic decision-making. AI-powered analytics enhance sentiment analysis, track service metrics, and drive proactive issue resolution. We’ve developed the Insight Foundation agent to help our IT operations teams analyze trends, interpret self-service survey results, and enable data-driven inquiries.
  • Streamlined stakeholder engagement. We’re developing AI-powered automation to consolidate key datasets for IT and corporate function partners, offering tailored insights on service health and program roadmaps. Business stakeholders will get seamless access to our service health and roadmap information tailored to specific roles and business functions to support timely planning and prioritization activities.

Our aspiration is that agent-based AI will autonomously drive strategic investment planning, seamless infrastructure deployment, and self-healing operations—maximizing efficiency, accelerating value, and ensuring a resilient digital workplace with minimal human intervention.

“As we continue our journey, agent-based AI is helping us rethink how we plan, build, and run our infrastructure. From smarter investments to self-healing operations, it’s all about creating a resilient digital workplace that runs smoother, faster, and with less manual effort.”

Simon Price, senior director of IT field management, Microsoft Digital

Creating a connected future with modern support at scale

Our AI-powered modern support model is already delivering tangible business impact. Our self-service IT support resolution has increased by 36%, demonstrating the impact of automation and AI on IT support. Additionally, faster agent response times, powered by real-time telemetry, and reduced IT ticket volumes further highlight this transformation.

As our support team continues to refine the AI agent-driven modern support approach, they’re committed to increasing user satisfaction and ensuring we realize increasing value from AI-driven tools.

Our team is also growing our investment in self-service solutions and continuing to modernize IT support by prioritizing AI-driven automation and predictive analytics, including investigating new ways to use Copilot capabilities for real-time insights and remediation. We’re reimagining support for the AI era. The modern support experience—built on AI-driven agents, CI, and security—ensures Microsoft employees are empowered by fast, seamless, and intelligent IT assistance and IT operations.

“As we continue our journey, agent-based AI is helping us rethink how we plan, build, and run our infrastructure,” says Simon Price, senior director of IT field management at Microsoft Digital. “From smarter investments to self-healing operations, it’s all about creating a resilient digital workplace that runs smoother, faster, and with less manual effort.”

The future of IT support is here. And at Microsoft Digital, we’re making it happen today.

Key takeaways

Here are four essential learnings you can apply to improve IT support and operations in your organization:

  • Enhance self-service capabilities. Empower employees with tools like the ESS Agent to address common IT needs quickly and independently.
  • Build proactive solutions. Use AI-driven insights and automation to predict user needs and resolve issues before they escalate.
  • Streamline IT operations. Explore AI opportunities across your unique scenarios, whether in infrastructure automation, operational intelligence, or stakeholder collaboration, to drive greater efficiency and strategic value.
  • Focus on employee satisfaction. Continuously improve support tools and experiences to boost user trust and engagement.

The post Enabling modern support at Microsoft with AI appeared first on Inside Track Blog.
