AI deployment and adoption Archives - Inside Track Blog
http://approjects.co.za/?big=insidetrack/blog/tag/ai-deployment-and-adoption/
How Microsoft does IT | Fri, 17 Apr 2026

Reclaiming engineering time with AI in Azure DevOps at Microsoft
http://approjects.co.za/?big=insidetrack/blog/reclaiming-engineering-time-with-ai-in-azure-devops-at-microsoft/
Thu, 16 Apr 2026

At Microsoft Digital, the company’s IT organization, we’re reimagining how engineers, product managers, and program managers work.

Microsoft Azure DevOps (ADO) is our company’s end-to-end software development lifecycle (SDLC) solution for planning, coding, testing, and delivery. It combines tools for work tracking, source control, pipelines, and artifacts so teams can manage the entire SDLC in one environment.

Although ADO excels at streamlining the development process, we found that users were still spending significant time performing repetitive administrative tasks, like creating and breaking down work items, writing and managing queries for reporting, and reclaiming lost permissions.
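
To make that overhead concrete, here is a rough sketch of one such task: creating a single work item through the ADO REST API. The field values and the `build_work_item_patch` helper are illustrative, and a real call also needs an organization, a project, and a personal access token for authentication.

```python
import json

# Azure DevOps creates work items from a JSON-patch body POSTed to
# https://dev.azure.com/{org}/{project}/_apis/wit/workitems/${type}?api-version=7.1
# with Content-Type: application/json-patch+json.
def build_work_item_patch(title: str, description: str) -> list[dict]:
    """Build the JSON-patch payload for one work item (fields are illustrative)."""
    return [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
    ]

patch = build_work_item_patch("Fix login timeout", "Users are logged out after 5 minutes.")
print(json.dumps(patch, indent=2))
```

Repeating this by hand for dozens of items per sprint is exactly the kind of work the team set out to reclaim.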

Our Engineering Systems Platform team successfully embedded AI into ADO, resulting in ADO experiences that replace manual workflows and free up our IT professionals to concentrate on work that makes a real impact.

Identifying the opportunity

The Engineering Systems Platform team supports 15,000 active users across one of the largest ADO platforms at Microsoft.

Three years ago, the team began exploring opportunities to automate repetitive ADO tasks like creating and updating work items, navigating project data, gathering statuses, and breaking large initiatives into sprint-ready work.

While they found ways to automate some of these tasks, they discovered decision-making and information synthesis still consumed valuable time and occasionally introduced some human errors.

“We saw the toll these processes took on users, whether they were compiling information or performing manual tasks,” says Gopal Panigrahy, a principal product manager in Microsoft Digital. “Even with automation, there was still an opportunity to give time back to engineers.”

Adding AI to ADO workflows

ADO spans a vast area at Microsoft, serving a wide range of enterprise use cases and personas. What these workers have in common is heavy workloads. With this in mind, different categories of ADO users expressed the desire for AI-powered experiences that could help streamline workflows and speed up day-to-day development tasks.

As generative AI matured, our team explored whether they could embed AI technology inside ADO to act as a real-time assistant, handling administrative work and answering contextual questions using natural language.

The guiding principles of the experiment were simple: Stay in context and preserve user control while aligning with existing ADO permissions and processes.

That vision led to the creation of two complementary Microsoft Copilot agents: The DevOps Assistant and the AI Work Item Assistant.

“We saw it as a win-win experiment,” says Debashis Sahoo, a principal group engineering manager in Microsoft Digital. “If we could give engineers time back in ADO, they could spend it building, not managing artifacts.”

What makes this initiative distinctive is that it brings AI closer to the core ADO product and its users. It allows secure, confidential, and context-rich ADO data to be used safely in meaningful AI-powered experiences.

DevOps Assistant offers conversational, in-context support

DevOps Assistant is a chat‑based experience present in the ADO user interface (UI). It’s activated in a side panel where users can ask natural language questions to retrieve information, check project statuses, and run common DevOps actions without navigating away from their main ADO display.

The DevOps Assistant enables cross-source discovery, which reduces context switching and discovery time and lowers the cognitive load for engineers and product managers. With less time spent switching contexts and hunting for information, ADO users can move faster and stay focused on product delivery.

Under the hood, the DevOps Assistant is a constellation of specialized agents, each of which is focused on a different segment of the DevOps lifecycle:

  • Work Item Agent creates, refines, and scopes work into sprint-ready backlogs
  • Knowledge Board Agent surfaces the right DevOps knowledge at the right moment
  • Permission Agent handles access and permission requests
  • Bulk Complete Agent runs repetitive, large-scale updates
  • Sprint Board Agent summarizes sprint status and provides instant, prompt‑driven insights

Agents are built in Copilot Studio and coordinated by the Orchestrator Agent, which serves as Copilot Studio’s front door.

For example, if a user asks to create or refine work items, the Orchestrator Agent routes the request to the Work Item Agent to handle. If the question is about permissions, then it delegates the work to the Permission Agent. It does this for each task.
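
The routing described above can be sketched as a simple intent classifier. The keyword rules below are a toy stand-in for the model-driven orchestration Copilot Studio actually performs; only the agent names come from the list above.

```python
def classify_intent(message: str) -> str:
    """Toy keyword router; a real orchestrator uses a model to infer intent."""
    rules = {
        "permission": "Permission Agent",
        "access": "Permission Agent",
        "sprint": "Sprint Board Agent",
        "work item": "Work Item Agent",
        "bulk": "Bulk Complete Agent",
    }
    text = message.lower()
    for keyword, agent in rules.items():
        if keyword in text:
            return agent
    return "Knowledge Board Agent"  # fallback for general questions

print(classify_intent("Can I get access to the Contoso project?"))  # Permission Agent
```

The value of the pattern is that each specialized agent stays small and testable while the orchestrator owns the routing decision.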

“We didn’t just build a chatbot,” says Apoorv Gupta, a principal software engineer in Microsoft Digital. “We built a distributed system of agents that understands the intent of the DevOps user and acts on it securely and in context.”

At present, the DevOps Assistant is available across all our internal ADO environments at Microsoft. The plan is to make it available to external customers soon.

AI Work Item Assistant provides inline assistance

The AI Work Item Assistant is a real-time embedded experience within ADO work items. Powered by Microsoft Foundry, it helps users create and refine work items using context and business requirements.

The assistant works immersively, keeping users focused and within ADO as they structure work items or generate child items from the parent.

For product and program managers who start with high‑level ideas, the assistant understands intent. It can automatically suggest logical, sprint‑ready breakdowns, helping to dramatically reduce the time spent on planning, sorting, and prioritizing work items.
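
As a rough illustration of that breakdown step, the sketch below parses a model’s bulleted response into child items linked to a parent. The response format, field names, and `parse_breakdown` helper are assumptions for illustration, not the assistant’s actual contract with Microsoft Foundry.

```python
def parse_breakdown(parent_id: int, llm_response: str) -> list[dict]:
    """Turn a model's bulleted breakdown into child items linked to a parent."""
    children = []
    for line in llm_response.splitlines():
        line = line.strip()
        if line.startswith("- "):
            children.append({"type": "Task", "title": line[2:], "parent": parent_id})
    return children

response = "- Design the settings page\n- Implement the API endpoint\n- Add integration tests"
for child in parse_breakdown(4321, response):
    print(child["parent"], child["title"])
```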

Screenshot showing the “Use AI to edit this item” button in the Azure DevOps UI.
The AI Work Item Assistant is just a click away in Azure DevOps work items.

Turning newfound time into innovation

The key to reclaiming time for your workforce isn’t just the introduction of new AI-driven features. It’s using the technology to enforce structure and quality at the beginning, so that everything downstream moves faster.

Panigrahy describes the practice as three reinforcing feedback loops.

The first loop is upstream quality amplification. AI agents help consistently structure work items with clear acceptance criteria and templates. The structure then feeds other tools (such as GitHub Copilot), allowing them to generate higher-quality code and more predictable outcomes—shortening the overall software development lifecycle.
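
A minimal sketch of such an upstream quality gate might look like the following; the required fields and thresholds are hypothetical, not Microsoft Digital’s actual template.

```python
# Hypothetical sprint-readiness check: items must carry these fields before
# they feed downstream tools like GitHub Copilot.
REQUIRED_FIELDS = ("title", "acceptance_criteria", "story_points")

def quality_gate(work_item: dict) -> list[str]:
    """Return a list of problems; an empty list means the item is sprint-ready."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS if not work_item.get(field)]
    criteria = work_item.get("acceptance_criteria") or []
    if criteria and len(criteria) < 2:
        problems.append("needs at least two acceptance criteria")
    return problems

item = {"title": "Add export button", "acceptance_criteria": ["CSV downloads"], "story_points": 3}
print(quality_gate(item))  # ['needs at least two acceptance criteria']
```

Enforcing structure this early is what lets everything downstream move faster.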

The second feedback loop is acceleration of execution. In a typical sprint planning session, a team of eight engineers might:

  • Take an hour (or more) to manually break user stories into more than 100 tasks
  • Create different tasks in their own style, introducing inconsistency and ambiguity
  • Generate uneven details, then spend time clarifying data later

With DevOps Assistant and AI Work Item Assistant, that same task breakdown turns into a prompt-driven action that no longer requires hours of work.

“It burns a lot of time for everyone to manually create each item in their own way, making sure they’re using the correct inputs from the product manager and confirming they aren’t missing anything,” Panigrahy says. “Now, with AI magic, it takes less than three minutes.”

The third feedback loop is capacity reinvestment. Instead of spending hours on tactical DevOps mechanics, teams can now spend more time on engineering judgment, resulting in better estimation, technical decisions, and design. They can use these reclaimed hours to learn new tools, experiment with new agents, and innovate on the SDLC.

“Capacity saving keeps giving back, in a loop,” Gupta says. “You get more capacity back. You innovate. You learn. You do better.”

What’s next on the AI-in-ADO journey

The DevOps Assistant and the AI Work Item Assistant can help change user behavior, shifting from time spent doing tactical DevOps tasks to performing higher‑value, judgment-based work. These tools can help teams increase work quality and reduce wasted time.

“Our next chapter is about making AI smarter, more action-oriented, and truly agentic,” Sahoo says. “The goal is to reduce cognitive load and allow the experience to live wherever users are—from Azure DevOps to Microsoft Teams and Microsoft 365—so the agent works seamlessly across their workflow.”

AI-driven productivity gains are arguably the biggest opportunity in the industry, and they’re fundamentally redefining the engineering experience at an unprecedented pace.

“While we’ve made huge strides embedding AI into the everyday Azure DevOps experience, it still feels like we’re just getting started,” Sahoo says. “Staying relevant means continuously evolving to deliver ever-greater value and efficiency to engineers.”

Key takeaways

Keep these tips in mind as you get started on your own journey with AI and Microsoft ADO:

  • Treat AI as a strategic accelerator, not as an add-on. Identify where your engineering process can use AI to move from simple assistance to transforming your workflows.
  • Target high-effort, high-volume tasks first. Analyze where your teams are spending significant manual time, even if AI tools are already in place in those workflows.
  • Validate productivity with measurable data, not intuition. Track time reclaimed, workflow efficiency, reduction in manual steps, and user satisfaction. Tangible data can help your initiative earn trust and justify the expansion of AI tool use on your team.

Microsoft CISO advice: How to build Trustworthy Agentic AI
http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-how-to-build-trustworthy-agentic-ai/
Thu, 16 Apr 2026

Building production-ready solutions with agentic AI comes with inherent risks. When agents make mistakes or hallucinate, the potential impacts can multiply rapidly.

“It turns out that it’s very easy to write AI-powered software, but it’s very hard to write AI-powered software that works right in real-world cases,” says Yonatan Zunger, CVP and deputy CISO for Microsoft.

Zunger explains how important testing is if you want to build trustworthy agentic AI.

Watch this video to see Yonatan Zunger explain how to build trustworthy agentic AI. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=eNU7c48541M)

Key takeaways

Here are best practices to apply while building trustworthy agentic AI:

  • Prototype. Test. Iterate. Think of and try prompts your real users might give your agentic AI. Use real data. From those trials, build a set of test cases and keep testing.
  • Use AI tools to amplify testing. Evaluating agents requires a “try it and repeat it” mindset. Using AI Foundry with tools such as the Python Risk Identification Tool (PyRIT) amplifies these assessment capabilities.
  • Record your tests. Applying this practice, as you would with unit testing, enables you to repeat evaluations as your data models and agents evolve.
  • Don’t skimp on testing. Test early, test often, test with real data. This is the best way to understand what your agent might do when it encounters the unexpected.
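
In that spirit, recorded prompt cases can be replayed like unit tests. The cases, the `evaluate` helper, and the stub agent below are illustrative stand-ins for your own agent and data, not a Microsoft tool.

```python
# Recorded test cases: each pairs a realistic user prompt with a check on
# the agent's answer, so evaluations can be repeated as models evolve.
RECORDED_CASES = [
    {"prompt": "What is our refund window?", "must_contain": "30 days"},
    {"prompt": "Cancel my order", "must_contain": "confirm"},
]

def evaluate(run_agent, cases=RECORDED_CASES) -> dict:
    """Replay recorded cases and tally pass/fail, like a unit-test suite."""
    results = {"passed": 0, "failed": 0}
    for case in cases:
        answer = run_agent(case["prompt"])
        key = "passed" if case["must_contain"].lower() in answer.lower() else "failed"
        results[key] += 1
    return results

# A stub agent for demonstration:
stub = lambda prompt: "Refunds are accepted within 30 days. Please confirm."
print(evaluate(stub))  # {'passed': 2, 'failed': 0}
```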

Skilling up for the future of work at Microsoft with Agent Launchpad
http://approjects.co.za/?big=insidetrack/blog/skilling-up-for-the-future-of-work-at-microsoft-with-agent-launchpad/
Thu, 16 Apr 2026

As AI continues to evolve and its applications across business workflows expand, it can be difficult for employees to stay on top of the latest developments. One of the most exciting shifts underway is our move toward AI agents, which are systems capable of taking autonomous action to accomplish tasks and achieve goals using models, tools, and multistep reasoning.

With agent usage growing rapidly, our team here in Microsoft Digital, the company’s IT organization, has invested in events and learning sessions to help employees adopt agentic approaches and get more value from Microsoft 365 Copilot.

One example was Camp Copilot, a peer‑led virtual training event dedicated to building employee Copilot skills. We also offered a Copilot Expo, which delivered a more formal, large‑scale learning program focused on role‑specific skills and deeper daily usage.

Now, we’ve consolidated learnings from those programs into Agent Launchpad, an accessible, multifaceted six‑module curriculum. Our instructional program is designed to develop our employees’ agentic AI skills, empowering them to take advantage of existing agents in their day-to-day work and build their knowledge and confidence to create new agents.

Why we built Agent Launchpad

Companies that fail to grasp the growing role of AI and agents in the workplace risk falling behind teams and organizations that are already redesigning their work around hybrid human-agent teams. We created Agent Launchpad to acknowledge this shift, demonstrate the power of agents, and show how they can be integrated into everyone’s daily work.

Unlike basic assistants that only respond to direct prompts, agents can plan, carry out actions, monitor progress, and iterate until they meet a goal. They can perform tasks like drafting content, analyzing data, automating workflows, scheduling meetings, triggering processes, and coordinating across multiple apps and services.
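
That plan, act, monitor, iterate cycle can be sketched as a generic loop; all three callables here are placeholder stand-ins for model-driven components.

```python
def run_agent_loop(goal_reached, plan_next_step, act, max_steps=10):
    """Iterate plan -> act -> check until the goal is met or the budget runs out."""
    history = []
    for _ in range(max_steps):
        if goal_reached(history):
            break
        step = plan_next_step(history)
        history.append((step, act(step)))  # record each action and its result
    return history

# Demo with trivial stand-ins: stop once three steps have been taken.
result = run_agent_loop(
    goal_reached=lambda h: len(h) >= 3,
    plan_next_step=lambda h: f"step {len(h) + 1}",
    act=lambda step: "done",
)
print(len(result))  # 3
```

The `max_steps` budget is the important design choice: an agent that iterates toward a goal needs a hard stop so it cannot loop indefinitely.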

At a higher level, agents can act as proactive collaborators, taking on routine tasks so human workers can focus on higher‑value thinking. Employees who aren’t engineers can create agentic tools, which becomes a cultural differentiator.

“Think of an agent as like hiring a really intelligent, enthusiastic university graduate,” says Kevin Wooldridge, a senior director of business programs in Microsoft Digital. “They may not have deep business experience yet, but they bring a high level of intelligence, energy, and scalability to the tasks you give them.”

Understanding how agents work is the new baseline for staying competitive. It’s the defining trait of the emerging Frontier Firm: A human‑led, agent‑operated organization designed for the AI era. Workers become agent bosses who define outcomes, while autonomous agents plan, reason, and run the workflows to deliver them.

How Agent Launchpad enables agent adoption

Integrating agents into existing workflows and processes can feel overwhelming. Our Agent Launchpad curriculum can help our employees get the most out of the technology.

To build our curriculum, our team incorporated input from a variety of stakeholders across Microsoft representing a range of backgrounds and technical expertise. They also included feedback from the Copilot Champs Community.

“Our employees told us they didn’t want someone lecturing over slides,” says Tom Heath, a senior business program manager in Microsoft Digital. “They wanted peer‑to‑peer learning, storytelling, showcases, and hands‑on experiences.”

Baked into our Agent Launchpad program are:

  • Detailed, approachable explanations of the existing agents available in the Copilot ecosystem
  • Practical guidance for how to use the agents
  • Step-by-step, hands-on labs for building new agents—regardless of the employee’s level of technical expertise

“People were being bombarded with information about agents, many of which were already live,” says Stephan Kerametlian, a senior director of business program management in Microsoft Digital. “Launchpad became a way to bring clarity and help them discover what already exists.”

Our curriculum explains how to get the most out of available agents, like our Employee Self-Service Agent. It also supports employees who want to build their own agents, whether by using Agent Builder for no‑code development or Copilot Studio for pro‑code development.

“Launchpad covers that full end‑to‑end journey at a time when information feels scattered and overwhelming,” Kerametlian says. “It gives people a structured, guided, modular path from the fundamentals all the way to developing agents, if that aligns with their skills and needs.”

Built for flexibility: Our Agent Launchpad curriculum

Given the broad range of skills and goals that our employees bring to the learning process, our six-module curriculum format was designed around two different tracks: The Explorer path and the Builder path.

Participants can sign up for live sessions or, if they prefer a self-guided approach, they can move through our modules on their own schedule. Learners have the option to earn participation badges by finishing modules, completing paths, or achieving other milestones within the curriculum.

“We talk about ‘buffet‑style learning’ a lot at Microsoft, and that applies here—but with AI and agents, many people don’t even know what they need,” says Cadie Kneip, a senior business program manager in Microsoft Digital. “That’s why we built two learning paths. We don’t believe everyone needs to be a builder, but everyone benefits from using agents to do their best work. Our goal is high‑quality agents and great usage experiences.”

Each path aligns with specific parts of our curriculum:

  • Explorer path, Modules 1-3: Offering both context-setting information as well as examples and usage guidance for existing Copilot agents, our first three modules are for those who want to understand broader agentic context and enhance their day‑to‑day work with available agentic options.
  • Builder path, Modules 1-6: For those who want to build their own agents, our full curriculum includes not only the first three modules but also no‑code agent development in Agent Builder (Module 4), agents that involve pro-coding via Copilot Studio (Module 5), and a showcase for new agents with recorded demos and use cases (Module 6).

As an enterprise-level company, Microsoft employs people with a wide variety of skills and backgrounds. That’s part of why Agent Launchpad works: People can choose their own agentic adventure.

“Launchpad provides a centralized starting point, with clear signposting to other assets and a sense of community. It lets us scale across the company and meet people where they are,” Wooldridge says. “If someone is deeply technical, there’s a path for them. If someone isn’t technical but wants to understand the hype and experiment, there’s a path for them too.”

The Frontier Firm mindset: A new way to think about work

While our Agent Launchpad curriculum includes detailed technical guidance for using and building agents, it’s also vital to emphasize the Frontier Firm mindset that employees need as we collectively approach a new era of AI-based work.

In the near future, a human‑led, agent‑operated organization built for the AI era—one in which humans define the outcomes they want, but agents decide how to achieve them—will become the new norm. The first module in our curriculum is designed to make sure that concept lands with learners, and it could be the most important part of the training.

“When our core team was designing what Agent Launchpad would look like, we wanted to make sure we weren’t just tackling the technology, but also the mindset and behavioral changes that come with it,” says Alexandra Jones, a director of business programs in Microsoft Digital. “That’s why we decided to cover the concept of the Frontier Firm—why people’s mindsets need to shift, and how we can address common concerns about AI.”

Agentic AI: A shifting landscape

Given the pace of innovation in the AI landscape, our Agent Launchpad program needed to be resilient, flexible, and minimally dependent on product documentation that might soon be outdated.

“It’s challenging to anticipate people’s needs in such a fast‑moving environment,” Wooldridge says. “We’re only slightly ahead of our employees on this journey ourselves, so we’re learning what’s valuable at the same time they are. That means we’re constantly recreating or updating content—it’s a hamster wheel of creation, delivery, revision, and more delivery.”

The pace of change is an ongoing challenge.

New agents ship constantly. The tools evolve every day, and the technology moves at lightning speed. Keeping Agent Launchpad current remains a priority, and our curriculum is continuously adapting.

“All of this is part of our evolution,” Kerametlian says. “Our first immersive learning experience was Camp Copilot. We learned from that and evolved it into Copilot Expo. Now we’ve iterated again and built Agent Launchpad. It’s essentially version 3.0—the best of what we learned from the earlier programs, retooled around agents.”

Driving interest: Enthusiastic responses to Agent Launchpad

Employees are seeing the value of our curriculum, as strong usage data indicates broad interest in the program. In addition to online engagement with our coursework, thousands of our employees have attended in-person sessions. It’s a level of participation that helps drive the goals of both agentic adoption and the Frontier Firm mindset at Microsoft.

Feedback has been overwhelmingly positive, with employees reporting high satisfaction along with a demonstrable uplift in weekly active agent usage across Microsoft. Many thoughtful recommendations have been captured and turned into insights that will inform our next phase of Agent Launchpad.

“Launchpad unexpectedly became extremely popular—it was supposed to be our pilot, and we didn’t promote it heavily,” Kneip says. “Because of that huge engagement, we want to find more ways to lean into rewards and celebrate people who submit their work, so people feel recognized and come back to learn with us again.”

Key takeaways

Here are some things to keep in mind as you develop your own training programs around the new agentic way of working:

  • Understanding how agents work is the new baseline for staying competitive. This is the defining trait of the emerging Frontier Firm: A human‑led, agent‑operated organization built for the AI era.
  • Agent Launchpad delivers insights to employees about the fast-moving agentic AI landscape. By building on our experiences with Camp Copilot and Copilot Expo, the program gives learners a structured, approachable way to understand, use, and build AI agents in their daily work.
  • The curriculum is designed to meet employees where they are. With Explorer and Builder paths, Agent Launchpad supports both agent adoption and agent creation—regardless of technical background or learning style.
  • The program helps employees develop a Frontier Firm mindset. The curriculum emphasizes not just how agents work, but how human-led, agent-operated teams are reshaping the future of work and the new habits we all need to build to leverage them.
  • Strong engagement and Copilot usage show that our participants are benefiting from the program. High participation rates and increased agent usage across Microsoft signal growing confidence, capability, and enthusiasm for agentic AI among employees.

Powering the technical veracity of AI at Microsoft with a Center of Excellence
http://approjects.co.za/?big=insidetrack/blog/powering-the-technical-veracity-of-ai-at-microsoft-with-a-center-of-excellence/
Thu, 16 Apr 2026

When we launched our AI Center of Excellence (CoE) in 2023, we had a straightforward goal: Help our organization experiment with AI, learn quickly, and do it responsibly.

Our teams across Microsoft Digital—the company’s internal IT organization—leaned in. We built tools, workflows, and AI-enabled solutions at speed. Momentum followed, along with real enthusiasm and growth.

But increasing scale required us to evolve our approach.

As adoption accelerated, we began to see duplication, uneven governance, and growing gaps between strategy and delivery. What helped us move fast early on wasn’t enough to sustain impact over time.

“We did a lot of good work building community and excitement,” says Qingsu Wu, a principal group product manager who leads the AI CoE at Microsoft Digital. “But at some point, we needed to evolve and put more structure around what we’d built.”

AI agents and solutions began appearing across Microsoft Digital. Different teams solved similar problems. Standards were interpreted differently. Reporting was inconsistent, and in many cases manual.

The question was no longer, “How do we help teams try AI?” It became, “How do we turn AI into consistent, measurable outcomes at scale?”

Answering that question required a change in how our CoE operated.

Rather than acting primarily as an advisory group, the AI CoE evolved into an execution‑focused function. Its role expanded from guidance to coordination, helping set priorities, define guardrails, and connect AI work directly to business outcomes.

The goal wasn’t to slow AI innovation down, but to help it move in the correct direction with more agility and better scalability.

Evaluating AI for Microsoft

The AI CoE connects AI strategy to execution across Microsoft Digital. It operates as a cross‑functional coordination layer that sets direction and creates shared accountability for how AI work gets done.

The CoE brings our leaders and practitioners together from AI, data, responsible AI, and operations to answer questions collectively. We use that cross‑disciplinary view to operate above individual projects without losing touch with day‑to‑day reality.

The CoE looks across the organization and answers questions individual teams can’t answer on their own.

  • What AI initiatives are already in flight?
  • Which ones matter most to the business?
  • Where are teams duplicating effort?
  • Where do we need clearer standards or stronger governance?

“We can see patterns that a single team can’t,” says Ria Khetan, a senior program manager in Microsoft Digital who helps lead program management for the AI CoE. “We’re translating AI CoE strategy and enterprise priorities into clear execution plans that work in each organization’s context. That helps us align priorities and make sure the biggest bets are actually landing.”

We’ve designed the AI CoE to act as the connective tissue between leadership intent and execution on the ground. It helps ensure that AI work across Microsoft Digital moves forward with purpose, consistency, and measurable impact.

Building transformation on core pillars

The AI CoE establishes a common structure that helps our teams work toward the same outcomes, even when they are building different solutions.

A photo of Campbell.

“We use the CoE to bring consistency to how AI work gets done. It gives us a way to step back and ask whether we’re solving the right problems and whether we’re set up to scale.”

Don Campbell, principal group technical program manager, Microsoft Digital

The operating model is intentionally simple.

AI initiatives are reviewed against shared pillars that help teams think beyond individual projects. These lenses ensure the work aligns to business priorities, can scale safely, has a clear delivery path, and supports responsible adoption.

“We use the CoE to bring consistency to how AI work gets done,” says Don Campbell, a principal group technical program manager who leads AI strategy here in Microsoft Digital. “It gives us a way to step back and ask whether we’re solving the right problems and whether we’re set up to scale.”

Our CoE uses these four pillars to guide our work:

  • Strategy. We work with product and feature teams to determine what we want to achieve with AI. They define business goals and prioritize the most important implementations and investments.
  • Architecture. We enable infrastructure, data, services, security, privacy, scalability, accessibility, and interoperability for all our AI use cases.
  • Roadmap. We build and manage implementation plans for all our AI projects, including tools, technologies, responsibilities, targets, and performance measurement.
  • Culture. We foster collaboration, innovation, education, and responsible AI among our stakeholders.

These pillars are the common language that helps us connect strategy to execution and make decisions across all teams and scenarios at Microsoft Digital.

Strategy

Our CoE strategy team’s role is to step back and create clarity.

Our strategy is driven from the organization’s top level, and executive sponsorship is crucial to executing our implementation well. When our transformation mandate comes from the organization’s leader, it resonates in every corner of the organization, every piece of work, and every task. We also encourage and welcome ideas from every level of the organization, empowering individuals to contribute their AI insights.

We maintain a centralized view of AI initiatives across Microsoft Digital, including agents, workflows, and AI‑enabled solutions. That visibility allows our CoE team to identify duplication, surface opportunities to scale successful ideas, and align investments to enterprise priorities. This creates a shared intake and prioritization model.

One of our CoE strategy team’s most significant responsibilities is prioritizing the idea pipeline for AI solutions. All employees can feed ideas into the pipeline through a form that records important details. The strategy team then evaluates each idea, analyzing two primary metrics:

  • Business value. How important is the solution to our business? Potential cost reduction, market opportunity, and user impact all factor into business value. As our business value increases, so does the idea’s position in the pipeline priority queue.
  • Implementation effort. How much work is required to deliver the solution? We focus on clearly defining the problem statement—what the problem is, why it matters, who the customer is, the baseline metrics, and the plan to attribute value pre‑production. This ensures we prioritize AI for the most critical business problems and can measure impact before and after deployment.
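To illustrate, a value-versus-effort prioritization like this can be sketched in a few lines. The `AIIdea` fields and the scoring formula below are hypothetical, not our actual intake form or weighting:

```python
from dataclasses import dataclass

@dataclass
class AIIdea:
    """A hypothetical intake record for an AI idea pipeline."""
    title: str
    business_value: int        # 1 (low) to 5 (high): cost reduction, market opportunity, user impact
    implementation_effort: int  # 1 (low) to 5 (high): estimated work to reach production

def priority_score(idea: AIIdea) -> float:
    """Rank ideas so high value and low effort rise to the top of the queue."""
    return idea.business_value / idea.implementation_effort

ideas = [
    AIIdea("Automate ticket triage", business_value=5, implementation_effort=2),
    AIIdea("Rewrite legacy portal", business_value=3, implementation_effort=5),
]
for idea in sorted(ideas, key=priority_score, reverse=True):
    print(f"{idea.title}: {priority_score(idea):.2f}")
```

A real pipeline would weigh more inputs, but even a simple ratio makes the trade-off between value and effort explicit and comparable across ideas.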

By anchoring AI work in business outcomes from the start, the strategy pillar helps ensure the organization’s energy is spent on the work that matters most.

Architecture

Our architecture pillar defines how we help teams scale AI solutions without creating security gaps, compliance issues, or technical debt they’ll have to unwind later.

“The CoE introduces a framework to enable design reviews in the early development phase. We help make sure teams are choosing the right platforms and thinking about security and compliance from the beginning.”

Qingsu Wu, principal group product manager, Microsoft Digital

Before solutions move into broader use, our architecture team helps think through data readiness, platform alignment, and governance requirements. The goal isn’t to prescribe a single architecture, but to make sure foundational decisions won’t limit scale or create risk down the line. Often this work happens before development begins; other times it means making improvements after initial development, once the product or scenario has launched and is in use. We also track our efforts with measurable metrics like usage.

One common pitfall is that teams may gravitate toward the most flexible platforms with full control, without fully understanding the associated security and compliance implications. To address this, we publish clear guidance to help teams choose the right platform—one that strikes the appropriate balance between flexibility and the security and compliance effort required.

Our architecture pillar helps prevent that by reinforcing a set of common expectations. Teams still build locally and move fast, but they do so within a framework that supports reuse, interoperability, and responsible operation. That framework is built on enabling teams and employees to experiment within guardrails that keep our production systems safe.

“The CoE introduces a framework to enable design reviews in the early development phase,” says Qingsu Wu, a principal group product manager in Microsoft Digital. “We help make sure teams are choosing the right platforms and thinking about security and compliance from the beginning.”

Teams are encouraged to build on recommended platforms and services that support enterprise‑grade security, observability, and lifecycle management. This helps ensure solutions can be monitored, governed, and supported over time.

Security and compliance are never treated as downstream checkpoints. Architectural guidance reinforces the need to design with identity, access controls, auditability, and responsible AI principles from the start.

When solutions prove valuable, we look for opportunities to reuse architectural patterns, components, or services rather than rebuilding them in isolation. This reduces duplication and accelerates future work.

Roadmap

Our CoE roadmap team examines our employee experience in the context of our AI solutions and guides how we achieve the best possible experience throughout our AI projects. It focuses on how our employees will interact with AI. Getting the roadmap right ensures user experiences are cohesive and align with our broader employee experience goals.

We’ve recognized AI’s potential to impact how our employees get their work done.

Their experiences and satisfaction levels with AI services and tools are critical. Our roadmap pillar is designed to ensure that experiences across all these services and tools are complementary and cohesive.

We’re focusing on the open nature of AI interaction.

“We’re surfacing AI capabilities and information when the user needs them, according to their context,” Campbell says. “It makes the user experience and user interface for an AI service less important than how the service allows other applications or user interfaces to interact with it and harness its power.”

A key part of this approach is disciplined experimentation.

Rather than treating every idea as a long‑term commitment, the roadmap pillar helps teams validate value early. Our teams know when they’re in an experimental phase and when they’re expected to operationalize. This gives our leaders a more consistent view of progress and risk. The net result is that dependencies between teams surface earlier, when they’re easier to resolve.

Culture

Our culture pillar ensures that AI adoption across Microsoft Digital is intentional, responsible, and sustainable.

Culture underpins everything we do in the AI space. Ensuring that our employees can increase their AI skillsets and access guidance for using AI responsibly is critical to AI at Microsoft.

“We’re driving a shift from ad‑hoc AI usage to intentional, outcome‑driven adoption,” Khetan says. “That requires clarity, education, and shared expectations.”

In practice, that means the culture pillar defines how our teams are expected to adopt AI and integrate it into their work, not just what tools they can use.

Our culture team works with AI champions across the organization to translate enterprise AI priorities into local execution. Those champions act as two‑way conduits, bringing real‑world feedback and blockers back to the CoE and carrying guidance, standards, and learnings back to their teams.

Without this structure, AI adoption tends to fragment as teams experiment in isolation.

Our culture team has published training, recommended practices, and our shared learnings on next-generation AI capabilities. We work with individual business groups at Microsoft to determine the needs of all the disciplines across the organization. That work extends to groups as diverse as engineering, facilities and real estate, human resources, legal, sales, and marketing, among others. 

Responsible AI is embedded throughout that work.

The CoE reinforces responsible AI practices as part of everyday decision‑making—during design, experimentation, and scale. Teams are expected to understand not just what they’re building, but the implications of how they build it.

In the AI CoE, culture isn’t abstract. It shows up in how teams propose ideas, how they design solutions, and how they measure success.

Fostering agent innovation

The true value of the AI CoE is evident when strategy, architecture, roadmap, and culture come together around real work.

A clear example of that is how we addressed the rapid growth of AI agents across the organization.

A photo of Tiwari.

“That’s the core problem we’re trying to solve. In the past, admins had to go to multiple portals just to understand how many agents exist, and they all give different answers.”

Garima Tiwari, principal product manager, Microsoft Digital

Our teams were building agents in different platforms, for different scenarios, and at very different levels of maturity. That flexibility accelerated innovation, but it also made it difficult to answer basic questions.

  • How many agents exist today?
  • Which ones are in production?
  • Which ones touch sensitive data?

The strategy lens helped clarify what mattered most. Our goal wasn’t to inventory every experiment. It was to gain visibility into agents that were active, scaling, or relied on by others, and to ensure those agents aligned to business priorities and responsible AI expectations.

Architecture quickly followed.

As the CoE looked at how agents were built, we quickly discovered that information about agents was fragmented across tools. Different platforms showed different numbers. Ownership wasn’t always clear. And governance signals were hard to reconcile.

“That’s the core problem we’re trying to solve,” says Garima Tiwari, a principal product manager in Microsoft Digital leading our internal strategy and adoption of Agent 365. “In the past, admins had to go to multiple portals just to understand how many agents exist, and they all give different answers.”

This is where Agent 365—which we use to govern agents here at Microsoft—became a critical enabler.

Agent 365 brings together signals from multiple agent‑building platforms into a single, consolidated view. That visibility allows the CoE and administrators to understand agent inventory, ownership, lifecycle state, and governance posture in one place.

“Agent 365 is really about accurate inventory and observability,” Tiwari says. “It provides one number we can trust and a way to see how agents are behaving, who they’re interacting with, and whether they’re compliant.”
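To illustrate the kind of reconciliation a consolidated view enables, here is a hedged sketch of merging per-platform agent exports into one record per agent. The platform names, record fields, and `consolidate` helper are invented for illustration and don’t reflect Agent 365’s actual APIs:

```python
def consolidate(inventories: dict[str, list[dict]]) -> dict[str, dict]:
    """Merge per-platform agent lists into a single record per agent ID."""
    merged: dict[str, dict] = {}
    for platform, agents in inventories.items():
        for agent in agents:
            record = merged.setdefault(agent["id"], {"platforms": set()})
            record["platforms"].add(platform)
            # Keep the first non-empty owner we see; take the latest known state.
            record["owner"] = record.get("owner") or agent.get("owner")
            record["state"] = agent.get("state", record.get("state", "unknown"))
    return merged

# Illustrative exports: two platforms report the same agent under one ID.
inventories = {
    "copilot_studio": [{"id": "a-1", "owner": "teamA", "state": "production"}],
    "foundry": [
        {"id": "a-1", "state": "production"},
        {"id": "a-2", "owner": None, "state": "pilot"},
    ],
}
view = consolidate(inventories)
print(len(view))  # one trusted number: 2 agents, not 3 rows
```

The point of the sketch is the "one number we can trust": deduplicating by a stable agent identity turns conflicting per-portal counts into a single inventory with ownership and lifecycle state attached.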

That architectural clarity changed how decisions were made.

Instead of guessing what was safe to scale, the CoE could see which agents were production‑ready, which needed remediation, and which should remain in experimentation. Security, privacy, and compliance considerations moved to earlier in the lifecycle.

“We can’t scale what we don’t understand,” Wu says. “Agent 365 helps us see what’s actually running so we’re not scaling something blindly.”

The roadmap lens then brought structure to execution.

“What changed was the mindset. Teams started thinking about manageability, security, and scale much earlier, not after an agent was already deployed.”

Don Campbell, principal group technical program manager, Microsoft Digital

Rather than standardizing everything at once, the CoE helped teams sequence work. Some agents stayed in pilot. Others moved toward broader rollout, informed by architectural and governance signals surfaced through Agent 365.

Culture and enablement ran alongside that work.

Teams began factoring operational readiness into design decisions instead of treating governance as a final checkpoint. Agent 365 isn’t positioned as a control tool at the end of the process, but as part of building agents the right way from the start.

“What changed was the mindset,” Campbell says. “Teams started thinking about manageability, security, and scale much earlier, not after an agent was already deployed.”

The outcome wasn’t a single standardized solution.

It was a repeatable approach within a shared CoE framework, supported by platforms like Agent 365, that made scaling AI more visible, more manageable, and more intentional.

That’s what the AI CoE enables at Microsoft Digital.

Key takeaways

If you’re just starting to consider AI usage at your organization, or if you’re already creating a standardized approach to AI, consider the following:

  • Start with outcomes, not tools. AI work scales faster when teams align on the business problem first and select technology second.
  • Design for scale from day one. Early architectural decisions around data, security, and platforms determine whether solutions can grow—or need to be rebuilt.
  • Make experimentation disciplined. Clear paths from prototype to production help teams move fast without committing to ideas that haven’t proven value.
  • Treat governance as an enabler, not a gate. Visibility and manageability, supported by platforms like Agent 365, make it easier to scale AI responsibly.
  • Create shared accountability. Standard metrics and automated reporting turn AI activity into measurable progress.

The post Powering the technical veracity of AI at Microsoft with a Center of Excellence appeared first on Inside Track Blog.

Conditioning our unstructured data for AI at Microsoft http://approjects.co.za/?big=insidetrack/blog/conditioning-our-unstructured-data-for-ai-at-microsoft/ Thu, 09 Apr 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=23020 Anyone who has ever stumbled across an old SharePoint site or outdated shared folder at work knows firsthand how quickly documentation can fall out of date and become inaccurate. Humans can usually spot the signs of outdated information and exclude it when answering a question or addressing a work topic. But what happens when there’s […]

The post Conditioning our unstructured data for AI at Microsoft appeared first on Inside Track Blog.

Anyone who has ever stumbled across an old SharePoint site or outdated shared folder at work knows firsthand how quickly documentation can fall out of date and become inaccurate.

Humans can usually spot the signs of outdated information and exclude it when answering a question or addressing a work topic. But what happens when there’s no human in the loop?

At Microsoft, we’ve embraced the power and speed of agentic solutions across the enterprise. This means we’re at the forefront of developing and implementing innovative tools like the Employee Self-Service Agent, a chat-based solution that uses AI to address thousands of IT support issues and human resources (HR) queries every month—queries that used to be handled by humans. Early results from the tool show great promise for increased efficiency and time savings.

In developing tools like this agent, we were confronted with a challenge: How do we make sure all the unstructured data the tool was trained on is relevant and reliable?

Many organizations are facing this daunting task in the age of AI. Unlike structured data, which is well organized and more easily ingested by AI tools, the sprawling and unverified nature of unstructured data poses some tricky problems for agentic tool development. Tackling this challenge is often referred to as data conditioning.

Read on to see how we at Microsoft Digital—the company’s IT organization—are handling data conditioning across the company, and how you can follow our lead in your own organization.

How AI has changed the game

We already fundamentally understand that the power of AI and large language models has changed the game for many work tasks. The way employee support functions is no exception to this sweeping change.

A photo of Finney.

“A tool like the Employee Self-Service Agent doesn’t know if something is true or false—it only sees information it can use and present. That’s why stale or outdated information is such a risk, unless you manage it up front.”

David Finney, director of IT Service Management, Microsoft Digital

Instead of relying on human agents to answer employee questions or resolve issues, we now have AI agents trained on vast corpora of data that can find the answer to a complex question in seconds.

But in our drive to give these tools access to everything they might need, they sometimes end up consuming information that isn’t helpful.

“A tool like the Employee Self-Service Agent doesn’t know if something is true or false—it only sees information it can use and present,” says David Finney, director of IT Service Management. “That’s why stale or outdated information is such a risk, unless you manage it up front.”

Before AI, support teams didn’t need to worry as much about the buried issues with unstructured content because a human could generally spot them or filter them out manually. After we turned these tools loose, they began reading everything, including:

  • Older or hidden SharePoint content that humans would never find—but AI can
  • Large knowledge base articles with buried incorrect information
  • Region-specific content that’s not properly labeled

“For example, humans never saw the old, decommissioned SharePoint sites because they were automatically redirected,” says Kevin Verdeck, a senior IT service operations engineer. “But AI definitely could find them, and it surfaced ancient information that we didn’t even know was still out there.”

Data governance is the key

A major part of the solution to this problem is better governance. We had to get a handle on our data.

A photo of Cherel.

“We needed to determine the owners of the sites and then establish processes for reviewing content, updating it, and defining how it should be structured. I would highly encourage that our customers think about governance first when they are launching their own AI tools, because everything flows from it.”

Olivier Cherel, senior business process manager, Microsoft Digital

The first step was a massive cleanup effort, including removing decommissioned SharePoint sites and deleting references to retired programs and policies. The next step was making sure all content had ownership assigned to establish who would be maintaining it. This was followed by setting up schedules for regular content updates (lifecycle management).
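The ownership and lifecycle steps above lend themselves to automation. The following is a hypothetical sketch of such a check; the field names, review interval, and `needs_attention` helper are illustrative, not our actual tooling:

```python
from datetime import date, timedelta

# Flag content that has no assigned owner or is overdue for its scheduled review.
REVIEW_INTERVAL = timedelta(days=180)  # illustrative review cadence

def needs_attention(article: dict, today: date) -> list[str]:
    issues = []
    if not article.get("owner"):
        issues.append("no owner assigned")
    last = article.get("last_reviewed")
    if last is None or today - last > REVIEW_INTERVAL:
        issues.append("review overdue")
    return issues

articles = [
    {"title": "VPN setup", "owner": "net-team", "last_reviewed": date(2026, 3, 1)},
    {"title": "Old badge policy", "owner": None, "last_reviewed": date(2024, 1, 15)},
]
for a in articles:
    for issue in needs_attention(a, today=date(2026, 4, 9)):
        print(f"{a['title']}: {issue}")
```

Running a check like this on a schedule turns lifecycle management from a one-time cleanup into an ongoing process.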

Governance was the first priority for IT content, according to Olivier Cherel, a senior business process manager in Microsoft Digital.

“We had no governance in place for all the SharePoint sites, which were managed by the various IT teams,” Cherel says. “We needed to determine the owners of the sites and then establish processes for reviewing content, updating it, and defining how it should be structured. I would highly encourage that our customers think about governance first when they are launching their own AI tools, because everything flows from it.”

Content governance was also a huge challenge for other support areas, such as human resources. A coordinated approach was needed.

“HR content is vast, distributed across multiple SharePoint sites, and not everything has a clear owner,” says Shipra Gupta, an engineering PM lead in Human Resources who worked on the Employee Self-Service Agent project. “So, we collaborated with our content and People Operations teams to create a true content strategy: one source of truth, no duplication, with clear ownership and lifecycle management.”

Cherel observes that this process forces teams to think about their support content in a totally different way.

“People realize they need a new function on their team: content management,” he says. “You can’t simply rely on the knowledge found in the technicians’ heads anymore.”

Adding structure to the unstructured data

The simple truth is that part of what makes unstructured data so difficult for agentic AI tools to deal with is that it’s disorganized.

A photo of Gupta.

“Our HR Web content already had tagging for many policy documents, which helped us get started. But it wasn’t consistent across all content, so improved tagging became a big part of our governance effort.”

Shipra Gupta, engineering PM lead, Human Resources

AI works best with content that has as many of the following characteristics as possible:

  • Document structure, including:
    • Clear headers and sections
    • Page-level summaries
    • Ordered steps and lists
    • Explicit labels for processes
    • HTML tags (which AI can see, but humans can’t)
  • Structured metadata, including:
    • Region codes (e.g., US-only policies)
    • Device-specific tags
    • Secure device classification
    • Country-based hardware procurement policies and HR rules

This kind of formatting and metadata allows the AI tool to more clearly parse and sort the information, meaning its answers are going to have a much higher accuracy level (even if it might be a little slower to return them).

“A good example here is tagging,” Gupta says. “Our HR Web content already had tagging for many policy documents, which helped us get started. But it wasn’t consistent across all content, so improved tagging became a big part of our governance effort.”

Be sure that as part of your content review, you’re setting aside the time and resources to add this kind of structure to your unstructured data. The investment will pay off in the long run.

Using AI to help condition data for use

As AI tools grow more sophisticated, we’re using them to directly work on AI-related challenges. This includes using AI on the challenge of unstructured data itself.

“Right now, these efforts are primarily human-led, but we are applying AI to, for example, help write knowledge base articles,” Cherel says. “Also, we’re starting to use AI to determine where we have content gaps, and to analyze the feedback we’re getting on the tool itself. If we just rely on humans, it’s not going to scale. We need to leverage AI to stay on top of things and keep improving the tools.”

Essentially, the future of such technology is all about using AI to improve itself.

“We’re looking at building an agent to help validate content,” Finney says. “We can use it to check for outdated references, old processes, or abandoned terms that are no longer used. Essentially, we’ll have AI do a readiness check on the content that it is consuming.”

Ultimately, the better the data is conditioned, the more accurate and relevant the agent’s responses will be. And that will make the end user—the truly important human in the loop—much happier with the final outcome.

Key takeaways

We’ve highlighted some insights to keep in mind as you consider how to condition your own organization’s data for ingestion by AI tools:

  • Unstructured data becomes a business risk when AI is in the loop. AI agents consume everything they can access, including outdated, hidden, or conflicting content, making data conditioning a critical prerequisite for agentic solutions.
  • AI highlights content issues that were previously invisible. Decommissioned SharePoint sites, outdated policies, and region-specific content without proper labels all became visible after AI agents began scanning across systems.
  • Governance is a vital part of the conditioning process. Assigning clear content ownership and establishing lifecycle management are essential steps in ensuring the content being fed to AI tools is of high quality and is well managed.
  • Adding structure to data dramatically improves AI accuracy. Clear document formatting, consistent tagging, and rich metadata help AI agents return more relevant, reliable answers.
  • AI will increasingly be used to condition and validate the data it consumes. Microsoft is already exploring using AI to identify content gaps, analyze feedback, and flag outdated information, creating a continuous improvement loop that can scale faster than human review alone.

Microsoft CISO advice: The importance of a written AI safety plan http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-the-importance-of-a-written-ai-safety-plan/ Thu, 09 Apr 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=23016 Yonatan Zunger, CVP and Deputy CISO for Microsoft, has spent his career considering complex questions with security and privacy while building platform infrastructure and solutions. His experience underpins his advice on how to build a safety plan for working with AI. First and foremost, his advice is to have a written plan. “Make it an […]

The post Microsoft CISO advice: The importance of a written AI safety plan appeared first on Inside Track Blog.

Yonatan Zunger, CVP and Deputy CISO for Microsoft, has spent his career considering complex questions with security and privacy while building platform infrastructure and solutions. His experience underpins his advice on how to build a safety plan for working with AI. First and foremost, his advice is to have a written plan.

“Make it an expectation in your organization that people will create safety plans and have them for everything,” Zunger says. “People get so excited about having clarity in front of them that they end up making much more systematic, careful plans, and the rate of errors goes down dramatically.”

Watch this video to see Yonatan Zunger discuss his advice for creating an AI safety plan. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=H5reZ0uw0EA.)

Key takeaways

Here are questions and ideas to consider as you create a safety plan for your AI systems:

  • Define the problem. What problem are you trying to solve? A simple and clear problem statement is always a great starting point before building anything, including an AI agent.
  • Outline the solution. What is the basis of your solution? Can you explain your solution to an end user? What does a developer or administrative user of your solution need to know about what it is and does?
  • List the things that can go wrong. What can go wrong with your solution? Creating this list is the first step to figuring out how to deal with those issues.
  • Document your plan. What is your plan to address identified concerns? Identify the process you will follow when something goes wrong.
  • Draft your plan early and update it as your solution matures. Your safety plan can be as simple as a list or outline and should evolve as you prepare to build your solution.
  • Get feedback and buy-in. When you review the plan with stakeholders and leaders in your team and organization, you may uncover risks or issues you had not thought of. You also build awareness and agreement on what to do when something goes wrong.
  • Make a template and build its use into your processes. This tip is for anyone who leads a team or influences process development. Encourage using a safety template in all your projects to bring clarity and structure to how you work with AI.

Harnessing AI: How a data council is powering our unified data strategy at Microsoft http://approjects.co.za/?big=insidetrack/blog/harnessing-ai-how-a-data-council-is-powering-our-unified-data-strategy-at-microsoft/ Thu, 09 Apr 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=23030 Information technology is an ever-evolving landscape. Artificial Intelligence is accelerating that evolution, providing employees with unprecedented access to information and insights. Data-driven decision making has never been more critical for businesses to achieve their goals. In light of this priority, we have established a Microsoft Digital Data Council to help accelerate our companywide AI-powered transformation. […]

The post Harnessing AI: How a data council is powering our unified data strategy at Microsoft appeared first on Inside Track Blog.

Information technology is an ever-evolving landscape. Artificial intelligence is accelerating that evolution, providing employees with unprecedented access to information and insights. Data-driven decision making has never been more critical for businesses to achieve their goals.

In light of this priority, we have established a Microsoft Digital Data Council to help accelerate our companywide AI-powered transformation.

Our data council is a cross-functional team with representation from multiple domains within Microsoft, including Microsoft Digital, the company’s IT organization; Corporate, External, and Legal Affairs (CELA); and Finance.

A photo of Tripathi.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation. It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Naval Tripathi, principal engineering manager, Microsoft Digital

Our data council’s mission is to drive transformative business impact by establishing a cohesive data strategy across Microsoft Digital, empowering interconnected analytics and AI at scale. Our vision is to guide our organization toward Frontier Firm maturity through a clear blueprint for high-quality, reliable, AI-ready data delivered on trusted, scalable platforms.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation,” says Naval Tripathi, principal engineering manager in Microsoft Digital. “It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Our evolving data strategy

Over the past two decades, we at Microsoft—along with other large enterprises—have continuously evolved our data strategies in search of the right balance between control and agility. Early approaches were highly decentralized, with different teams owning and managing their own data assets. While this enabled local optimization, it also resulted in inconsistent quality and limited enterprise-wide insight.

Our subsequent shift toward centralized data platforms brought much-needed standardization, security, and scalability. However, as data platforms grew more sophisticated, ownership often drifted away from the business domains closest to the data, slowing responsiveness and diluting accountability.

Today, we and other leading companies are embracing a more balanced, federated approach, often described as a data mesh. Rather than forcing all our data into a single centralized system or allowing unchecked decentralization, the data mesh formalizes domain ownership while embedding governance, quality, and interoperability directly into shared platforms.

With this approach, our domain teams publish data as well-defined, discoverable products, while common standards for security, metadata, and compliance are enforced through automation rather than manual processes. This model preserves enterprise trust and consistency without sacrificing speed or autonomy.

By adopting a data mesh mindset, we can scale analytics and AI more effectively across the organization while still keeping ownership closely connected to the business focus. The result is a system that supports innovation at the edges, strong governance at the core, and seamless collaboration across domains, enabling the transformation of data from a technical asset to a strategic, enterprise-wide capability.

Quality, accessibility, and governance

To scale enterprise data and AI, organizations must first ensure their data is trusted, discoverable, and responsibly governed. At Microsoft Digital, our data strategy is designed to create data foundations that power intelligent applications and effective decision making across the company.


“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools. Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Miguel Uribe, principal PM manager, Microsoft Digital

By implementing a data mesh strategy at scale, we aim to unlock valuable data insights and analytics, enabling advanced AI scenarios. Our data council focuses on three core dimensions that make AI-ready data possible:

  • Quality: Making sure enterprise data is reliable and complete
  • Accessibility: Enabling secure and discoverable access to data
  • Governance: Protecting and managing our data responsibly

Together, these dimensions form the foundation for scalable innovation and AI-powered data use. They connect data silos and ensure consistent, high‑quality access across the enterprise—enabling both humans and AI systems to work from the same trusted data foundation. As AI use cases mature, this foundation allows AI agents to retrieve and reason over data through enterprise endpoints, while supporting advanced analytics, data science, and broader technology scenarios.

“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools,” says Miguel Uribe, a principal PM manager in Microsoft Digital. “Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Quality

AI-ready data is available, complete, accurate, and high-quality. By adopting this standard, our data scientists, engineers, and even our AI agents are better able to locate, process, and govern the information needed to drive our organization and maximize AI efficiencies.

Using Microsoft Purview, our data council monitors data attributes to ensure fidelity and tracks parameters to enforce standards for accuracy and completeness.

Accessibility

Ensuring that our employees get access to the information they need while prioritizing security is a foundational element of our enterprise data strategy. Microsoft Fabric allows us to unify our organization’s siloed data in a single “mesh” that enables advanced analytics, data science, data visualization and other connected scenarios.

Microsoft Purview then gives us the ability to democratize that data responsibly. By implementing a data mesh architecture, our employees can work confidently, unencumbered by siloed or inaccessible data, and with the assurance that the data they’re working with is secure.

A graphic shows how the data mesh architecture allows employees to access data they need, with platform services and data management zones surrounding this architecture.
The data mesh architecture enables our employees to do their work efficiently while preventing the data they’re working on from becoming siloed.

The data mesh connects and distributes data products across domains, enabling shared data access and compute while scaling beyond centralized architectures.

Platform services are standardized blueprints that embed security, interoperability, policies, standards, and core capabilities—providing guardrails that enable speed without fragmentation.

Data management zones provide centralized governance capabilities for policy enforcement, lineage, observability, compliance, and enterprise-wide trust.  

Governance

As organizations scale AI capabilities, strong governance becomes essential to ensure security, compliance, and ethical data use. Data governance—which includes establishing data policies, ensuring data privacy and security, and promoting ethical AI usage—is critical, as is compliance with regulations like the General Data Protection Regulation (GDPR) and the Consumer Data Protection Act (CDPA).

However, governance is not only a technical capability; it’s also a cultural commitment.

Responsible data use must be embedded into the way teams manage data and build AI solutions. Through Microsoft Purview, we implemented an end-to-end governance framework that automates the discovery, classification, and protection of sensitive data across the enterprise data landscape.

This unified approach allows teams to innovate confidently, knowing that the data powering their insights and AI systems is trusted and protected, as well as responsibly managed.

“AI systems are only as reliable as the data that powers them,” Uribe says. “By investing in trusted and well-managed data, we accelerate not only the adoption of AI tools but our ability to generate meaningful insights and intelligent outcomes.”

The data catalog as the discovery layer

By serving as a common discovery layer for humans and AI, the data catalog ensures that governance translates directly into speed, accuracy, and trust at scale.

A unified data strategy only succeeds if both people and AI systems can consistently find the right data. At Microsoft, this is enabled by our enterprise data catalog, which operationalizes the standards set by our data council. 

For business users, the catalog provides intuitive search, ownership transparency, and trust signals—enabling confident self‑service analytics. For AI agents, the same catalog exposes machine‑readable metadata, allowing agents to programmatically discover canonical datasets, validate schema and freshness, and respect governance constraints.
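To make the agent-facing side of this concrete, here is a minimal sketch of how an AI agent might filter machine-readable catalog metadata for canonical, fresh, governance-permitted datasets. The catalog fields (`is_canonical`, `last_refreshed`, `classification`, `owner`) and the sample entries are illustrative assumptions, not the actual catalog schema or API described in the article:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical machine-readable catalog entries; field names and values
# are illustrative, not the real enterprise catalog schema.
CATALOG = [
    {"name": "sales_orders", "is_canonical": True,
     "last_refreshed": datetime.now(timezone.utc) - timedelta(hours=2),
     "classification": "general", "owner": "commerce-domain"},
    {"name": "sales_orders_raw", "is_canonical": False,
     "last_refreshed": datetime.now(timezone.utc) - timedelta(days=9),
     "classification": "confidential", "owner": "commerce-domain"},
]

def discover(catalog, max_age_hours, allowed_classifications):
    """Return canonical datasets that are fresh enough and that the
    caller's governance constraints permit it to read."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return [
        d["name"] for d in catalog
        if d["is_canonical"]
        and d["last_refreshed"] >= cutoff
        and d["classification"] in allowed_classifications
    ]

print(discover(CATALOG, max_age_hours=24, allowed_classifications={"general"}))
# prints ['sales_orders']
```

The point of the sketch is the shape of the contract: because freshness, canonical status, and classification are exposed as structured metadata rather than prose, an agent can validate them programmatically before touching the data.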

Our role as Customer Zero

In Microsoft Digital, we operate as Customer Zero for the company’s enterprise solutions, encountering problems first so that our customers don’t have to.

That means we do more than adopt new products early. We deploy them at enterprise-scale, operate them under real‑world constraints, and hold them to the same standards our customers expect. The result is more resilient, ready‑to‑use solutions and a higher quality bar for every enterprise customer we serve.


“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution. That’s how enterprise readiness becomes real.”

Diego Baccino, principal software engineering manager, Microsoft Digital

Our data council embodies this Customer Zero mindset through its Enterprise Readiness initiative. By engaging product engineering as a unified enterprise voice, the council drives strategic conversations that surface operational blockers, influence roadmap prioritization, and ensure new and existing data solutions are truly ready for enterprise use.

These learnings are then shared broadly across Microsoft Digital to accelerate adoption, reduce duplication, and scale proven patterns across teams.

“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution,” says Diego Baccino, a principal software engineering manager in Microsoft Digital and a member of the council. “That’s how enterprise readiness becomes real.”

This work is deeply integrated with our AI Center of Excellence (CoE), where Customer Zero principles are applied to accelerate AI outcomes responsibly. Together, the AI CoE and the data council focus on improving data documentation and quality—foundational capabilities that are required to make AI feasible, trustworthy, and scalable across the enterprise.

By grounding AI innovation in measurable data quality and governance standards, Microsoft Digital ensures that experimentation can safely mature into production‑ready solutions. The partnership between our data council, our AI CoE, and our Responsible AI (RAI) Council is essential to our broader data and AI strategy.

“AI readiness isn’t aspirational—it’s operational,” Baccino says. “By measuring the health of our data, setting clear quality baselines, and using those signals to guide product and platform decisions, we turn data into a strategic asset and AI into a repeatable capability.”

Together, these teams exemplify what it means to be Customer Zero: Transforming enterprise experience into action, governance into acceleration, and data into durable competitive advantage.

Advancing our data culture

Our data council plays a pivotal role in advancing the organization’s transition from data literacy to enterprise data and AI capability. In conjunction with our AI CoE, it creates curricula and sponsors learning pathways, operational practices, and community programs to equip our employees with the skills and mindset required to thrive in a data- and AI-centric world.

While early efforts focused on improving data literacy, our data council’s mission has evolved to enable data and AI capability at scale together with our AI CoE—where employees not only understand data but can effectively apply it to build, operate, and govern intelligent solutions.

“Our focus is not just teaching our teams about data. It is enabling employees to apply data to create AI-driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Miguel Uribe, principal PM manager, Microsoft Digital

Our curriculum includes high-level courses on data concepts, applications, and extensibility of AI tools like Microsoft 365 Copilot, as well as data products like Microsoft Purview and Microsoft Fabric.

By facilitating AI and data training, offering internally focused data and AI certifications, and fostering internal community engagement, our council ensures that employees develop the capabilities required to responsibly build and operate AI-powered solutions. Achieving data and AI certifications not only promotes career development through improved data literacy, it also enhances the broader data-driven culture within our organization.

“We recognize that AI capability is built when data skills are applied directly to real AI scenarios and business outcomes—not when learning exists in isolation,” Uribe says. “Our focus is not just teaching our teams about data; it is enabling employees to apply data to create AI‑driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Lessons learned

Our data council was created to develop and execute a cohesive data strategy across Microsoft Digital and to foster a strong data culture within our organization. Over time, several critical lessons have emerged.

Executive sponsorship enables transformation

Executive sponsorship is a key element to ensure implementation and adoption of a data strategy. Our leaders are committed to delivering and sustaining a robust data strategy and culture and have been effective champions of the council’s work.

“Leadership provides support and reinforcement of the council’s mission, as well as guidance and clarity related to diverse organizational priorities,” Baccino says.

Cross-functional collaboration accelerates impact

Our council’s work has also benefited from the diverse representation offered by different disciplines across our organization. Embracing diverse perspectives and understanding various organizational priorities is critical to implementing a successful data strategy and culture in a large and complex organization like Microsoft Digital.

Modern platforms allow for scalable AI productivity

Technology and architecture also play a critical role in enabling enterprise data and AI capability. Platforms like Microsoft Purview and Microsoft Fabric provide the governance, discovery, and analytics infrastructure required to create trusted, AI-ready data ecosystems.

Combined with strong leadership support and community engagement, these platforms allow our organization to move beyond isolated data projects toward connected, enterprise-wide intelligence.

As our organization continues to evolve, our data council’s strategic work and valuable insights will be crucial in shaping the future of data-driven decision making and AI transformation at Microsoft.

Key takeaways

Here are some things to keep in mind as you contemplate forming a data council to help you manage and scale AI impacts responsibly at your own organization:

  • A data mesh strikes the balance enterprises have been chasing. By formalizing domain ownership while enforcing standards through shared platforms, you avoid both chaotic decentralization and slow, over-centralized control.
  • Governance is an accelerator when it’s automated and embedded. Using platforms like Microsoft Purview and Microsoft Fabric, governance shifts from a manual gatekeeping function to a built‑in capability that enables faster, trusted analytics and AI.
  • AI systems are only as strong as their discovery layer. A unified enterprise data catalog allows both people and AI agents to find, trust, and use data consistently—turning standards into operational speed.
  • Customer Zero turns theory into enterprise‑ready execution. By operating its own data and AI platforms at scale, Microsoft Digital provides real telemetry and practical feedback that directly shapes product readiness.
  • Building AI capability is a cultural effort, not just a technical one. Our data council’s focus on applied learning, certification, and real-world AI scenarios ensures data skills translate into durable business outcomes.
  • AI scale exposes the cost of fragmented data ownership. A data council cuts through silos by aligning priorities, resolving tradeoffs, and concentrating investment on the data assets that matter most for AI impact.
  • Shared metrics create shared ownership. Publishing data quality and AI‑readiness scores at the leadership level reinforces accountability and positions data as a core enterprise asset.

The post Harnessing AI: How a data council is powering our unified data strategy at Microsoft appeared first on Inside Track Blog.

]]>
23030
Microsoft CISO advice: The most important thing to know about securing AI http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-the-most-important-thing-to-know-about-securing-ai/ Thu, 02 Apr 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22863 Using AI comes with inherent risks. In a recent video, Yonatan Zunger, CVP and deputy CISO for Microsoft, suggests thinking about AI as a new intern will help you naturally take the right approach to AI security.  Zunger and his team focus on AI safety and security. They consider all the different ways anything involving […]

The post Microsoft CISO advice: The most important thing to know about securing AI appeared first on Inside Track Blog.

]]>
Using AI comes with inherent risks. In a recent video, Yonatan Zunger, CVP and deputy CISO for Microsoft, suggests that thinking about AI as a new intern will help you naturally take the right approach to AI security.

Zunger and his team focus on AI safety and security. They consider all the different ways anything involving working with AI can go wrong.

“An important thing to know about AI is that AIs make mistakes,” Zunger says. “You already know how to work with systems that make mistakes and get tricked.”

Watch this video to see Yonatan Zunger discuss his advice for working with AI. (For a transcript, please view the video on YouTube: https://youtu.be/b1x6gDbSWVY.)

The post Microsoft CISO advice: The most important thing to know about securing AI appeared first on Inside Track Blog.

]]>
22863
Deploying Microsoft Baseline Security Mode at Microsoft: Our virtuous learning cycle http://approjects.co.za/?big=insidetrack/blog/deploying-microsoft-baseline-security-mode-at-microsoft-our-virtuous-learning-cycle/ Thu, 26 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22829 The enterprise security frontier isn’t just evolving. It’s accelerating beyond the limits of traditional security models. AI acceleration, cloud adoption, and rapid growth of enterprise apps have dramatically expanded the attack surface. Every new app introduces a new identity. Every identity carries permissions. Over time, those permissions accumulate, often without clear ownership or regular review. […]

The post Deploying Microsoft Baseline Security Mode at Microsoft: Our virtuous learning cycle appeared first on Inside Track Blog.

]]>
The enterprise security frontier isn’t just evolving. It’s accelerating beyond the limits of traditional security models.

AI acceleration, cloud adoption, and rapid growth of enterprise apps have dramatically expanded the attack surface. Every new app introduces a new identity. Every identity carries permissions. Over time, those permissions accumulate, often without clear ownership or regular review.


“An app is another form of identity. In a cloud-first, Zero Trust world, identity becomes the primary security perimeter, and access is governed by the principle of least privilege. Whether it is a user, an app, or an agent, when permissions are overly broad or elevated the blast radius expands dramatically, increasing risk exponentially.”

B. Ganti, principal architect, Microsoft Digital

Inside Microsoft Digital—the company’s IT organization—we recognized this early. Many of our highest‑risk security scenarios didn’t start with malware or phishing. They started with access. Specifically, apps running with permissions beyond what they required.

“An app is another form of identity,” says B. Ganti, principal architect in Microsoft Digital. “In a cloud-first, Zero Trust world, identity becomes the primary security perimeter, and access is governed by the principle of least privilege. Whether it is a user, an app, or an agent, when permissions are overly broad or elevated the blast radius expands dramatically, increasing risk exponentially.”

Traditional security approaches such as periodic reviews, best‑practice guidance, and point‑in‑time hardening weren’t enough in an environment that changes daily. Configurations drift, new apps appear, and risk grows quietly in places that are hard to see at scale.

That reality forced a mindset shift internally here at Microsoft. Security couldn’t be optional. It couldn’t be advisory. And it couldn’t be static.

Our team operates one of the largest enterprise environments in the world, with tens of thousands of apps and a culture built on self‑service and autonomy. That scale drives innovation, but it also amplifies risk.

Our application identities became one of the most complex governance challenges we faced. Our ownership wasn’t always clear. Our permissions were often granted broadly to avoid disruption. And once approved, access rarely came under scrutiny again.

“As a self‑service organization, we empower people to move fast,” Ganti says. “But that also means apps get created, permissions get granted, and not everyone always remembers why.”

The rise of AI‑powered apps and agents—often requiring access to large volumes of data—increased our risk further.


“We’re using Microsoft Baseline Security Mode to move security from guidance to enforcement. It establishes secure‑by‑default configurations that scale across our environment, so teams can innovate quickly without inheriting unnecessary risk.”

Brian Fielder, vice president, Microsoft Digital

We needed a system to reduce that risk systematically, not one app at a time.

Microsoft Baseline Security Mode (BSM) became that system—a prescriptive, enforceable baseline that defines what “secure” means and keeps it that way.

“We’re using Microsoft Baseline Security Mode to move security from guidance to enforcement,” says Brian Fielder, vice president of Microsoft Digital. “It establishes secure‑by‑default configurations that scale across our environment, so teams can innovate quickly without inheriting unnecessary risk.”

Defining Microsoft Baseline Security Mode

BSM is more than just a checklist of recommended settings. It’s an enforced security baseline built directly into the Microsoft 365 admin center, designed to reduce attack surface by default across core Microsoft 365 workloads.

It was developed and then deployed internally at Microsoft, with our team in Microsoft Digital serving as a close design and deployment partner throughout the process.


“The settings in the Microsoft Baseline Security Mode were informed by years of experience in running our planet-scale services, and by analyzing historical security incidents across Microsoft to harden the security posture of tenants. The team identified concrete security settings that would prevent or significantly reduce known security vulnerabilities.”

Adriana Wood, principal product manager, Microsoft 365 security

At a technical level, BSM establishes a minimum required security posture by applying Microsoft‑managed policies and configuration states across services including Exchange Online, SharePoint Online, OneDrive, Teams, and Entra ID. The focus is on eliminating common misconfigurations, rather than theoretical or edge‑case risks.

“The settings in the Microsoft Baseline Security Mode were informed by years of experience in running our planet-scale services, and by analyzing historical security incidents across Microsoft to harden the security posture of tenants,” says Adriana Wood, a principal product manager for Microsoft 365 security. “The team identified concrete security settings that would prevent or significantly reduce known security vulnerabilities. The resulting mitigation controls were implemented and validated in Microsoft’s enterprise tenant, with Microsoft Digital evaluating operational impact, rollout characteristics, and failure modes before making it more broadly available to our customers.”

Legacy baselines rely on documentation and manual implementation. Administrators interpret guidance, apply settings where feasible, and revisit them periodically. In dynamic cloud environments, that model breaks down fast. Configurations drift, exceptions accumulate, and security degrades.


“Before enforcement, administrators can use reporting and simulation tools to understand how a baseline will affect users, apps, and workflows. That visibility allows teams to identify noncompliant assets, prioritize remediation by risk, and avoid unexpected disruptions.”

Keith Bunge, principal software engineer, Microsoft Digital

BSM replaces that approach with policy‑driven enforcement.

Now our controls are applied consistently across the tenant and continuously validated. When our configurations fall out of compliance, our risk surfaces immediately—it’s not discovered months later in an audit. The model is simple: get clean, stay clean.

Another key capability of BSM is impact awareness.

“Before enforcement, administrators can use reporting and simulation tools to understand how a baseline will affect users, apps, and workflows,” says Keith Bunge, a principal software engineer in Microsoft Digital. “That visibility allows teams to identify noncompliant assets, prioritize remediation by risk, and avoid unexpected disruptions. Our team in Microsoft Digital partnered closely with the product group to ensure these capabilities were practical for real enterprise deployments, not just greenfield environments.”

BSM is also not static.

The baseline evolves on a regular cadence to reflect changes in the threat landscape, new Microsoft 365 capabilities, and lessons learned from operating at scale.

From our perspective, BSM is not just a feature. It’s a security operating model. It shifts the default from “secure if configured correctly” to “secure by default.” Security decisions move out of individual teams and into a consistent, centrally enforced baseline. The question is no longer whether a control should be applied, but whether an exception is truly necessary—and how the associated risk will be mitigated.

That shift is what makes BSM sustainable at scale. And it’s why apps—where identities, permissions, and data access converge—became the next focus area for us in Microsoft Digital.

Addressing apps and high-risk surfaces

When we evaluated risk across our environment, one pattern was clear: Our apps represented both our most concentrated and least governed attack surface.

Apps are identities. They authenticate. They’re granted permissions. And unlike human users, they often operate continuously, without reassessment or visibility.

In a large, self‑service environment like ours, apps are created constantly by engineering teams, business groups, and automation workflows. Over time, many of those apps accumulated permissions beyond what they actually needed, particularly within Microsoft Graph. Our delegated permissions were especially risky, because they allow apps to act on our employees’ behalf at machine speed across massive data sets.

“As a user, I might not know where all my data lives,” Ganti says. “But an app with delegated permissions doesn’t have that limitation. It can search everything, everywhere, all at once.”

The challenge wasn’t just volume—it was inconsistency.

Our ownership was often unclear. Our permission reviews were infrequent or manual. And once we granted elevated access, we had few systemic controls in place requiring it to be revisited.

Microsoft Baseline Security Mode addresses this directly by treating apps explicitly as identities that must conform to least‑privilege principles.

We started with visibility. We inventoried apps and analyzed permission scopes, authentication models, and potential blast radius. Our apps with broad Microsoft Graph permissions, access to large volumes of unstructured data, or unclear ownership were prioritized. In some cases, we reduced permissions to more granular scopes. In others, we rearchitected apps to use delegated access more safely—or we retired them altogether.

This work was intentionally structured as a burndown, not a one‑time cleanup.

Removing our excess permissions was only half the equation. Preventing them from coming back was just as critical. BSM introduced guardrails earlier in the app lifecycle, to surface and control elevated permission requests before they reached production. New or updated apps requesting high‑risk permissions now trigger consistent review, and in many cases are blocked outright unless they meet strict criteria.

Moving from ‘get clean’ to ‘stay clean’

Reducing risk once is hard. Keeping it reduced is harder.

After our initial application burndown, we quickly learned that cleanup alone wouldn’t scale. Even as we reduced permissions and remediated high‑risk apps, new apps continued to appear. Existing apps evolved, teams changed, and without structural controls, the same risks would inevitably return.

BSM enabled us to shift from remediation to sustainability.

It started with visibility.

We needed a reliable way to detect when apps drifted out of compliance. That meant continuously monitoring permission changes, new consent grants, and scope expansions across our tenant. Instead of periodic reviews, we moved to continuous validation tied directly to the baseline.
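The drift-detection pattern described above can be sketched in a few lines: diff each app’s current consent grants against its approved baseline and surface anything that was never granted. The app IDs, scope names, and snapshot format below are illustrative assumptions, not real Microsoft Graph calls or BSM internals:

```python
# Hypothetical permission snapshots keyed by app ID; scope names are
# illustrative examples, not output from a real tenant.
BASELINE = {
    "app-001": {"User.Read"},
    "app-002": {"Mail.Read"},
}

CURRENT = {
    "app-001": {"User.Read", "Files.Read.All"},   # scope expansion
    "app-002": {"Mail.Read"},                     # still compliant
    "app-003": {"Directory.Read.All"},            # new, unreviewed app
}

def detect_drift(baseline, current):
    """Return, per app, scopes present now that the approved baseline
    never granted. An app absent from the baseline is pure drift."""
    drift = {}
    for app_id, scopes in current.items():
        added = scopes - baseline.get(app_id, set())
        if added:
            drift[app_id] = sorted(added)
    return drift

print(detect_drift(BASELINE, CURRENT))
# prints {'app-001': ['Files.Read.All'], 'app-003': ['Directory.Read.All']}
```

Run continuously against fresh snapshots, a check like this turns periodic review into the always-on validation the baseline model depends on.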

Next came risk‑based prioritization.

Not every noncompliance carries equal impact. Our apps with broad Microsoft Graph permissions, access to large volumes of data, or unclear ownership were surfaced first. This ensured our security teams focused on material risk, rather than treating every deviation as equal.
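A simple scoring sketch shows how the three signals named above—broad permissions, data volume, and unclear ownership—could be combined to rank remediation work. The weights, scope names, and app records are invented for illustration; they are not the scoring model used internally at Microsoft:

```python
# Illustrative risk model; weights and high-risk scope list are assumptions.
HIGH_RISK_SCOPES = {"Directory.Read.All", "Files.Read.All", "Mail.Read"}

def risk_score(app):
    """Higher score = remediate sooner."""
    score = sum(3 for s in app["scopes"] if s in HIGH_RISK_SCOPES)
    score += 2 if app["data_volume"] == "large" else 0
    score += 2 if app["owner"] is None else 0  # unclear ownership
    return score

apps = [
    {"id": "app-001", "scopes": {"User.Read"},
     "data_volume": "small", "owner": "team-a"},
    {"id": "app-002", "scopes": {"Files.Read.All", "Mail.Read"},
     "data_volume": "large", "owner": None},
]

# Surface the riskiest apps first instead of treating every deviation equally.
for app in sorted(apps, key=risk_score, reverse=True):
    print(app["id"], risk_score(app))
```

Even a crude model like this captures the operational point: a broadly permissioned, unowned app touching large data sets sorts to the top of the queue, while a compliant single-scope app drops out of view.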

It was equally important for us to control how new risk entered the system.

BSM introduces guardrails earlier in the application lifecycle. Our elevated permission requests are surfaced sooner and reviewed more consistently. In many cases, high‑risk permissions are blocked by default unless clear justification and mitigation are in place. Known‑bad patterns are stopped before our teams build or update apps.

Over time, this enforcement model fundamentally changed the operating posture.

Instead of recurring cleanup campaigns, we moved to continuous alignment. Our environment stays closer to the baseline by default. Our deviations are treated as exceptions that require explicit action, not silent drift.

This “stay clean” capability also reduced operational overhead.

As enforcement and validation moved into Microsoft Baseline Security Mode, we retired custom scripts, dashboards, and manual review processes that were difficult to maintain at scale. Our baseline became the source of truth for application security posture, not a snapshot taken after the fact.

Most importantly, we proved that BSM could scale.

“This isn’t limited to Microsoft 365. This is Microsoft, and it expands over time as more services come into scope.”

Jeff McDowell, principal program manager, OneDrive and SharePoint product group

By combining continuous validation, risk‑based prioritization, and enforced guardrails, we established a repeatable model for sustaining security improvements over time.

That model now serves as our foundation for extending BSM to additional workloads and security surfaces across the enterprise.

“This isn’t limited to Microsoft 365,” says Jeff McDowell, a principal program manager in the OneDrive and SharePoint product group. “This is Microsoft, and it expands over time as more services come into scope.”

Operationalizing Microsoft Baseline Security Mode

Defining a baseline is only the first step. Making it work day‑to‑day is the real challenge.

For us in Microsoft Digital, operationalizing BSM meant embedding it directly into how we run security. That required clear ownership, repeatable processes, and tight integration with our existing workflows.

Governance came first.

BSM creates a clear line between what is centrally enforced and what individual teams can influence. The baseline is owned and managed centrally to ensure consistency across the tenant. Our application owners and engineering teams still make design decisions, but within defined guardrails aligned to enterprise risk tolerance.

This clarity reduces friction.

Instead of debating security settings app by app, our teams start from a shared default. Our security conversations shift away from “Can we make an exception?” to “How do we meet the baseline with the least disruption?”

Operationally, BSM is integrated into our application lifecycle.

New apps are evaluated against baseline requirements early, before permissions are broadly granted or dependencies are established. Changes to existing apps, such as new permission requests or expanded scopes, are surfaced automatically and reviewed in context, rather than discovered months later during audits.

In an environment where apps are constantly being created, updated, and retired, automation is essential. Without policy‑driven enforcement, our security teams would be managing a perpetual backlog of reviews. BSM allows us to focus on true exceptions instead of revalidating the baseline itself.

That baseline is also embedded into our ongoing operations.

Our security posture is monitored continuously, not through periodic snapshots. When our configurations drift or new risks appear, we identify them early and address them while the blast radius is still small. Over time, this reduces both our operational effort and incident response overhead.

Perhaps our most important change was cultural.

BSM normalizes the idea that security defaults are foundational. Our teams still innovate and move quickly—but they do so in an environment where secure is expected, enforced, and sustained.

Embracing the feedback loop as Customer Zero

From the start, our team in Microsoft Digital deployed Microsoft Baseline Security Mode as Customer Zero: We applied early versions in our live, large‑scale enterprise environment, where we fed our real‑world learnings back to the product group. That feedback loop became central to how the platform evolved.

Running BSM at Microsoft scale quickly exposed challenges that don’t appear in smaller tenants. Visibility was one of the first. With thousands of apps and constantly changing permissions, it was difficult to pinpoint which apps violated least‑privilege principles and where security teams should focus first.

Those gaps directly shaped the product. Reporting and analytics were refined to better surface elevated permissions, risky scopes, and noncompliant apps, helping teams move from investigation to action more quickly.

Scalability was another critical lesson.

Controls that worked for dozens of apps didn’t automatically work for thousands. Our team needed policies that were opinionated, enforceable, and operationally sustainable without constant adjustment. That pushed BSM toward clearer defaults and stronger enforcement boundaries.

“What made the collaboration work is that Microsoft Digital was deploying this in a real tenant with real consequences,” Wood says. “Their feedback helped us understand what enterprises actually need to adopt these controls successfully, not just what looks good on paper.”

Over time, this became a virtuous cycle. Our team surfaced friction and risk through deployment. The product group translated those insights into product improvements. We then adopted those same improvements to replace custom tooling and manual processes.

For customers, this matters. The controls in BSM are shaped by operational reality, tested under scale and refined so other organizations don’t have to learn the same lessons the hard way.

What’s next for Microsoft Baseline Security Mode

Future iterations of BSM will expand coverage beyond traditional collaboration services to additional platforms and services, while maintaining the same opinionated approach. The goal is not to restrict environments indiscriminately, but to ensure new capabilities are introduced with security baked in from the start.

As compliance requirements grow more complex and more global, organizations need a consistent, defensible security baseline. BSM provides a Microsoft‑managed standard informed by real‑world attack patterns and enterprise deployment realities.

Controls evolve. Scope expands. Feedback loops remain active. As new risks emerge, the baseline adapts, without requiring organizations to redefine their security posture from scratch.

It’s a foundation designed to support whatever comes next.

Key takeaways

If you’re ready to strengthen your organization’s security posture with Microsoft Baseline Security Mode, consider these immediate actions:

  • Establish clear ownership. Assign responsibility for baseline security management to ensure consistency and accountability.
  • Implement repeatable processes. Develop standardized procedures to evaluate and enforce baseline requirements throughout the app lifecycle.
  • Integrate with existing workflows. Embed security controls into daily operations to reduce friction and streamline compliance.
  • Prioritize automation and monitoring. Use automated enforcement and continuous validation for early risk detection and response.
  • Foster a security-first culture. Normalize secure defaults and encourage teams to innovate within defined guardrails.
  • Design for evolution. Design your baseline to adapt as new services, platforms, and compliance needs arise.

The post Deploying Microsoft Baseline Security Mode at Microsoft: Our virtuous learning cycle appeared first on Inside Track Blog.

]]>
22829
Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI http://approjects.co.za/?big=insidetrack/blog/accelerating-transformation-how-were-reshaping-microsoft-with-continuous-improvement-and-ai/ Thu, 26 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20297 Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers. Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, […]

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers.

Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, is seizing this moment by reinventing processes for agentic workflows powered by continuous improvement (CI).

We believe that AI-powered agents, Microsoft 365 Copilot, and human ambition are the key ingredients for unlocking opportunity across every industry.

A photo of Laves.

“Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

David Laves, director of business programs, Microsoft Digital

By combining our AI capabilities with continuous improvement, we’re executing initiatives that increase our productivity and improve our performance. We’re forging a new path for how companies operate in the era of AI.

Welcome to the age of AI-empowered continuous improvement.

Our vision for continuous improvement, turbo-charged by AI

At Microsoft Digital, we’re embracing continuous improvement to unlock greater operational excellence and better employee experiences.

“One of the main tenets of our culture at Microsoft is a growth mindset, and that involves experimentation and curiosity,” says David Laves, director of business programs within Microsoft Digital. “Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

Our capacity to drive process improvements has been crucial to our AI transformation as a company. We’ve adopted a “CI before AI” approach to ensure that we don’t end up automating inefficient processes. By engaging in activities that focus on continuous improvement, our teams can better identify which problems to address with AI and prioritize meeting customer needs.

“Continuous improvement is really about understanding your business, its needs, and where you can find value,” says Matt Hansen, a director of continuous improvement at Microsoft. “It gives us the language to scale our efforts out across everything we do.”

This process isn’t just another way to enable AI. In fact, AI is essential to enabling continuous improvement itself.

A photo of Campbell.

“When leaders stay actively engaged and partner through these Centers of Excellence, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

Don Campbell, senior director, Microsoft Digital

Operationalizing continuous improvement and AI

Operationalizing continuous improvement and AI enablement is a leadership imperative at Microsoft, and one that doesn’t just happen organically. As an organization, we are deliberate about turning business strategy into measurable outcomes through clear sponsorship, disciplined prioritization, the right resourcing, and sustained investment in change management and employee skilling.

“The difference between strategy and real business impact is execution,” says Don Campbell, a senior director in Microsoft Digital. “That execution requires strong leadership sponsorship and clearly designed continuous improvement efforts and AI Centers of Excellence (CoEs), which translate business strategy into operational reality. When leaders stay actively engaged and partner through these CoEs, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

To support leadership’s vision, we’ve put organizational resources in place to manage our continuous improvement investments, guide practices, and support teams. There’s an overarching continuous improvement CoE within Microsoft Digital, which works in close partnership with the AI CoEs, forming an integrated model that connects enterprise priorities with frontline execution.

Together, these CoEs establish shared standards, provide clarity on where to invest, and help us move faster with confidence, turning ambition into sustained business impact.

A photo of West.

“Continuous improvement is about process, but it’s also about people.”

Becky West, lead, Continuous Improvement Center of Excellence, Microsoft Digital

Continuous improvement and people

As we build out the organizational structures that underpin our investment in continuous improvement, we’re approaching the people side of change with intention.

Currently, we’re undertaking skilling efforts and communicating with every employee about how their role connects to core continuous improvement tools, including bowler cards, Gemba walks, Kaizen events, and monthly business reviews. We’re also demonstrating how “CI + AI” is a powerful combination.

The roadmap is there, the structure is in place, and we’re already seeing progress.

“Continuous improvement is about process, but it’s also about people,” says Becky West, lead for the Continuous Improvement CoE within Microsoft Digital. “A guiding hand like the Continuous Improvement CoE is how you make sure those two components align.”

Three Microsoft Digital continuous improvement initiatives

As we navigate the early days of the company’s continuous improvement journey, Microsoft Digital is becoming a proving ground for the larger CI framework we want to deploy across the company. Our teams are spearheading projects to bring this framework to diverse functions like asset management, incident response (through designated responsible individuals), and third-party software licensing.

Enterprise IT asset management

Microsoft Digital’s Enterprise IT Asset Management team oversees the 1.6 million devices that power the company, from servers and IoT devices to labs, networks, and 800,000 employee endpoints. Safeguarding this vast landscape is critical to enterprise cybersecurity.

Three pillars form the foundation of our security efforts: protect, detect, and respond. All of them depend on a complete, accurate device inventory.

Unified visibility enables proactive protection through enforced security controls, improves detection by spotting anomalies and misconfigurations, and accelerates responses by reducing investigation and remediation time. Without this foundation, security teams lack the precision to execute effectively.

To reach the goal of a unified inventory, the team initiated a continuous improvement initiative to build a consolidated source of truth for Microsoft Digital IT assets. Grounded in the principle of “progress over perfection,” the team initially narrowed its focus to Microsoft Lab Services (MLS) and IoT devices, with a vision to eventually expand to networks, employee devices, conference rooms, and printers. The ultimate goal is to move toward a truly comprehensive inventory.

This foundation will not only enhance security but also deliver enterprise-wide value through consistent policy enforcement, more resilient infrastructure, and comprehensive lifecycle management. By applying continuous improvement processes to help prioritize high-impact opportunities and using AI to accelerate outcomes, the program is enhancing Microsoft’s operational excellence and security posture.

“It’s better to do step A than wait until you’re ready to do steps A, B, C, and D,” says Aniruddha Das, a principal PM in Microsoft Digital.

As the team progressed from Gemba walks to Kaizen events under the guidance of the Continuous Improvement CoE, they dug deeper into areas of waste. Then they identified potential actions, breaking them down into “value-add,” “non-value-add-but-essential,” and “non-value-add.”

A photo of Ashwin Kaul

“For every action item, we were always asking ourselves how we could make these things better through AI. We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Ashwin Kaul, senior product manager, Microsoft Digital

This exercise helped them prioritize their activities and land on a starting point: A device security index that would provide an overview of our hardware environment’s security posture. Essentially, it would represent a list of device security statuses.

The team identified distinct improvement areas for IoT and MLS devices. For IoT devices, they needed to build the inventory from the ground up. MLS already had a fairly complete inventory of devices, so the team set a goal to improve data quality. Although each of these challenges is different, they’re excellent opportunities for AI-empowered continuous improvement.

Now that the project is underway, the team plans to use an AI agent to automate device registration for IoT devices, which currently relies on manually uploaded spreadsheets. It’s a prime example of how streamlining a process with continuous improvement enables AI to automate and accelerate our work.

On the MLS side, the team is creating an AI-driven normalization tool to automate the de-duplication and correction of inaccuracies in device data. The goal is to get from less than 50% data quality to 100%, dramatically improving our security posture through greater accuracy.
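The core of this kind of normalization is canonicalizing messy fields and collapsing duplicate records onto a single key. The sketch below is purely illustrative; the field names and the de-duplication key are assumptions, not the team’s actual schema or tool:

```python
# Hypothetical inventory-cleaning sketch: normalize fields prone to
# drift, then keep one record per serial number.

def normalize(record: dict) -> dict:
    """Canonicalize case and whitespace in the identifying fields."""
    return {
        "serial": record.get("serial", "").strip().upper(),
        "hostname": record.get("hostname", "").strip().lower(),
    }

def dedupe(records: list[dict]) -> list[dict]:
    """Keep one record per serial; later records win, assuming later
    uploads are fresher."""
    seen: dict[str, dict] = {}
    for rec in records:
        norm = normalize(rec)
        if norm["serial"]:
            seen[norm["serial"]] = norm
    return list(seen.values())

raw = [
    {"serial": " abc123 ", "hostname": "LAB-01"},
    {"serial": "ABC123", "hostname": "lab-01"},   # duplicate of the first
    {"serial": "XYZ789", "hostname": "Lab-02"},
]
clean = dedupe(raw)  # two unique devices remain
```

Even a rule-based pass like this illustrates why the problem suits automation: the same small set of corrections has to be applied consistently across thousands of records.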

“For every action item, we’re always asking ourselves how we can make these things better through AI,” says Ashwin Kaul, a senior product manager within Microsoft Digital. “We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Continuously improving the designated responsible individual experience

On the Digital Workspace team, designated responsible individuals (DRIs) are in charge of maintaining the health of our production systems. When technical emergencies arise, they’re the rapid-response point people who take the lead.

A photo of Ajeya Kumar

“We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Ajeya Kumar, principal software engineer, Microsoft Digital

That process itself can be incredibly stressful, and time is of the essence. When every moment counts, efficiency is key. Meanwhile, a big part of a DRI’s work is just finding out what’s gone wrong so they can fix the incident.

But their job isn’t just about crisis management. When there are no active incidents, they work on engineering enhancements to improve the efficiency of production systems and clear backlog projects.

There’s also a handover process that takes place when one DRI finishes their rotation and another goes on-call. That involves a report about any incidents that have occurred, active issues, actions taken, key metrics, and other important information.

With these two priorities in mind, our Digital Workspace team initiated a continuous improvement process review. Their Gemba walk provided a crucial starting point.

“The planning stage is all about figuring out what the process is, what it should be, and what we can do to improve it,” says Ajeya Kumar, a principal software engineer on the Digital Workspace team within Microsoft Digital. “We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Collectively, the team decided to tackle these challenges with a multifunctional AI agent they call the Smart DRI Agent. This agent’s primary role would be synthesizing and presenting information to its human counterparts to help them save time in context-heavy situations.

The AI elements that the team has planned can be broken out into the following capabilities:

  • Text summarization: Going through logs and identifying key insights.
  • Data correlation: Tracking and collating error logs.
  • Automation: Updating the status of issues, keeping abreast of communications, and providing point-in-time, daily, and weekly summaries of system health.
  • Identifying patterns: Building troubleshooting guides based on frequency patterns.
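The pattern-identification capability in particular lends itself to a simple sketch. The snippet below is illustrative only; the log format and frequency threshold are assumptions, not the Smart DRI Agent’s implementation:

```python
# Illustrative sketch: group error lines into templates and surface the
# ones recurring often enough to justify a troubleshooting-guide entry.
from collections import Counter
import re

def recurring_errors(log_lines: list[str], threshold: int = 2) -> list[tuple[str, int]]:
    """Count error-message templates (variable digits collapsed) and
    return those at or above the frequency threshold."""
    templates = Counter()
    for line in log_lines:
        if "ERROR" in line:
            # Collapse variable parts such as ids into a template.
            templates[re.sub(r"\d+", "<n>", line)] += 1
    return [(t, n) for t, n in templates.most_common() if n >= threshold]

logs = [
    "ERROR timeout calling service 42",
    "ERROR timeout calling service 17",
    "INFO heartbeat ok",
    "ERROR disk full on node 3",
]
patterns = recurring_errors(logs)
```

A DRI scanning raw logs does this kind of grouping mentally; offloading it to automation is where the time savings come from.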

The Smart DRI Agent is already in its pilot phase and producing results. It conducts four main activities:

  • AI-generated summaries of DRI actions.
  • Proactive notifications with AI-generated insights.
  • Chat support to assist with all kinds of DRI queries.
  • AI-generated handover reports.

“The continuous improvement framework that enables these pieces is the key to unlocking value,” says Aizaz Mohammad, principal software engineering manager on the Digital Workspace team. “It may seem process-heavy, but once you work through it, you’ll see the value.”

That value is apparent in their results.

In the first 30 days of the Smart DRI Agent’s pilot, there were 301 incidents, and the agent provided insights on 101 of them. That led to approximately 100 hours of time savings for DRIs and a 40% improvement in our key network performance metric.

Third-party software license audits

Within Microsoft Digital, the Tenant Integration and Management team is responsible for a range of services, including third-party software licensing. This space is all about managing liability from both a security operations and an auditing perspective.

A photo of Hovhannisyan.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need. The goal for this project is to reduce that time to increase operational efficiencies.”

Anahit Hovhannisyan, principal group product manager, Microsoft Digital

Without the proper security insights, the company could find itself with risks associated with third-party software vulnerabilities. And without thorough auditing, we might experience license overuse and contractual issues that can lead to waste or expensive license reconciliations.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need,” says Anahit Hovhannisyan, a principal group product manager within Microsoft Digital. “The goal for this project is to reduce that time to increase operational efficiencies.”

A photo of Kathren Korsky

“It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Kathren Korsky, team lead, Software Licensing, Microsoft Digital

The team decided to target the auditing process first. Currently, the software licensing team performs audits manually by looking at entitlements, contracts, purchase orders, and more while liaising with suppliers and our Compliance and Legal teams. That’s incredibly time-consuming.

During the software licensing team’s planning phase, they developed an ambitious goal of reducing the time to insights on third-party software license data from 154 days down to 15 minutes. During their continuous improvement Kaizen event, the team uncovered opportunities for AI-powered process improvements that eliminate waste.
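At its core, the overuse portion of an audit is a reconciliation between two counts: what a contract entitles us to and what’s actually installed. The sketch below is purely illustrative; the product names and data sources are hypothetical:

```python
# Hypothetical reconciliation sketch: in practice, entitlement and
# install counts come from contracts, purchase orders, and inventory.

def reconcile(entitled: dict[str, int], installed: dict[str, int]) -> dict[str, int]:
    """Return products whose install count exceeds the entitlement:
    the overuse cases that drive expensive license reconciliations."""
    overuse = {}
    for product, count in installed.items():
        allowed = entitled.get(product, 0)
        if count > allowed:
            overuse[product] = count - allowed
    return overuse

entitled = {"DiagramToolPro": 100, "BuildHelper": 50}
installed = {"DiagramToolPro": 130, "BuildHelper": 40}
over = reconcile(entitled, installed)  # {"DiagramToolPro": 30}
```

The hard part isn’t the comparison itself; it’s assembling trustworthy counts from many sources, which is exactly where the team’s AI and data platform focuses.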

“It required a lot of courage as we were identifying waste,” says Kathren Korsky, Software Licensing team lead within Microsoft Digital. “People are very invested. It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Now, they’re building and implementing solutions, including an AI and data platform that provides business intelligence with custom reporting abilities, an AI agent that provides audit support and ticket creation, and another that automatically generates audit reports. The team has been using Azure Foundry and Azure AI services to create their agents because these tools have the flexibility to switch between different models and fine-tune their parameters.

As these agents emerge, they’ll take the most tedious and error-prone aspects of the process out of human auditors’ hands, freeing them up to focus on solving problems, not endlessly searching for them.

Realizing continuous improvement at scale

These are just a small selection of the many continuous improvement initiatives underway within Microsoft Digital and the company as a whole.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals.”

Kirkland Barrett, senior principal PM manager, Microsoft Digital

At Microsoft, most of our continuous improvement initiatives are in their initial stages. As they progress through the measurement and adjustment phases, two benefits will emerge.

First, we’ll iterate and improve the value that each individual initiative provides. Second, we’ll continue to build our discipline and cultural maturity around a growth mindset we’re operationalizing through continuous improvement.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals,” says Kirkland Barrett, senior principal PM manager for Employee Experience in Microsoft Digital. “It’s about knowing our objectives, identifying upstream root causes, and rippling them throughout a mechanism of progress.”

Key takeaways

These tips for implementing a continuous improvement framework come from our own experiences at Microsoft Digital:

  • Be inclusive: Have the right subject matter experts at the table from the start. Sponsors need to be present as well.
  • Cultivate maturity and transparency: Objective analysis about how things are going requires honesty.
  • Sponsorship matters: Make sure you have sponsorship at the highest levels. This is a cultural change, and leadership is the core of culture.
  • No half-measures: If you’re going to identify opportunities for continuous improvement, commit to having budget and resources in place.
  • Process, then technology: Focus on what you need to simplify processes first, then apply AI. This will keep you from automating waste and inefficiency into your operations.

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
20297
Mapping the Microsoft approach to accessibility in the world of AI http://approjects.co.za/?big=insidetrack/blog/mapping-the-microsoft-approach-to-accessibility-in-the-world-of-ai/ Thu, 19 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22756 More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age. As AI transforms how we build and experience technology, accessibility has to be built in from the start. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are […]

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age.

As AI transforms how we build and experience technology, accessibility has to be built in from the start.

Designing with and for people with disabilities isn’t optional—it’s fundamental to building technology that works for everyone and to building trust at scale. And yet today, about 96% of websites are still inaccessible.

At Microsoft, we’re committed to creating accessible products and services—designed with and for the disability community—that benefit everyone.

Our “shift left” approach to software production—which involves moving quality-assurance, testing, and accessibility checks to earlier in the development lifecycle—means that implementing assistive features and tools is a high priority for Microsoft, rather than a late-stage addition.
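One way to picture a shift-left accessibility check is a small scan that runs before code ever ships. The sketch below is illustrative only, using Python’s standard-library HTML parser to flag images with no alt text; real pipelines rely on far richer accessibility engines:

```python
# Illustrative "shift left" check: flag <img> tags that have no alt
# attribute at all, the kind of scan that can run in CI.
from html.parser import HTMLParser

class MissingAltScanner(HTMLParser):
    """Collect the sources of images missing an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing: list[str] = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<unknown>"))

scanner = MissingAltScanner()
scanner.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
# scanner.missing now lists the images without alt text.
```

Catching a gap like this at code-review time costs minutes; catching it after release costs a user their ability to understand the page.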

And with the rise in importance of AI tools and products, paying close attention to accessibility standards and building these key capabilities into game-changing tech like Microsoft 365 Copilot is a crucial part of our mission here in Microsoft Digital, the company’s IT organization.

A photo of Allen.

“After my accident, I became immediately reliant on accessible technology. Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me.”

Laurie Allen, accessibility technology evangelist, Microsoft

Evangelizing for accessibility

Laurie Allen is one person who knows first-hand the importance of accessibility in enterprise software. A little more than a decade ago, she experienced a spinal cord injury and became a quadriplegic.

Today, Allen works as an accessibility technology evangelist at Microsoft. Every day, she relies on assistive digital technologies to help her be successful in her role—which involves ensuring that our software products are accessible to everyone.

“After my accident, I became immediately reliant on accessible technology,” Allen says. “Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me during that transitionary phase, because my job was the one thing about my life that didn’t dramatically change as a result of the accident.”

The following graphic shows how widespread disability is around the globe: 

Shifting left for inclusivity

At Microsoft, our accessibility strategy includes such disability categories as mobility, vision, hearing, cognition, and learning—because accessibility empowers everyone.

A photo of Garg.

“We view accessibility as a quality of our software, not simply a feature. Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Ankur Garg, accessibility program manager, Microsoft Digital

We begin with the concept of “shift left,” which in this context means incorporating accessibility principles from the project’s outset, instead of waiting until a product is already built.

This strategy mirrors our approach in other key trust domains, such as security and privacy.

“We view accessibility as a quality of our software, not simply a feature,” says Ankur Garg, an accessibility program manager in Microsoft Digital. “Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Here in Microsoft Digital, that manifests as treating accessibility as a core requirement validated through rigorous internal testing of AI agents and embedding standards and inclusive design early in every tool’s development lifecycle. We also use internal AI tools to streamline guidance and testing before expanding those practices across the company.

Accessibility challenges in the age of AI

Technology is moving fast, especially with the advent of AI-powered tools. It’s easier than ever for companies and individuals to quickly generate and publish an app, website, or other digital product.

That means it’s also easier than ever before to create inaccessible software. It’s important to remember that much of the data that generative AI models have been trained on includes websites and apps that were built without considering accessibility guidelines.

A photo of Hirt.

“We want people with disabilities to be represented and see themselves in the technology we’re producing. We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

Alli Hirt, director of accessibility engineering, Microsoft

As a result, we’ve found that many AI code-generation tools and models produce code that, by default, fails to meet Microsoft’s high standards for accessibility.

“We want people with disabilities to be represented and see themselves in the technology we’re producing,” says Alli Hirt, a director of accessibility engineering at Microsoft. “We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

When we’re developing AI-driven products like Microsoft 365 Copilot, the tool must have comprehensive knowledge of different disabilities and be able to give appropriate, contextual help.

“Let’s say I tell Copilot, ‘I have a mobility disability; what software tools can I use?’” Allen says. “Copilot must recognize what a mobility disability is and identify which tools will support me. That’s the data representation we need in our AI models.”

Allen notes that sensitivity and bias are also big factors when creating these kinds of tools.

“Copilot should not respond with, ‘I’m sorry you have a disability,’” she says. “That’s the type of bias we’re working to train out of the models.”

Accessibility as a core commitment

When Satya Nadella became Microsoft CEO in 2014, he redirected the core mission of the company. The new vision was simple: To empower every person and every organization on the planet to achieve more. And accessibility is a core part of that mission.

“At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Laurie Allen, accessibility technology evangelist, Microsoft

Meeting global accessibility standards is our starting point. For example, the hub-and-spoke business model of the Accessibility Team helps ensure that accessibility is everyone’s responsibility.

The Microsoft Corporate, External, and Legal Affairs (CELA) group oversees accessibility across the company, helping products align with internationally recognized accessibility standards, such as Web Content Accessibility Guidelines (WCAG) and EN 301 549. These standards ensure that digital content, websites, and apps produced today are designed with accessibility in mind.

Understanding how products and services align to key accessibility standards and requirements is an important step in providing inclusive and accessible experiences.

“An organization’s accessibility program succeeds when it’s a priority at every level of the organization, starting with senior leadership,” Allen says. “At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Presenting content in a multimodal way

Here in Microsoft Digital, we embrace software products that provide our employees with a multimodal approach to presenting content. This means using more than one sense at the same time, like seeing, listening, reading, and speaking. This makes our products accessible to a diverse array of users, including people who learn and work in different ways, and it lets each employee customize the experience that works best for them.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed that I could never follow—showed me exactly why accessibility is needed. It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Eman Shaheen, principal PM lead, Microsoft Digital

For example, someone may not have a diagnosed disability, but they might be a better auditory learner than a visual learner.

This reflects what Eman Shaheen, a principal PM lead in Microsoft Digital, learned from a team member when observing how he used assistive technologies.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed I couldn’t even follow—showed exactly why accessibility is needed,” Shaheen says. “It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Here are some examples of multimodal accessibility capabilities offered by Microsoft 365 Copilot that are designed to support diverse user requirements:

Vision

  • Works with screen readers
  • Generates alt text for images
  • Suggests accessible layouts, textual contrast, and consistent structure in documents and slides

Hearing

  • Provides real-time meeting Q&A
  • Produces meeting recaps across multiple languages
  • Summarizes lengthy or fast-moving chats to aid comprehension

Cognitive and neurodivergent (ADHD, dyslexia, autism, executive function)

  • Simplifies complex language
  • Supplies task breakdowns and next-steps guidance
  • Offers tone assistance to help with understanding communication nuances

Mobility

  • Provides voice-driven productivity tools, such as speech-to-text creation
  • Reduces fine‑motor effort by automating lists, tables, and drafts
  • Supports meeting recordings to help compile notes and action items

Speech and communication

  • Drafts and rewrites content for users needing expressive support
  • Refines tone for clarity and empathy in written communication

Learning

  • Summarizes long content to reduce reading burden
  • Organizes notes into structured content

Mental health and fatigue

  • Assists with communication when cognitive energy is low
  • Provides adaptive communication assistance to help users express themselves confidently

How we demonstrate our accessibility vision

Here at Microsoft, we developed a strategic partnership with ServiceNow over the last five years. The two companies work together to accelerate digital transformation for our enterprise and government customers.

Through this partnership, we use the ServiceNow platform for internal helpdesk and ServiceDesk process automation, IT asset management, and integrated risk management.

A photo of Mazhar.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt. That’s when they began fixing accessibility issues proactively, which changed everything.”

Sherif Mazhar, principal product manager, Microsoft Digital

As part of this process, we uncovered 1,800 accessibility bugs (including 1,200 that were rated as high severity) in the platform—in our first assessment. By contrast, our most recent review found just 24 accessibility-related issues.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt,” says Sherif Mazhar, a principal product manager in Microsoft Digital, who oversees the company’s relationship with ServiceNow. “That’s when they began fixing accessibility issues proactively, which changed everything.”

The next major step for us is ensuring that our ServiceNow platform updates align with WCAG 2.2 accessibility standards, which will require reworking older versions of our products. However, doing this work helps us maintain momentum toward more inclusive enterprise software in every line of business and for all Microsoft customers.

What’s next in accessibility

Digital accessibility work is never done.

As new software and hardware are introduced, user needs and accessibility standards change and grow. At Microsoft, we are committed to making accessibility easier for everyone.

“Right now, we’re making sure every AI agent across Microsoft is tested with assistive technologies—like screen readers and keyboard navigation—to guarantee that the outputs are accessible and compliant,” Garg says.

This “shift left” mentality at Microsoft is ultimately about putting people first. It means that no one should have to wait for a late fix to be able to do their work, or simply to belong.

By embedding accessibility standards into product planning, instead of tacking it on as an afterthought just before (or even after) product launch, we’re helping ensure that these digital experiences will include everyone from day one.
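As a small illustration of that shift-left testing mindset, here is a minimal sketch (not an official Microsoft tool) that lints generated HTML for images missing alt text, using only Python's standard library. A check like this is a tiny automated gate, not a substitute for real screen-reader and keyboard-navigation testing.

```python
from html.parser import HTMLParser

# Flag <img> tags in generated HTML that lack alt text. This is an
# illustrative lint for agent-generated output, not a full accessibility audit.
class AltTextChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # attrs is a list of (name, value) pairs; value can be None
            alt = dict(attrs).get("alt") or ""
            if not alt.strip():
                self.missing_alt += 1

def count_missing_alt(html: str) -> int:
    """Return how many <img> tags in the HTML have no usable alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

sample = '<p>Report</p><img src="chart.png"><img src="logo.png" alt="Company logo">'
print(count_missing_alt(sample))  # 1
```

A lint like this could run in a CI pipeline over sampled agent outputs, flagging regressions early instead of after launch.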

“We may compete on products, especially in AI, but accessibility is a shared mission,” Allen says. “When the industry collaborates on inclusive technology, everyone wins.”

Key takeaways

Here are some tips to keep in mind as you consider your own accessibility strategy in a world of increasingly AI-driven technology:

  • Start with leadership. Championing accessibility from the C-suite signals that this is a top organizational priority.
  • Raise awareness with training. Set up employee learning opportunities regarding accessibility in AI tools and encourage everyone to take part.
  • Design with inclusivity in mind from day one (“shift left”). Incorporate accessibility from the beginning of the software creation process to make sure it isn’t lost in the shuffle of trying to ship a product on time.
  • Think inclusively. Run usability tests with people who have lived experience of disability.
  • Treat accessibility as an ongoing practice. Digital accessibility work is never finished; document strategies and share your team’s learnings to keep improving iteratively as an organization.

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success http://approjects.co.za/?big=insidetrack/blog/deploying-the-employee-self-service-agent-our-blueprint-for-enterprise-scale-success/ Thu, 12 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22492 The case for AI in employee assistance The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company. Thanks […]

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.

]]>

The case for AI in employee assistance

The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company.

Thanks to the power of AI, agents, and Microsoft 365 Copilot, our employees—and workers everywhere—are discovering new ways to be more productive at their jobs every day. Recent research, including our Microsoft Work Trend Index, shows that knowledge workers are seeing increasingly large gains from using AI tools for everyday work tasks.

As an AI-first Frontier Firm, Microsoft is at the leading edge of a transformation that’s bringing this technology into all aspects of our workplace operations. With tools like Microsoft 365 Copilot providing “intelligence on tap,” we’re forging a human-led, AI-operated work culture that enables our employees to accomplish more than ever before.

Bringing AI to employee assistance

As part of this move to embed AI across our enterprise, it was a natural step for us to apply this burgeoning technology to a common pain point for us and many workplaces today—employee assistance.

Workers in organizations large and small face many common issues in their day-to-day jobs. Whether it’s a problem with their device, a question about their benefits, or a facilities request, our typical employee was often forced to navigate a bewildering array of tools, apps, and systems in order to get help with each specific task.

This confusion is reflected in research showing that most workers are dissatisfied with existing employee-service solutions.

76% of employees find it difficult to quickly access company resources.
58% of employees struggle to locate regularly needed tools and services.

Our studies show that most employees have trouble finding the appropriate tools and resources they need to address their workplace-related questions.

Realizing that this was an ideal opportunity for AI, we set out to develop a state-of-the-art agentic solution. At Microsoft Digital, the company’s IT organization, we partnered with our product groups to develop and deploy the Employee Self-Service Agent, a “single pane of glass” that employees can turn to any time they need help. The product is now broadly available in general release.

A photo of D’Hers.

“With this employee self-service solution, we’re shaping a new era in worker support. With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Nathalie D’Hers, corporate vice president, Microsoft Employee Experience

Because Copilot is our “UI for AI,” the Employee Self-Service Agent is delivered as an agent in Microsoft 365 Copilot. If your employees have access to Copilot, you can deploy the agent at your company at no extra cost. If your employees don’t have a Copilot license, they can access it via Copilot Chat if it’s enabled by your IT administrator.

For the initial development and launch of our Employee Self-Service Agent, we decided to provide agentic help in three categories: Human resources, IT support, and campus services (real estate and facilities). Every organization will have to make its own determination for which functions to include in their implementation. Note that the agent is inherently flexible and expandable; we plan to add additional capabilities, such as finance and legal, in the future.

We learned many lessons in the almost year-long process of developing and implementing the Employee Self-Service Agent across our organization worldwide. The goal of this guide is to pass on what we learned—including how we used it to provide value to our employees and vendors—to help you prepare for, implement, and drive adoption of your own version of the agent.  

“With this employee self-service solution, we’re shaping a new era in worker support,” says Nathalie D’Hers, corporate vice president of Microsoft Employee Experience. “With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Before you start: Developing your plan

As you embark on your Employee Self-Service Agent journey, make sure to establish a clear and structured plan. This was a critical step for us in our deployment, and we can say with confidence that it will help you avoid surprises and increase your chances of a successful outcome.

Based on our experience here at Microsoft, the below is a high-level outline of the steps you should consider as you prepare for deploying your agent.

1. Define prerequisites
Start by making sure that all foundational elements for the agent are in place.

  • Assign licenses to your employees who will interact with the agent. They will need Microsoft 365 Copilot or Copilot Chat.
  • Verify readiness by configuring your Power Platform environments, applying Data Loss Prevention (DLP) policies, and setting up isolation (limited and controlled deployment with guardrails in place) where needed.
  • Ensure connectivity with critical systems by confirming that you have appropriate APIs and connectors available and functioning for the essential workplace systems that your organization uses (e.g., Workday, SAP SuccessFactors, and ServiceNow).

2. Identify your core team and responsibilities
Successful implementation of the Employee Self-Service Agent requires collaboration across multiple roles and departments in your organization.

  • Business owners from the areas your agent will cover—such as human resources and IT support—can help you define requirements, priorities, success criteria, and telemetry needs.
  • Platform administrators, particularly for Power Platform and tenant/identity teams, can manage your technical configuration.
  • Content owners and editors are needed to identify the knowledge sources to surface in the agent, curate new knowledge sources, and maintain the data underpinning these sources on an ongoing basis.
  • Subject matter experts can provide important “golden” prompt and user scenarios that the agent should prioritize and answer accurately.
  • Compliance, privacy, and security leaders and their teams are needed to address risk considerations.
  • Support professionals can help build a structure for live agent escalation and ticketing operations (in situations where the agent is unable to provide a solution).
  • Focus groups of end users assist with validating requirements and scenarios, as well as help with testing the agent.

3. Establish a clear timeline
We found that creating a schedule for the creation, implementation, and adoption of the agent is crucial. This phased approach will help you maintain momentum and accountability over the duration of the project.

For example, here’s a rough implementation timeline that you might use to gauge your progress:

Gantt chart showing 15-week timeline with assessment, deployment, pilot launch, and rollout phases.

4. Articulate your vision

Communicate your rollout plan to your team, including timelines and phases, and adjust it based on feedback. Establish clear goals and meaningful success metrics to guide you and make sure your efforts are in alignment with your company objectives. (Note: You may want to consider key upcoming projects or events in your organization and link the agent roadmap to them. This will help you meet your project’s success criteria faster and encourage quicker agent adoption.)

5. Define your governance

This phase will allow you to define policies and standards and conduct a thorough content audit to ensure accuracy, relevance, security, and sustainability.

6. Implement your agent

This phase involves configuration and integration, followed by testing.

7. Roll out the agent while driving adoption and measurement

We advise deploying the Employee Self-Service Agent using a phased, or ringed, approach. We started with a small group of employees, then rolled it out to progressively larger groups before finally releasing it to our entire organization.

We encouraged adoption with internal targeted communications and promotional efforts. Careful measurement enabled us to track impact and optimize agent performance. This type of concerted change management allowed us to share the latest product developments with our employees and to keep them excited and engaged with the tool.

By investing sufficient time and effort in the planning phase of your deployment, you’ll create a strong foundation for a secure, scalable, and successful self-service agent experience.
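The ringed rollout described above can be sketched as deterministic bucketing: hash each user ID into a stable 0-99 bucket, then widen the in-scope percentage ring by ring. The ring sizes below are hypothetical, not the ones we used.

```python
import hashlib

# Hypothetical ring sizes: percent of users in scope at each rollout phase.
RINGS = [1, 5, 25, 100]

def user_bucket(user_id: str) -> int:
    """Stable 0-99 bucket derived from a hash of the user ID."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def in_rollout(user_id: str, ring: int) -> bool:
    """True if the user's bucket falls inside the given ring's percentage."""
    return user_bucket(user_id) < RINGS[ring]

# Because buckets are stable and rings only grow, every user enabled in an
# early ring stays enabled in all later rings.
assert all(
    in_rollout("alice@example.com", r + 1) or not in_rollout("alice@example.com", r)
    for r in range(len(RINGS) - 1)
)
```

Stable hashing keeps each employee's experience consistent across phases, which matters when you are gathering per-ring feedback and telemetry.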

Chapter 1: Governance means getting your data right

When a Microsoft employee enters a query into an AI chat tool like Microsoft 365 Copilot, they know that they may not receive an individualized response that is directly specific to their situation. They are aware that they might need to verify the answer they receive with further research and additional sources.

But when it comes to our company-endorsed self-service agent, the stakes are different. Our employees expect to receive accurate and personally relevant responses when they ask for help. This is particularly true for queries related to important personal details, like HR-related questions about leave policies or benefits.

A photo of Ajmera.

“People expect personally tailored and highly accurate answers, especially for HR moments that really matter. We designed the Employee Self‑Service Agent with that expectation in mind, pairing trusted data and deep personalization with strong governance controls so that privacy, security, and trust are built into every interaction.”

Prerna Ajmera, general manager of HR strategy and innovation, Microsoft

Although the Employee Self-Service Agent comes pretrained with basic HR and IT support data, we found that the quality of the responses that our employees receive is directly connected to the accuracy, currency, and depth of the information we provide to the tool. You’ll want to spend the necessary time and effort to make sure that your data governance process is well thought-out and thorough, so that your employees experience the best possible results.

“Employee self‑service has a higher bar than generic AI tools,” says Prerna Ajmera, general manager of HR strategy and innovation. “People expect personally tailored and highly accurate answers, especially for HR moments that really matter. We designed the Employee Self‑Service Agent with that expectation in mind, pairing trusted data and deep personalization with strong governance controls so that privacy, security, and trust are built into every interaction.”

Major considerations for governance

We learned that before you configure your agent, you need to establish guardrails that protect your data’s integrity and that build your employees’ trust. These considerations will form the backbone of your governance framework:

  • Managing requirements: Define what the agent must deliver and align your stakeholders on clear, prioritized goals and objectives.
  • Determining and managing resources: Ensure you have the right people, systems, and funding in place to support your full product lifecycle.
  • Data security: Protect your sensitive employee information with strong controls, compliant storage, and least‑privilege access.
  • User access: Establish who can use, administer, and update your agent, with appropriate permissions and guardrails.
  • Change tracking: Monitor your updates to content, configurations, and workflows so your agent always reflects your current policies.
  • Reviewing: Regularly evaluate your content’s accuracy, the agent’s performance, and your organizational fitness to help you keep your employees’ experience with the agent trustworthy.
  • Auditing: Maintain traceability for compliance, incident investigation, and quality assurance across all of your data flows.
  • Deployment control: Manage where, when, and how you roll out new versions of the agent to reduce disruption and ensure consistency.
  • Rollback: Prepare a fast, safe path to reverting your changes if something breaks.

We found that addressing these considerations early in the process creates a governance structure that is proactive rather than reactive, increasing the quality of responses and setting your organization up for success.

Architecture essentials

Understanding the architecture of our agent helped our governance teams make informed decisions about our configuration and integration. To do that, they needed to review and understand its key architectural components. You’ll need to do the same.

Here’s a list of the different architecture components that our team assessed, to help you get started on your own process:   

  • Topics: Structured intents (e.g., “view paystub”) that align to employee questions and drive consistent answers.
  • Domain packages: Pre-curated bundles for different agent segments (like HR and IT support) that provide reusable patterns, prompts, and integrations.
  • Knowledge sources: Documents, intranet pages, FAQs, and databases that ground responses in authoritative content.
  • Connectors: Secure integrations to systems of record (like Workday or SAP SuccessFactors) can help enable read/write operations. (Because the Employee Self-Service Agent was built with Copilot Studio, it has access to more than 1,400 different connectors.)
  • Instructions: Governance-approved rules and prompts that shape tone, guardrails, and escalation behavior.

Assessing and preparing your content

A key early governance step is to audit all relevant content in your knowledge bases. This process should include assessing, updating, and, if necessary, restructuring this information before it is ingested by the agent.

An important caveat here is that the agent’s ability to understand which policies and procedures apply to which employee relies on your content having consistent metadata, permissions, and content structure. We found that before feeding your data into the agent, you need to:

  • Inventory existing content: Your content will incorporate many different types, such as SharePoint pages, Microsoft Teams posts, PDFs, intranet articles, and knowledge-base documents. The goal of the inventory process is to identify content that is complete rather than outdated, duplicative, or siloed; if there are issues with the content, they should be addressed before loading into the agent.
  • Assign knowledge owners: The owners should be SMEs who can help validate, tag, and maintain the content going forward. Part of this process is training up knowledge owners to be able to prepare and maintain content in ways that make it easily consumable by both agents and people.
  • Structure content for discoverability: All your content needs to have accurate metadata, well-defined topic pages, and consistent naming so that the agent can surface the right information at the right time.

We found that completing a thorough content audit helps us ensure that the Employee Self-Service Agent isn’t just chatting—it’s delivering trusted, up-to-date answers that save your workers time and effort as they go about their day.
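As a rough illustration, a content audit like the one above can start as a simple script that flags unowned, untagged, or stale items. The record fields and the one-year staleness threshold here are assumptions for the sketch, not our actual schema.

```python
from datetime import date, timedelta

# Hypothetical content inventory records; field names are illustrative.
inventory = [
    {"title": "VPN setup guide", "owner": "it-kb-team",
     "last_reviewed": date(2025, 11, 3), "tags": ["it", "network"]},
    {"title": "Parental leave FAQ", "owner": None,
     "last_reviewed": date(2023, 6, 1), "tags": []},
]

def audit(items, max_age_days=365, today=None):
    """Flag content that is stale, unowned, or missing metadata tags."""
    today = today or date.today()
    issues = []
    for item in items:
        problems = []
        if item["owner"] is None:
            problems.append("no knowledge owner")
        if not item["tags"]:
            problems.append("no metadata tags")
        if today - item["last_reviewed"] > timedelta(days=max_age_days):
            problems.append("stale (not reviewed within the age limit)")
        if problems:
            issues.append((item["title"], problems))
    return issues

for title, problems in audit(inventory, today=date(2026, 1, 1)):
    print(title, "->", ", ".join(problems))
```

Running a report like this before ingestion gives knowledge owners a concrete fix-it list instead of a vague mandate to "clean up the content."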

Be aware of tone and conversational flow

Providing vetted and well-structured data to the agent is important, but it’s not the entire battle. You’ll also need to make sure your agent is given clear guidance on conversational tone and instructions on what to do in specific scenarios.

Make sure you incorporate:

  • Global instructions: Define the agent’s voice, behavior, and escalation rules to ensure consistency and trust. 
  • Topic-level triggers: Map natural language phrases to specific workflows (such as “reset password” or “check PTO”) so the agent routes these common queries correctly.
  • Advanced knowledge rules: Prioritize which data sources to use in ambiguous scenarios, and define when the agent should ask clarifying questions.

Taking these steps gave our agent a better chance of being accurate, helpful, and aligned with our organization’s specific preferences.
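To make topic-level triggers concrete, here is a hand-rolled sketch of phrase-to-workflow routing with a clarifying-question fallback. In Copilot Studio you would configure this as topics rather than write code; the phrases and workflow names below are illustrative.

```python
# Illustrative trigger phrases mapped to workflow names (not an official schema).
TRIGGERS = {
    "reset_password": ["reset password", "forgot my password", "locked out"],
    "check_pto": ["check pto", "vacation balance", "how much leave"],
}

FALLBACK = "ask_clarifying_question"

def route(utterance: str) -> str:
    """Map a user utterance to a workflow; fall back to a clarifying question."""
    text = utterance.lower()
    for workflow, phrases in TRIGGERS.items():
        if any(phrase in text for phrase in phrases):
            return workflow
    return FALLBACK

print(route("I forgot my password again"))   # reset_password
print(route("What's my vacation balance?"))  # check_pto
```

The fallback branch is the code-level analogue of the advanced knowledge rule above: when no trigger matches, the agent should ask rather than guess.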

Addressing common scenarios with “golden” content

Another vital aspect of your content audit is identifying the most frequently accessed information in each topic area.

A good example comes from the preparation of our IT support content for ingestion by the Employee Self-Service Agent. One focus of this effort was on so-called “golden prompts”: the 20 or so topics that generate up to 80 percent of our employee queries (a version of the famous “80/20 rule”).

Our golden prompts are a curated set of scenarios that:

  • Represent our critical user workflows and edge cases
  • Possess clear, expected responses (golden responses)
  • Cover core functionality that must never break

We made sure that the agent was providing high-quality responses for these common scenarios—we recommend you do the same.
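A lightweight way to keep golden prompts honest is a regression check that pairs each prompt with keywords its answer must contain. The `agent_answer` function below is a stand-in for a real call to your deployed agent, and the scenarios are invented for illustration.

```python
# Each golden case pairs a prompt with keywords the answer must contain.
GOLDEN_PROMPTS = [
    {"prompt": "How do I set up a VPN?", "must_contain": ["vpn"]},
    {"prompt": "How do I view my paystub?", "must_contain": ["paystub"]},
]

def agent_answer(prompt: str) -> str:
    # Placeholder; replace with a call to your agent's API.
    return f"Here are the steps for: {prompt.lower()}"

def run_golden_checks(cases) -> list[str]:
    """Return the prompts whose answers failed a keyword check."""
    failures = []
    for case in cases:
        answer = agent_answer(case["prompt"]).lower()
        if not all(keyword in answer for keyword in case["must_contain"]):
            failures.append(case["prompt"])
    return failures

print(run_golden_checks(GOLDEN_PROMPTS))  # [] means every golden prompt passed
```

Run against the live agent after every content or configuration change, a check like this catches regressions in the 20 percent of topics that drive 80 percent of traffic.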

Including “zero prompt” content

Another important aspect of your content process should be to develop “zero prompts.” These are preconfigured prompts in the agent that the user can simply click on to get an answer for a common issue or request.

For example, if one of your employees wants to understand how to set up a VPN, they simply click on the zero prompt provided for that topic. The tool then gives them complete instructions on how to set one up.

During our deployment of the agent, one case where we prepopulated the tool with content for a specific, high-demand scenario came when Microsoft made a major announcement regarding employees returning to the office. We knew this policy change would generate a lot of questions from our employees.

In preparation for this, we asked Microsoft 365 Copilot to create a single document that pulled in all the “return to office” material found in its verified HR content database. We then made this document available to the agent. Just by taking that simple step, we saw our user satisfaction ratings in the tool jump from 85 percent to 98 percent for that issue!

In your own deployment, think about what issues and topics generate the most questions from your employees. You can then prepare specific content to address these scenarios, which will increase your chances of success with the agent.

Data security and compliance

Data security was a high priority when we developed our agent, especially because it must necessarily access sensitive HR information on a regular basis. During product development, we made sure that the agent adhered to enterprise-grade security standards, including identity federation, least-privilege access, and encrypted storage.

Because the agent is built on Copilot Studio, it supports robust data-loss prevention features. The agent also complies with regulatory frameworks like the General Data Protection Regulation (GDPR) through built-in auditing and data-retention policies.

One of the big advantages that an AI agent has over a static website or similar data source is the ability to personalize responses for each user. At the same time, we had to make sure that the agent had guardrails in place to avoid overexposing sensitive information. This included detailed disclaimers to help call out these kinds of responses and flag them for more careful handling.

Our agent complies fully with our accessibility standards as well. Like all Microsoft products and services, the tool underwent a rigorous review to ensure it was fully accessible for all users.

Responsible AI

Whenever a new AI application is launched, there may be concerns raised about potential challenges regarding bias, safety, and transparency. That’s why the Employee Self-Service Agent follows the Microsoft Responsible AI principles by default.

When you enable the sensitivity topic in your agent, it screens all responses for harassment, abuse, discrimination, unethical behavior, and other sensitive areas. We tested the agent thoroughly for objectionable responses before it was launched to a broad internal audience at Microsoft.

In addition, the agent includes an emotional intelligence (EQ) option. This feature is designed to make responses more empathetic, context-aware, and relevant for diverse user audiences. It analyzes the conversation’s context and tailors the agent’s replies to ensure that users feel understood and valued throughout their session (which could be particularly relevant for any conversations related to sensitive HR topics, such as family leave). The EQ option is customizable and can be turned off by your product admins.

Key takeaways

The following are important considerations for data governance when you deploy your Employee Self-Service Agent:

  • Employee expectations regarding accuracy and relevance are high for employee self-service tools, which makes data governance a key aspect of your deployment.
  • Consider which data repositories are best to incorporate into your agent, and make sure they are up-to-date and well-structured. This process requires a thorough content audit.
  • Pay special attention to the so-called “golden prompts” that make up a large percentage of expected queries. The agent’s answers to these questions should be top-notch.
  • Restructuring content can improve response quality. When we anticipated huge interest in a particular topic, such as workplace policy changes, we restructured our content on that subject and saw a significant jump in user satisfaction.
  • Build your agent to meet or exceed high standards for data security, privacy, and Responsible AI. These are vital concerns for any product that has access to sensitive personal information.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 2: Implementation with intention

Deploying a powerful and versatile tool like the Employee Self-Service Agent is no simple task. It requires guidance and buy-in from top leaders at the company, as well as detailed planning and execution across disparate parts of your organization. Here, we identify some of the key steps that we took here at Microsoft that can help guide you when launching your own self-service agent.

Determine category parameters

One of the first major decisions around implementing the agent is deciding which business function—we call them agent starters—to choose for your initial implementation.

We recommend starting with HR support or IT help (we started with HR). Both agent starters can be deployed into a single Employee Self-Service Agent experience, but they must be deployed one at a time. 

Note that we’ve built the Employee Self-Service Agent to be connectable with other first- or third-party Copilot agents, so employees can be handed off to those agents seamlessly without navigating to other tools or interfaces.

Understanding your deployment steps

There were four essential stages involved in the deployment of our agent, each with multiple steps. Here’s a quick rundown that you can use at your company:

  1. Preparation for deployment
    • Establish roles: Define who will manage, configure, and support the tool, assigning responsibilities to ensure accountability during deployment.
    • Set up your environment: Prepare the necessary hardware, operating system, and network configurations so the agent can run smoothly.
    • Set up third-party system integration: Ensure your infrastructure can securely connect and exchange data with external systems that the agent will need to integrate with.
  2. Installation
    • Install the agent: Deploy the core Employee Self-Service Agent software on the designated servers or endpoints.
    • Install accelerator packages: Add any desired connectors that enable the agent to communicate with commonly used systems for HR, payroll, IT support, etc.
  3. Customization
    • Configure the core agent: Adjust default settings to align with your organization’s policies and workflows.
    • Identify knowledge sources: Specify where the agent will pull information from, such as internal knowledge bases or FAQs.
    • Provide common questions and responses: Add employee FAQs to improve the agent’s ability to respond quickly and accurately.
    • Identify sensitive queries: Flag questions and responses that involve confidential or regulated information to ensure they’ll be handled securely.
  4. Publication
    • Approve the agent: Complete internal reviews and compliance checks to confirm the agent meets your organizational standards before full rollout.
    • Publish the agent: Make the configured agent available to your employees in your production environment.

Customization

The Employee Self-Service Agent operates as a custom agent within Copilot Studio, using our AI infrastructure via the Power Platform. The agent is constructed on a modular architecture that allows you to integrate it with your own enterprise data sources using APIs, prebuilt and custom connectors, and secure authentication mechanisms.

To streamline this integration process, we provide a library of prebuilt and custom connectors through both Copilot Studio and Power Platform. Preconfigured scenarios include connecting to major enterprise service providers such as Workday, SAP SuccessFactors, and ServiceNow. (View the full list of connectors offered by Copilot Studio.)

These connectors facilitate data exchange with the following systems and other agents in this ecosystem:

  • HR information systems
  • IT systems management
  • Identity management
  • Knowledge base platforms
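
Conceptually, a connector maps a category of employee query to the backend system that can answer it. The sketch below is an illustration only, with invented category names; real Copilot Studio connectors are configured declaratively rather than coded this way.

```python
# Hypothetical routing table from query category to backend system,
# mirroring the four system types listed above. Names are illustrative.
CONNECTORS = {
    "hr": "HR information system",
    "it": "IT systems management",
    "identity": "Identity management",
    "kb": "Knowledge base platform",
}

def route_query(category: str) -> str:
    """Pick the backend system a query in this category should be sent to."""
    system = CONNECTORS.get(category)
    if system is None:
        raise ValueError(f"No connector registered for category: {category}")
    return system

print(route_query("hr"))  # HR information system
```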

We found that third-party integrations require setup effort and technical expertise across stakeholders in your tenant. Be sure to get buy-in and involve all relevant departments that will be impacted.

Rollout: A phased approach

As previously noted, we started our agent with HR content and then added IT support (we later expanded to include campus services help as well). We rolled the agent out to different groups of employees and geographic regions around the world over the course of months, adding new knowledge sources to the different categories at each step along the way. This gave us an opportunity to gather user data and refine performance of the tool as we went.

Graphic shows the phased rollout of the Employee Self-Service Agent to Microsoft employees in different regions of our global workforce.
We executed a phased rollout of the Employee Self-Service Agent across different regions and countries at Microsoft. As we expanded the audience for the tool, we also added more categories, knowledge sources, and capabilities.

Adding campus support services required us to handle queries and tasks related to dining, transportation, facilities, and similar subjects. This was a challenging addition, because the facilities and real estate space—unlike the HR and IT support areas—doesn’t have many large service providers, which are easier to provide prebuilt connectors for.

One area that did lend itself to prebuilt connectors, however, was facilities ticketing.

Because many of our campus facilities vendors use Microsoft Dynamics 365, we were able to create an out-of-the-box connector in the agent for their ticketing process. You can take advantage of these kinds of preconfigured tools in your deployment.  

Key takeaways

Here are some things to remember when implementing the Employee Self-Service Agent at your organization:

  • Decide which starter agent you will deploy first. We recommend starting with a single agent covering one area (vertical), such as HR or IT support, and then expanding from there.
  • Consider a phased rollout to allow time to refine responses and ramp up the number of topic areas and knowledge sources installed in your agent.
  • Use the prebuilt connectors to make it easier to integrate the agent with your existing systems. We developed customized connectors for major HR and IT service providers and a Microsoft Dynamics 365 connector to integrate with our many facilities vendors around the world.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 3: Driving adoption by breaking old habits

Once upon a time, when our employees needed help with a technical issue or an HR question, they literally picked up the phone and called the relevant internal phone number. That quickly evolved into an email-centered system, where employee questions were sent to a centralized inbox that would then generate a service request. Still later, chat-based help was introduced.

Using AI to handle employee questions and service requests is a natural step in this evolution, as large language models were built to parse vast data repositories and return the right information (often with the help of multi-turn queries and responses). And by encouraging self-service, an AI agent can help meet employee needs faster while saving the organization’s staffing resources for other needs.

But getting employees to change their habits and use a tool like the Employee Self-Service Agent wasn’t going to be as easy as just flipping a switch. Here’s how we handled this important change management task at Microsoft.

Adoption across verticals

A key principle that we learned during the adoption process was that 80% of our change management activities for the agent are applicable to all our verticals (whether it be HR, IT support, campus facilities, or another category). We didn’t need to reinvent the wheel each time we added to the topics that the agent covered.

This allowed us to create a change management “playbook” that we could use each time we expanded to a new category. So, while roughly 20% of the strategies we used were specific to that vertical, the vast majority were the same, which saved time as we moved through onboarding the different categories.

Leadership is key

To get our employees to change the way they ask for help, we found it essential to get the support of our key leaders, something we refer to as “sponsorship.”

We found that good sponsorship doesn’t just come from your central product, communications, or marketing groups. It is equally vital to invest in relationships with local leadership in different regions as you roll out the agent (especially in multinational companies like ours).

Local leaders understand the various regional intricacies—including language, functionality, and the rhythm of the business—that can help inspire their segments of the workforce to adopt a new tool, and then evangelize it to others in turn. Working closely with these kinds of sponsors will help you pull off a successful adoption campaign.

If you have works councils, be sure to seek out your representatives and solicit their feedback on your agent experience early on. You can help them understand how the agent was developed and trained, then address any concerns they raise.

We’ve found that once our works councils are made aware of the careful processes we go through to protect user privacy, and to ensure compliance with our Responsible AI standards, they become enthusiastic supporters and can help promote agent adoption. (Read more about our experience with our works councils and the Microsoft 365 Copilot rollout.)

Defining your messaging

Work with your internal communications team to come up with a well-planned messaging framework for your agent rollout. Based on our experience, it’s likely you’ll need to communicate across a wide variety of teams and organizations like HR, IT, facilities, finance, and so on.

It’s important to be clear about how you’re positioning the product for your employees. This will allow you to develop both overall messaging for general use and content tailored to specific teams or employee roles. The more sophisticated your messaging, the more likely it is to be effective in encouraging users to adopt the agent in their regular workflow.

Listening to feedback

As Customer Zero for the company, our employees are our best testers and sources of feedback during our product development process. The Employee Self-Service Agent was no different, and we continue to gather crucial feedback and user data throughout the internal adoption process.

Because the agent is a tool centered on helping your workers resolve challenges and get quick answers to questions, you’ll want to set up your own systems for capturing their feedback and make sure the agent is meeting a high-quality bar.

We found that setting yourself up for success when it comes to listening to your employees involves two major aspects: developing and deploying a system for gathering employee sentiment about the product, and creating a system for analyzing that feedback and funneling the findings back to your IT team.

Some of the types of feedback and methods we used to gather it during the development process included:

  • User-testing data
  • User satisfaction ratings
  • User surveys, interviews, and other research
  • Voice of the customer (in-product feedback)
  • Pilot projects and focus groups (smaller segments of users)
  • IT support incidents
  • Usage data and telemetry
  • Community-based early adopter feedback (similar to our Copilot Champs community)
  • Social media feedback and comments

You can choose from among these options to set up your own feedback mechanisms, or come up with something customized to your implementation.
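
As a simple illustration of the analysis side of that feedback loop, the sketch below combines ratings from several of the channels listed above into per-channel and overall satisfaction averages. The channel names and 1-5 scoring scale are assumptions, not our actual telemetry schema.

```python
from statistics import mean

# Illustrative feedback from a few of the channels listed above.
# Channel names and the 1-5 rating scale are invented for this example.
feedback = {
    "user_surveys": [4, 5, 3, 4],
    "in_product": [5, 4, 4],       # voice-of-the-customer ratings
    "pilot_groups": [3, 4],
}

def satisfaction_summary(channels: dict) -> dict:
    """Average each channel and compute an overall mean across all responses."""
    all_scores = [s for scores in channels.values() for s in scores]
    return {
        "per_channel": {name: round(mean(scores), 2) for name, scores in channels.items()},
        "overall": round(mean(all_scores), 2),
    }

summary = satisfaction_summary(feedback)
print(summary["overall"])  # 4.0
```

In practice you would weight channels differently (an IT support incident signals more than a neutral survey response), but even a flat average gives the product team a trend line to report against.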

Calibrating your usage goals

Remember that the Employee Self-Service Agent is not an all-purpose AI tool like Microsoft 365 Copilot, which your employees might use a dozen times a day. Instead, they may only need assistance from HR, IT support, or other tools and information sources a few times a week (or even less). Your usage targets should be calibrated accordingly.

At the same time, the more categories of assistance you add to the agent, the more your usage levels can grow—along with user expectations.

When we decided to add campus support (dining, transportation, and facilities-related needs and queries), one of the motivators was to provide information that users might need on a more regular basis. This addition helped us increase adoption and build daily usage habits for the agent among our employees.

Making the agent your front door for employee assistance

Your employees may have longstanding habits around the ways that they seek assistance, such as moving quickly to email a service request, or immediately engaging a live support technician. There might even be someone helpful in the office next to them that they lean on for IT support. We’re aware that breaking such habits can be a challenge.

That’s why we decided to change our own employee-assistance workflows. In the case of HR, we are planning to remove the option to email a centralized alias for help, which was the default in the past. This forcing function will instead prompt our employees to turn to the agent first for assistance, creating a “front door” for all our HR service requests.

For our IT support function, we are switching from a Virtual Agent chatbot to the Employee Self-Service Agent, which should provide users with a richer experience and a higher rate of resolution.

Of course, our main goal is for the agent to handle an employee’s issue without having to seek further assistance. But what happens when the agent cannot resolve their problem or handle their request? That’s why we’ve also implemented a “smooth handoff”—either to create a service request or connect the user to a live agent for specialized assistance.

There are three key steps in this process:

  1. The Employee Self-Service Agent can identify when the user has reached a point where they need to move to a higher level of assistance via a live agent or a service request. (Note that we also allow the employee to make that determination for themselves.)
  2. We then give them different options for how they want to connect to live support.
  3. When the employee is transferred to a live technician, the Employee Self-Service Agent is able to pass on the chat history from its session with the user. That way, the technician or staff support can quickly get up to speed on the situation, see what the employee has already asked about and tried, and start helping them immediately.

Enabling the employee to quickly and smoothly transition to a higher level of support without leaving the chat increases user satisfaction and makes them more likely to return to the agent the next time they need assistance.
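
The three-step handoff above can be sketched as follows. This is a conceptual model only; the real flow is implemented inside the agent, and the field names and escalation triggers here are assumptions.

```python
# Illustrative sketch of the handoff flow described above: detect the
# escalation point, honor the user's choice, and pass along the transcript.

def needs_escalation(turn: dict) -> bool:
    """Step 1: escalate when the agent fails or the user asks for a person."""
    return turn["agent_failed"] or turn["user_requested_human"]

def hand_off(chat_history: list, choice: str) -> dict:
    """Steps 2-3: route to the chosen channel with the session transcript
    attached, so the live technician can get up to speed immediately."""
    assert choice in ("live_agent", "service_request")
    return {"target": choice, "transcript": chat_history}

history = [
    {"user": "My badge reader isn't working", "agent": "Try re-seating the badge."},
    {"user": "Still broken, can I talk to someone?", "agent": "Connecting you..."},
]
ticket = hand_off(history, "live_agent")
print(ticket["target"], len(ticket["transcript"]))  # live_agent 2
```

The design choice that matters is in `hand_off`: the transcript travels with the escalation, so the employee never has to repeat what they already tried.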

Strategic outreach to employees

Of course, your workers, like ours, are busy with their day-to-day job functions. They may be resistant to trying a new tool or going through special training on how to access employee assistance. Or they may just not know about it.

Because of our regionally phased rollout of the agent, email was one of the most effective tools we used to connect with specific audiences and make them aware of the tool. With specific email lists, we could make sure that only employees in that phase of the rollout were seeing the message.
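
The mechanics of targeting those phase-specific email lists can be sketched like this. The region groupings and employee records below are invented for illustration; they are not our actual rollout phases.

```python
# Hypothetical phase-to-region mapping for a staged rollout. Each phase's
# audience includes all previously onboarded regions plus the new ones.
ROLLOUT_PHASES = {
    1: {"North America"},
    2: {"North America", "Europe"},
    3: {"North America", "Europe", "Asia Pacific"},
}

employees = [
    {"email": "a@example.com", "region": "Europe"},
    {"email": "b@example.com", "region": "Asia Pacific"},
    {"email": "c@example.com", "region": "North America"},
]

def recipients_for_phase(phase: int, people: list) -> list:
    """Return only the employees whose region is included in this phase,
    so announcement emails never reach users without access yet."""
    regions = ROLLOUT_PHASES[phase]
    return [p["email"] for p in people if p["region"] in regions]

print(recipients_for_phase(2, employees))  # ['a@example.com', 'c@example.com']
```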

A key aspect of getting our employees to adopt any new tool is reinforcement—the process of sustaining behavior change by providing ongoing incentives, recognition, and support. Some of the reinforcement strategies we used for the agent included:

  • Targeted communications: Emails and organizational messages invited employees to try the agent as they received access
  • Multi-channel campaigns: Promotion of the agent via portals, newsletters, digital signage, and more to keep it at the forefront of employee minds
  • Training: Workshops and micro-learning sessions about the agent
  • Social campaigns: Posts highlighting the tool to increase awareness and gather employee feedback (see details below)
  • Leadership support: Managers modeled usage of the agent and promoted it regularly
  • Processes: The tool was part of regular employee workflows

An example of a fun Viva Engage post that our internal communications team created to encourage daily usage of the Employee Self-Service Agent during the holiday season.

One very important communications channel that we used in our adoption efforts was Microsoft Viva Engage. We set up a private Engage community for the Employee Self-Service Agent, then populated it with each new wave of users as they were given access to the tool (eventually all were given access when the tool went companywide).

We used this channel for various kinds of messaging:

  • General product awareness
  • Updates on new or changing functionality
  • Answering questions or addressing frustrations (two-way dialogue between users and the product team)
  • Fun and helpful “tips and tricks” that users could try (these could come from the product team, leadership, or individual product “champions”)

We also inserted messages about the new agent into our regular communications with different audiences, including HR professionals, IT support personnel, and internal comms staff at the company. And we regularly messaged company leaders about it, so they could encourage their teams and direct reports to support the effort and evangelize for the tool.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two. That’s why ongoing communications to users was important.”

Prerna Ajmera, general manager, HR digital strategy and innovation

Of course, as a natural language chat tool, the Employee Self-Service Agent doesn’t require formalized training. The product itself is designed to guide users and allow them to experiment, simply by stating their needs in plain language. Most employees will already be familiar with AI tools like Microsoft 365 Copilot, so effectively using an AI-powered employee-assistance agent should be a low bar to clear.

Managing expectations

Your Employee Self-Service Agent rollout will be an ongoing journey as you add topic areas, functionalities, and other product features. Your product roadmap will evolve as you learn more about what your employees need with this kind of AI solution.

One factor to consider is how to set realistic user expectations about what the agent can do while the product matures and improves. As we gradually rolled out the tool, we messaged that the agent was in “early preview,” which helped avoid employee disappointment when it couldn’t handle a specific request.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two,” Ajmera says. “That’s why ongoing communications to users was important, as new capabilities were added and speed and accuracy improved.”

We also created messaging for early users indicating that their testing was an integral part of making the tool more effective. This created a positive feedback loop while also keeping employee expectations reasonable.

How we measured success

Carefully tracking and analyzing your success metrics throughout your development and release of the product is a high priority. Without this step, you are working in the dark.

At Microsoft, we identify the key performance indicators (KPIs) for a particular product and then use them as our North Star for any internal release. But the specifics of those KPIs can vary from product to product.

Graphic shows the improved success rates that employees have when seeking assistance from the Employee Self-Service Agent versus traditional support channels.
Early results from our internal deployment of the Employee Self-Service Agent showed marked increases in success rates when users sought assistance from an AI tool as compared with existing support channels.

For example, measuring monthly active users (MAU) might be extremely important for an all-purpose productivity tool like Microsoft 365 Copilot. But for an employee-assistance tool, the goal is not necessarily regular use, because employees aren’t constantly facing challenges that require help (we hope). Usage statistics may also be affected by certain events or cyclical needs, such as annual employee reviews or a major technology change (like a significant Windows update).

With this in mind, we identified certain key metrics for the Employee Self-Service Agent. In this case, the top KPIs included:

  • Percentage of support tickets deflected
  • Net satisfaction score
  • Response latency
  • Reliability
  • Total time savings
  • Total cost savings
  • Identified and prioritized issues (reported back to product group)

Overall, we focused on the rate at which employees were able to resolve issues without opening a support ticket, as this would likely generate the greatest return on time and cost savings. We came up with an overall target across the different verticals of 40% ticket deflection, and we’re making solid progress toward this goal as we continue to refine and improve the agent.
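
The ticket-deflection KPI described above reduces to a simple ratio. The session counts below are invented for illustration; only the 40% target comes from the text.

```python
# Ticket deflection: the share of help sessions the agent resolved
# without a support ticket, measured against the 40% target above.
DEFLECTION_TARGET = 0.40

def deflection_rate(resolved_by_agent: int, total_help_sessions: int) -> float:
    """Fraction of help sessions that never became a support ticket."""
    if total_help_sessions == 0:
        return 0.0
    return resolved_by_agent / total_help_sessions

rate = deflection_rate(350, 1000)  # hypothetical monthly numbers
print(f"{rate:.0%}", rate >= DEFLECTION_TARGET)  # 35% False
```

A monthly report of this single number per vertical (HR, IT, campus services) is enough to drive the stakeholder review meeting described below.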

Part of our measurement process is a monthly progress meeting of key project stakeholders, where all KPIs are evaluated to see if our targets are being met. If the results do not meet expectations, we identify the potential causes and discuss what adjustments need to be made to address these shortfalls.

Key takeaways

Here are some key things to remember when it comes to adoption efforts for your Employee Self-Service Agent:

  • Don’t reinvent the wheel. Most of your change management and adoption strategies for the agent will be the same across different regions and help categories.
  • Line up product sponsors. Finding leaders and others across the organization to help you promote the Employee Self-Service Agent within their own groups, functions, and regions can make a big difference in gaining employee trust and encouraging adoption.
  • Set up proper listening channels. You’ll want to gather as much feedback as possible from your employees as you roll out the agent so you can understand what is working well and what needs improvement. This kind of feedback loop can also make your employees feel heard and help them shape the tool.
  • Make the shift to agent-first help. Employee habits for seeking assistance can be resistant to change. We decided that turning off the “email to create a service ticket” workflow was a great way to nudge our workers to recognize the agent as the first option for their assistance needs.
  • Be strategic in your communications. Use tools like email, Viva Engage, and other appropriate communications channels to target your communications and encourage a two-way conversation with employees about the agent. Sharing fun tips and encouraging peer support are other ways to increase awareness and engagement with the product.
  • Identify your key metrics. We determined our benchmarks for success for this particular type of agent, then tracked them and made the results available to key stakeholders. This allowed us to measure the impact and effectiveness of the product.

Learn more

How we did it at Microsoft

Although some of the blog posts below are about adoption efforts related to Microsoft 365 Copilot, they can give you ideas on how we promote internal adoption of agentic AI products at Microsoft.

Further guidance for you

Begin your journey with the Employee Self-Service Agent

Agentic AI offers incredible promise to transform employee productivity, giving individuals access to powerful tools that enable them to accomplish more. We believe the Employee Self-Service Agent is another step along that path, allowing workers to get instant help with tasks that used to be cumbersome and time-consuming.

Now that you’ve read about our experience deploying the tool, it’s time to start your own journey. Successful implementation means your people will spend less time on the phone with support staff or hunting through web pages for help with routine employment tasks, and more time on their productive work, reducing job-related pain points and frustrations.

You can benefit from the lessons we’ve learned and the many helpful features and capabilities that we’ve built into this product, all of which are designed to make your implementation as fast, easy, and effective as possible.

“We’re excited to get the Employee Self-Service Agent out and into the hands of our customers, so that they can reap the same benefits that we’re already seeing from it,” says Brian Fielder, vice president of Microsoft Digital. “As we continue to refine the product and expand the number of verticals it can cover, we expect to realize exponential efficiency gains and capture even more cost savings across our entire organization.”

Key takeaways

Here are some of the essential top-level learnings we gleaned from our deployment of the Employee Self-Service Agent, which you should keep in mind as you start out on your own deployment path:

  • Identify and engage the right people. You’ll need buy-in and advocacy from leaders across the organization; the involvement of key stakeholders from HR, IT, legal, and compliance; and technical guidance from admins, license administrators, environment makers, and knowledge-base subject matter experts.
  • Develop your plan. Understand the major phases of governance, implementation, and adoption of the tool, and make sure that you have adequate resources and support for each phase.
  • Verify the quality of your content. Your chances of success will be better if you undertake a thorough content assessment to address the currency, accuracy, and structure of all relevant knowledge bases. Pay particular attention to the topics and tasks that are in greatest demand by employees when they access help services.
  • Consider a phased rollout. Releasing your Employee Self-Service Agent to progressively larger groups of workers across your organization allows you to gather data and feedback and improve the performance and relevance of the agent over time. You can also expand the number of categories that your agent covers as you go, increasing the impact and appeal of the tool.
  • Communicate strategically to promote adoption. Convincing employees to break longstanding habits when seeking help is a challenge. Email is helpful for targeting specific groups of employees, but be sure to use tools like Viva Engage to create community, answer questions, provide fun tips and tricks, and announce new capabilities and options.
  • Set clear goals and measure against them. Come up with a targeted set of KPIs that reflect your organization’s needs and aspirations, then develop a plan to capture data for each of these indicators and a regular reporting cadence to keep stakeholders informed of progress toward your goals.

Learn more

How we did it at Microsoft

Try it out

We’d like to hear from you!

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.
