Inside Track Blog http://approjects.co.za/?big=insidetrack/blog/ How Microsoft does IT

Conditioning our unstructured data for AI at Microsoft http://approjects.co.za/?big=insidetrack/blog/conditioning-our-unstructured-data-for-ai-at-microsoft/ Thu, 09 Apr 2026 16:05:00 +0000

The post Conditioning our unstructured data for AI at Microsoft appeared first on Inside Track Blog.

Anyone who has ever stumbled across an old SharePoint site or outdated shared folder at work knows firsthand how quickly documentation can fall out of date and become inaccurate.

Humans can usually spot the signs of outdated information and exclude it when answering a question or addressing a work topic. But what happens when there’s no human in the loop?

At Microsoft, we’ve embraced the power and speed of agentic solutions across the enterprise. This means we’re at the forefront of developing and implementing innovative tools like the Employee Self-Service Agent, a chat-based solution that uses AI to address thousands of IT support issues and human resources (HR) queries every month—queries that used to be handled by humans. Early results from the tool show great promise for increased efficiency and time savings.

In developing tools like this agent, we were confronted with a challenge: How do we make sure all the unstructured data the tool was trained on is relevant and reliable?

Many organizations are facing this daunting task in the age of AI. Unlike structured data, which is well organized and more easily ingested by AI tools, the sprawling and unverified nature of unstructured data poses some tricky problems for agentic tool development. Tackling this challenge is often referred to as data conditioning.

Read on to see how we at Microsoft Digital—the company’s IT organization—are handling data conditioning across the company, and how you can follow our lead in your own organization.

How AI has changed the game

We already fundamentally understand that the power of AI and large language models has changed the game for many work tasks. The way employee support functions is no exception to this sweeping change.

A photo of Finney.

“A tool like the Employee Self-Service Agent doesn’t know if something is true or false—it only sees information it can use and present. That’s why stale or outdated information is such a risk, unless you manage it up front.”

David Finney, director of IT Service Management, Microsoft Digital

Instead of relying on human agents to answer employee questions or resolve issues, we now have AI agents trained on vast corpora of data that can find the answer to a complex question in seconds.

But in our drive to give these tools access to everything they might need, they sometimes end up consuming information that isn’t helpful.

“A tool like the Employee Self-Service Agent doesn’t know if something is true or false—it only sees information it can use and present,” says David Finney, director of IT Service Management. “That’s why stale or outdated information is such a risk, unless you manage it up front.”

Before AI, support teams didn’t need to worry as much about the buried issues with unstructured content because a human could generally spot them or filter them out manually. After we turned these tools loose, they began reading everything, including:

  • Older or hidden SharePoint content that humans would never find—but AI can
  • Large knowledge base articles with buried incorrect information
  • Region-specific content that’s not properly labeled

“For example, humans never saw the old, decommissioned SharePoint sites because they were automatically redirected,” says Kevin Verdeck, a senior IT service operations engineer. “But AI definitely could find them, and it surfaced ancient information that we didn’t even know was still out there.”

Data governance is the key

A major part of the solution to this problem is better governance. We had to get a handle on our data.

A photo of Cherel.

“We needed to determine the owners of the sites and then establish processes for reviewing content, updating it, and defining how it should be structured. I would highly encourage that our customers think about governance first when they are launching their own AI tools, because everything flows from it.”

Olivier Cherel, senior business process manager, Microsoft Digital

The first step was a massive cleanup effort, including removing decommissioned SharePoint sites and deleting references to retired programs and policies. The next step was making sure all content had ownership assigned to establish who would be maintaining it. This was followed by setting up schedules for regular content updates (lifecycle management).
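The lifecycle-management step described above can be sketched as a simple script that flags content with no assigned owner or an overdue review date. This is only an illustration; the field names and review interval here are invented, not an actual Microsoft system.

```python
from datetime import date, timedelta

# Hypothetical review interval; real governance policies will differ.
REVIEW_INTERVAL = timedelta(days=180)

def needs_attention(item, today):
    """Flag content with no owner or an overdue review date."""
    if item.get("owner") is None:
        return "no owner assigned"
    if today - item["last_reviewed"] > REVIEW_INTERVAL:
        return "review overdue"
    return None

# Hypothetical content records; a real pass would pull these from a CMS API.
inventory = [
    {"title": "VPN setup guide", "owner": "it-support", "last_reviewed": date(2025, 11, 1)},
    {"title": "Retired travel policy", "owner": None, "last_reviewed": date(2023, 2, 14)},
]

for item in inventory:
    reason = needs_attention(item, today=date(2026, 4, 9))
    if reason:
        print(f"{item['title']}: {reason}")  # prints only the retired policy
```

A scheduled job like this turns lifecycle management from a one-time cleanup into a recurring check.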

Governance was the first priority for IT content, according to Olivier Cherel, a senior business process manager in Microsoft Digital.

“We had no governance in place for all the SharePoint sites, which were managed by the various IT teams,” Cherel says. “We needed to determine the owners of the sites and then establish processes for reviewing content, updating it, and defining how it should be structured. I would highly encourage that our customers think about governance first when they are launching their own AI tools, because everything flows from it.”

Content governance was also a huge challenge for other support areas, such as human resources. A coordinated approach was needed.

“HR content is vast, distributed across multiple SharePoint sites, and not everything has a clear owner,” says Shipra Gupta, an engineering PM lead in Human Resources who worked on the Employee Self-Service Agent project. “So, we collaborated with our content and People Operations teams to create a true content strategy: one source of truth, no duplication, with clear ownership and lifecycle management.”

Cherel observes that this process forces teams to think about their support content in a totally different way.

“People realize they need a new function on their team: content management,” he says. “You can’t simply rely on the knowledge found in the technicians’ heads anymore.”

Adding structure to the unstructured data

The simple truth is that part of what makes unstructured data so difficult for agentic AI tools to deal with is that it’s disorganized.

A photo of Gupta.

“Our HR Web content already had tagging for many policy documents, which helped us get started. But it wasn’t consistent across all content, so improved tagging became a big part of our governance effort.”

Shipra Gupta, engineering PM lead, Human Resources

AI works best with content that has as many of the following characteristics as possible:

  • Document structure, including:
    • Clear headers and sections
    • Page-level summaries
    • Ordered steps and lists
    • Explicit labels for processes
    • HTML tags (which AI can see, but humans can’t)
  • Structured metadata, including:
    • Region codes (e.g., US-only policies)
    • Device-specific tags
    • Secure device classification
    • Country-based hardware procurement policies and HR rules

This kind of formatting and metadata allows the AI tool to more clearly parse and sort the information, meaning its answers are going to have a much higher accuracy level (even if it might be a little slower to return them).
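To make the characteristics above concrete, here is a minimal sketch of what a structured article record might look like once headers, summaries, and metadata are added, along with the kind of metadata filter an agent could apply before answering. The field names and schema are invented for illustration, not an actual Microsoft tagging standard.

```python
# A hypothetical structured article record; real tagging schemas will differ.
article = {
    "title": "Requesting a secure laptop",
    "summary": "How employees request a secure-classified device.",
    "sections": [
        {"header": "Eligibility", "steps": ["Confirm your role requires secure access"]},
        {"header": "Ordering", "steps": ["Open the procurement portal", "Select 'secure device'"]},
    ],
    "metadata": {
        "region": "US",            # region code, e.g., a US-only policy
        "device_tag": "laptop",
        "classification": "secure",
        "last_reviewed": "2026-01-15",
    },
}

def applies_to(article, region, device):
    """Return True if the article's metadata matches the user's context."""
    meta = article["metadata"]
    return meta["region"] == region and meta["device_tag"] == device

print(applies_to(article, region="US", device="laptop"))  # True
print(applies_to(article, region="DE", device="laptop"))  # False
```

With metadata like this in place, an agent can exclude region-specific content before it ever reaches the answer, rather than relying on the model to notice the mismatch.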

“A good example here is tagging,” Gupta says. “Our HR Web content already had tagging for many policy documents, which helped us get started. But it wasn’t consistent across all content, so improved tagging became a big part of our governance effort.”

Be sure that as part of your content review, you’re setting aside the time and resources to add this kind of structure to your unstructured data. The investment will pay off in the long run.

Using AI to help condition data for use

As AI tools grow more sophisticated, we’re applying them directly to AI-related challenges, including the challenge of unstructured data itself.

“Right now, these efforts are primarily human-led, but we are applying AI to, for example, help write knowledge base articles,” Cherel says. “Also, we’re starting to use AI to determine where we have content gaps, and to analyze the feedback we’re getting on the tool itself. If we just rely on humans, it’s not going to scale. We need to leverage AI to stay on top of things and keep improving the tools.”

Essentially, the future of such technology is all about using AI to improve itself.

“We’re looking at building an agent to help validate content,” Finney says. “We can use it to check for outdated references, old processes, or abandoned terms that are no longer used. Essentially, we’ll have AI do a readiness check on the content that it is consuming.”
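A readiness check like the one Finney describes could start as simply as scanning content for retired terms and decommissioned program names. The term list below is invented for illustration; a real check would load it from a maintained source.

```python
import re

# Hypothetical retired terms; a real agent would maintain this list centrally.
RETIRED_TERMS = ["Windows 7", "Skype for Business", "legacy VPN portal"]

def readiness_issues(text):
    """Return the retired terms found in a piece of content."""
    return [term for term in RETIRED_TERMS
            if re.search(re.escape(term), text, flags=re.IGNORECASE)]

doc = "To join the meeting, open Skype for Business and sign in."
print(readiness_issues(doc))  # ['Skype for Business']
```

Flagged documents can then be routed to their owners for review rather than silently consumed by the agent.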

Ultimately, the better the data is conditioned, the more accurate and relevant the agent’s responses will be. And that will make the end user—the truly important human in the loop—much happier with the final outcome.

Key takeaways

We’ve highlighted some insights to keep in mind as you consider how to condition your own organization’s data for ingestion by AI tools:

  • Unstructured data becomes a business risk when AI is in the loop. AI agents consume everything they can access, including outdated, hidden, or conflicting content, making data conditioning a critical prerequisite for agentic solutions.
  • AI highlights content issues that were previously invisible. Decommissioned SharePoint sites, outdated policies, and region-specific content without proper labels all became visible after AI agents began scanning across systems.
  • Governance is a vital part of the conditioning process. Assigning clear content ownership and establishing lifecycle management are essential steps in ensuring the content being fed to AI tools is of high quality and is well managed.
  • Adding structure to data dramatically improves AI accuracy. Clear document formatting, consistent tagging, and rich metadata help AI agents return more relevant, reliable answers.
  • AI will increasingly be used to condition and validate the data it consumes. Microsoft is already exploring using AI to identify content gaps, analyze feedback, and flag outdated information, creating a continuous improvement loop that can scale faster than human review alone.

Harnessing AI: How a data council is powering our unified data strategy at Microsoft http://approjects.co.za/?big=insidetrack/blog/harnessing-ai-how-a-data-council-is-powering-our-unified-data-strategy-at-microsoft/ Thu, 09 Apr 2026 16:00:00 +0000

The post Harnessing AI: How a data council is powering our unified data strategy at Microsoft appeared first on Inside Track Blog.

Information technology is an ever-evolving landscape. Artificial Intelligence is accelerating that evolution, providing employees with unprecedented access to information and insights. Data-driven decision making has never been more critical for businesses to achieve their goals.

In light of this priority, we have established a Microsoft Digital Data Council to help accelerate our companywide AI-powered transformation.

Our data council is a cross-functional team with representation from multiple domains within Microsoft, including Microsoft Digital, the company’s IT organization; Corporate, External, and Legal Affairs (CELA); and Finance.

A photo of Tripathi.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation. It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Naval Tripathi, principal engineering manager, Microsoft Digital

Our data council’s mission is to drive transformative business impact by establishing a cohesive data strategy across Microsoft Digital, empowering interconnected analytics and AI at scale. Our vision is to guide our organization toward Frontier Firm maturity through a clear blueprint for high-quality, reliable, AI-ready data delivered on trusted, scalable platforms.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation,” says Naval Tripathi, principal engineering manager in Microsoft Digital. “It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Enterprise IT maturity

This article is part of a series on Enterprise IT maturity in the era of agents. We recommend reading all four of these articles to gain a comprehensive view of how your organization can transform with the help of AI and become a Frontier Firm.

  1. Becoming a Frontier Firm: Our IT playbook for the AI era
  2. Enterprise AI maturity in five steps: Our guide for IT leaders
  3. The agentic future: How we’re becoming an AI-first Frontier Firm at Microsoft
  4. AI at scale: How we’re transforming our enterprise IT operations at Microsoft (this story)

Our evolving data strategy

Over the past two decades, we at Microsoft—along with other large enterprises—have continuously evolved our data strategies in search of the right balance between control and agility. Early approaches were highly decentralized, with different teams owning and managing their own data assets. While this enabled local optimization, it also resulted in inconsistent quality and limited enterprise-wide insight.

Our subsequent shift toward centralized data platforms brought much-needed standardization, security, and scalability. However, as data platforms grew more sophisticated, ownership often drifted away from the business domains closest to the data, slowing responsiveness and diluting accountability.

Today, we and other leading companies are embracing a more balanced, federated approach, often described as a data mesh. Rather than forcing all our data into a single centralized system or allowing unchecked decentralization, the data mesh formalizes domain ownership while embedding governance, quality, and interoperability directly into shared platforms.

With this approach, our domain teams publish data as well-defined, discoverable products, while common standards for security, metadata, and compliance are enforced through automation rather than manual processes. This model preserves enterprise trust and consistency without sacrificing speed or autonomy.

By adopting a data mesh mindset, we can scale analytics and AI more effectively across the organization while still keeping ownership closely connected to the business focus. The result is a system that supports innovation at the edges, strong governance at the core, and seamless collaboration across domains, enabling the transformation of data from a technical asset to a strategic, enterprise-wide capability.

Quality, accessibility, and governance

To scale enterprise data and AI, organizations must first ensure their data is trusted, discoverable, and responsibly governed. At Microsoft Digital, our data strategy is designed to create data foundations that power intelligent applications and effective decision making across the company.

A photo of Uribe.

“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools. Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Miguel Uribe, principal PM manager, Microsoft Digital

By implementing a data mesh strategy at scale, we aim to unlock valuable data insights and analytics, enabling advanced AI scenarios. Our data council focuses on three core dimensions that make AI-ready data possible:

  • Quality: Making sure enterprise data is reliable and complete
  • Accessibility: Enabling secure and discoverable access to data
  • Governance: Protecting and managing our data responsibly

Together, these dimensions form the foundation for scalable innovation and AI-powered data use. They connect data silos and ensure consistent, high‑quality access across the enterprise—enabling both humans and AI systems to work from the same trusted data foundation. As AI use cases mature, this foundation allows AI agents to retrieve and reason over data through enterprise endpoints, while supporting advanced analytics, data science, and broader technology.

“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools,” says Miguel Uribe, a principal PM manager in Microsoft Digital. “Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Quality

AI-ready data is available, complete, accurate, and high-quality. By adopting this standard, our data scientists, engineers, and even our AI agents are better able to locate, process, and govern the information needed to drive our organization and maximize AI efficiencies.

By utilizing Microsoft Purview, our data council can oversee the monitoring of data attributes to ensure fidelity. It also monitors parameters to enforce standards for accuracy and completeness.
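The attribute monitoring described above can be approximated in a few lines. The sketch below computes a simple completeness score per field; it is not an actual Microsoft Purview API call, and the field names are invented.

```python
def completeness(rows, required_fields):
    """Fraction of rows in which each required field is present and non-empty."""
    scores = {}
    for field in required_fields:
        filled = sum(1 for r in rows if r.get(field) not in (None, ""))
        scores[field] = filled / len(rows)
    return scores

# Hypothetical asset records with gaps in ownership and schema metadata.
rows = [
    {"asset_id": "A1", "owner": "finance", "schema_version": "2"},
    {"asset_id": "A2", "owner": "", "schema_version": "2"},
    {"asset_id": "A3", "owner": "hr", "schema_version": None},
]

print(completeness(rows, ["owner", "schema_version"]))  # both fields score 2/3
```

Scores like these can be tracked over time and compared against enforced thresholds, which is the role a governance platform plays at enterprise scale.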

Accessibility

Ensuring that our employees get access to the information they need while prioritizing security is a foundational element of our enterprise data strategy. Microsoft Fabric allows us to unify our organization’s siloed data in a single “mesh” that enables advanced analytics, data science, data visualization, and other connected scenarios.

Microsoft Purview then gives us the ability to democratize that data responsibly. By implementing a data mesh architecture, our employees can work confidently, unencumbered by siloed or inaccessible data, and with the assurance that the data they’re working with is secure.

A graphic shows how the data mesh architecture allows employees to access data they need, with platform services and data management zones surrounding this architecture.
The data mesh architecture enables our employees to do their work efficiently while preventing the data they’re working on from becoming siloed.

The data mesh connects and distributes data products across domains, enabling shared data access and compute while scaling beyond centralized architectures.

Platform services are standardized blueprints that embed security, interoperability, policies, standards, and core capabilities—providing guardrails that enable speed without fragmentation.

Data management zones provide centralized governance capabilities for policy enforcement, lineage, observability, compliance, and enterprise-wide trust.  
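One way to picture a domain-owned data product with governance metadata embedded, per the mesh description above, is as a typed record. All field names here are invented for illustration; they do not reflect an actual Microsoft schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-published data product with governance metadata attached."""
    name: str
    domain: str
    owner: str
    schema: list
    classification: str = "internal"        # default protection level
    lineage: list = field(default_factory=list)  # upstream sources

product = DataProduct(
    name="employee-headcount",
    domain="hr",
    owner="hr-analytics",
    schema=["org", "month", "headcount"],
    classification="confidential",
    lineage=["hr.workday.extract"],
)
print(product.domain, product.classification)  # hr confidential
```

Keeping ownership, classification, and lineage on the product itself is what lets shared platforms enforce policy automatically rather than through manual review.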

Governance

As organizations scale AI capabilities, strong governance becomes essential to ensure security, compliance, and ethical data use. Data governance—which includes establishing data policies, ensuring data privacy and security, and promoting ethical AI usage—is critical, as is compliance with General Data Protection Regulation (GDPR) and Consumer Data Protection Act (CDPA) regulations, among others.

However, governance is not only a technical capability; it’s also a cultural commitment.

Responsible data use must be embedded into the way teams manage data and build AI solutions. Through Microsoft Purview, we implemented an end-to-end governance framework that automates the discovery, classification, and protection of sensitive data across the enterprise data landscape.

This unified approach allows teams to innovate confidently, knowing that the data powering their insights and AI systems is trusted and protected, as well as responsibly managed.

“AI systems are only as reliable as the data that powers them,” Uribe says. “By investing in trusted and well-managed data, we accelerate not only the adoption of AI tools but our ability to generate meaningful insights and intelligent outcomes.”

The data catalog as the discovery layer

By serving as a common discovery layer for humans and AI, the data catalog ensures that governance translates directly into speed, accuracy, and trust at scale.

A unified data strategy only succeeds if both people and AI systems can consistently find the right data. At Microsoft, this is enabled by our enterprise data catalog, which operationalizes the standards set by our data council. 

For business users, the catalog provides intuitive search, ownership transparency, and trust signals—enabling confident self‑service analytics. For AI agents, the same catalog exposes machine‑readable metadata, allowing agents to programmatically discover canonical datasets, validate schema and freshness, and respect governance constraints.
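An agent’s catalog lookup might look like the following sketch, where a plain dictionary stands in for a real discovery API. The dataset names, fields, and policy checks are assumptions for illustration only.

```python
from datetime import date

# Hypothetical catalog entries exposing machine-readable metadata.
CATALOG = {
    "sales.orders": {
        "canonical": True,
        "schema": ["order_id", "amount", "region"],
        "refreshed": date(2026, 4, 8),
        "allowed_purposes": {"analytics", "reporting"},
    },
}

def resolve(dataset, purpose, max_age_days, today):
    """Return the catalog entry only if it is canonical, fresh, and permitted."""
    entry = CATALOG.get(dataset)
    if entry is None or not entry["canonical"]:
        return None
    if (today - entry["refreshed"]).days > max_age_days:
        return None
    if purpose not in entry["allowed_purposes"]:
        return None
    return entry

entry = resolve("sales.orders", purpose="analytics", max_age_days=7, today=date(2026, 4, 9))
print(entry is not None)  # True
```

Because the freshness and purpose checks run before any data is read, governance constraints become part of discovery rather than an afterthought.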

Our role as Customer Zero

In Microsoft Digital, we operate as Customer Zero for the company’s enterprise solutions, so that our customers don’t have to.

That means we do more than adopt new products early. We deploy them at enterprise-scale, operate them under real‑world constraints, and hold them to the same standards our customers expect. The result is more resilient, ready‑to‑use solutions and a higher quality bar for every enterprise customer we serve.

A photo of Baccino.

“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution. That’s how enterprise readiness becomes real.”

Diego Baccino, principal software engineering manager, Microsoft Digital

Our data council embodies this Customer Zero mindset through its Enterprise Readiness initiative. By engaging product engineering as a unified enterprise voice, the council drives strategic conversations that surface operational blockers, influence roadmap prioritization, and ensure new and existing data solutions are truly ready for enterprise use.

These learnings are then shared broadly across Microsoft Digital to accelerate adoption, reduce duplication, and scale proven patterns across teams.

“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution,” says Diego Baccino, a principal software engineering manager in Microsoft Digital and a member of the council. “That’s how enterprise readiness becomes real.”

This work is deeply integrated with our AI Center of Excellence (CoE), where Customer Zero principles are applied to accelerate AI outcomes responsibly. Together, the AI CoE and the data council focus on improving data documentation and quality—foundational capabilities that are required to make AI feasible, trustworthy, and scalable across the enterprise.

By grounding AI innovation in measurable data quality and governance standards, Microsoft Digital ensures that experimentation can safely mature into production‑ready solutions. The partnership between our data council, our AI CoE, and our Responsible AI (RAI) Council is essential to our broader data and AI strategy.

“AI readiness isn’t aspirational—it’s operational,” Baccino says. “By measuring the health of our data, setting clear quality baselines, and using those signals to guide product and platform decisions, we turn data into a strategic asset and AI into a repeatable capability.”

Together, these teams exemplify what it means to be Customer Zero: Transforming enterprise experience into action, governance into acceleration, and data into durable competitive advantage.

Advancing our data culture

Our data council plays a pivotal role in advancing the organization’s transition from data literacy to enterprise data and AI capability. In conjunction with our AI CoE, it creates curricula and sponsors learning pathways, operational practices, and community programs to equip our employees with the skills and mindset required to thrive in a data- and AI-centric world.

While early efforts focused on improving data literacy, our data council’s mission has evolved to enable data and AI capability at scale together with our AI CoE—where employees not only understand data but can effectively apply it to build, operate, and govern intelligent solutions.

“Our focus is not just teaching our teams about data. It is enabling employees to apply data to create AI-driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Miguel Uribe, principal PM manager, Microsoft Digital

Our curriculum includes high-level courses on data concepts, applications, and extensibility of AI tools like Microsoft 365 Copilot, as well as data products like Microsoft Purview and Microsoft Fabric.

By facilitating AI and data training, offering internally focused data and AI certifications, and fostering internal community engagement, our council ensures that employees develop the capabilities required to responsibly build and operate AI-powered solutions. Achieving data and AI certifications not only promotes career development through improved data literacy, it also enhances the broader data-driven culture within our organization.

“We recognize that AI capability is built when data skills are applied directly to real AI scenarios and business outcomes—not when learning exists in isolation,” Uribe says. “Our focus is not just teaching our teams about data; it is enabling employees to apply data to create AI‑driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Lessons learned

Our data council was created to develop and execute a cohesive data strategy across Microsoft Digital and to foster a strong data culture within our organization. Over time, several critical lessons have emerged.

Executive sponsorship enables transformation

Executive sponsorship is a key element to ensure implementation and adoption of a data strategy. Our leaders are committed to delivering and sustaining a robust data strategy and culture and have been effective champions of the council’s work.

“Leadership provides support and reinforcement of the council’s mission, as well as guidance and clarity related to diverse organizational priorities,” Baccino says.

Cross-functional collaboration accelerates impact

Our council’s work has also benefited from the diverse representation offered by different disciplines across our organization. Embracing diverse perspectives and understanding various organizational priorities is critical to implementing a successful data strategy and culture in a large and complex organization like Microsoft Digital.

Modern platforms allow for scalable AI productivity

Technology and architecture also play a critical role in enabling enterprise data and AI capability. Platforms like Microsoft Purview and Microsoft Fabric provide the governance, discovery, and analytics infrastructure required to create trusted, AI-ready data ecosystems.

Combined with strong leadership support and community engagement, these platforms allow our organization to move beyond isolated data projects toward connected, enterprise-wide intelligence.

As our organization continues to evolve, our data council’s strategic work and valuable insights will be crucial in shaping the future of data-driven decision making and AI transformation at Microsoft.

Key takeaways

Here are some things to keep in mind as you contemplate forming a data council to help you manage and scale AI impacts responsibly at your own organization:

  • A data mesh strikes the balance enterprises have been chasing. By formalizing domain ownership while enforcing standards through shared platforms, you avoid both chaotic decentralization and slow, over-centralized control.
  • Governance is an accelerator when it’s automated and embedded. Using platforms like Microsoft Purview and Microsoft Fabric, governance shifts from a manual gatekeeping function to a built‑in capability that enables faster, trusted analytics and AI.
  • AI systems are only as strong as their discovery layer. A unified enterprise data catalog allows both people and AI agents to find, trust, and use data consistently—turning standards into operational speed.
  • Customer Zero turns theory into enterprise‑ready execution. By operating its own data and AI platforms at scale, Microsoft Digital provides real telemetry and practical feedback that directly shapes product readiness.
  • Building AI capability is a cultural effort, not just a technical one. Our data council’s focus on applied learning, certification, and real-world AI scenarios ensures data skills translate into durable business outcomes.
  • AI scale exposes the cost of fragmented data ownership. A data council cuts through silos by aligning priorities, resolving tradeoffs, and concentrating investment on the data assets that matter most for AI impact.
  • Shared metrics create shared ownership. Publishing data quality and AI‑readiness scores at the leadership level reinforces accountability and positions data as a core enterprise asset.

Microsoft CISO advice: The importance of a written AI safety plan http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-the-importance-of-a-written-ai-safety-plan/ Thu, 09 Apr 2026 16:00:00 +0000

The post Microsoft CISO advice: The importance of a written AI safety plan appeared first on Inside Track Blog.

Yonatan Zunger, CVP and Deputy CISO for Microsoft, has spent his career considering complex questions of security and privacy while building platform infrastructure and solutions. His experience underpins his advice on how to build a safety plan for working with AI. First and foremost, his advice is to have a written plan.

“Make it an expectation in your organization that people will create safety plans and have them for everything,” Zunger says. “People get so excited about having clarity in front of them that they end up making much more systematic, careful plans, and the rate of errors goes down dramatically.”

Watch this video to see Yonatan Zunger discuss his advice for creating an AI safety plan. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=H5reZ0uw0EA.)

Key takeaways

Here are questions and ideas to consider as you create a safety plan for your AI systems:

  • Define the problem. What problem are you trying to solve? A simple and clear problem statement is always a great starting point before building anything, including an AI agent.
  • Outline the solution. What is the basis of your solution? Can you explain your solution to an end user? What does a developer or administrative user of your solution need to know about what it is and does?
  • List the things that can go wrong. What can go wrong with your solution? Creating this list is the first step to figuring out how to deal with those issues.
  • Document your plan. What is your plan to address identified concerns? Identify the process you will follow when something goes wrong.
  • Draft your plan early and update it as your solution matures. Your safety plan can be as simple as a list or outline and should evolve as you prepare to build your solution.
  • Get feedback and buy-in. When you review the plan with stakeholders and leaders in your team and organization, you may uncover risks or issues you had not thought of. You also build awareness and agreement on what to do when something goes wrong.
  • Make a template and build its use into your processes. This tip is for anyone who leads a team or influences process development. Encourage using a safety template in all your projects to bring clarity and structure to how you work with AI.
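The checklist above lends itself to a simple structured template. Here's a hypothetical sketch in Python; the field names and example values are invented for illustration and are not a Microsoft standard:

```python
# Hypothetical safety-plan template mirroring the checklist above.
# All fields and example values are illustrative.
from dataclasses import dataclass, field

@dataclass
class SafetyPlan:
    problem_statement: str                                # Define the problem
    solution_outline: str                                 # Outline the solution
    failure_modes: list = field(default_factory=list)     # What can go wrong
    mitigations: dict = field(default_factory=dict)       # failure mode -> response plan
    reviewers: list = field(default_factory=list)         # Stakeholders who signed off

    def unmitigated_risks(self):
        """Failure modes with no documented response yet."""
        return [f for f in self.failure_modes if f not in self.mitigations]

plan = SafetyPlan(
    problem_statement="Answer routine HR questions via a chat agent",
    solution_outline="Retrieval over approved HR policy documents",
    failure_modes=["hallucinated policy answer", "stale source document"],
    mitigations={"hallucinated policy answer": "cite sources; human escalation path"},
)
print(plan.unmitigated_risks())  # -> ['stale source document']
```

A template like this can live next to the project's code and be reviewed with it, which is one way to build its use into your processes.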

The post Microsoft CISO advice: The importance of a written AI safety plan appeared first on Inside Track Blog.

]]>
23016
Olutunde Makinde: From Lagos to Redmond, a Microsoft IT engineer’s journey http://approjects.co.za/?big=insidetrack/blog/olutunde-makinde-from-lagos-to-redmond-a-microsoft-it-engineers-journey/ Thu, 02 Apr 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22855 A career in Microsoft Digital, the company’s internal IT organization, puts employees at the center of one of the world’s most complex and forward‑leaning enterprise environments. This is the team that runs Microsoft on Microsoft technology and services—maintaining more than a million computing devices, enabling global collaboration, and shaping the employee experience for more than […]

The post Olutunde Makinde: From Lagos to Redmond, a Microsoft IT engineer’s journey appeared first on Inside Track Blog.

]]>
A career in Microsoft Digital, the company’s internal IT organization, puts employees at the center of one of the world’s most complex and forward‑leaning enterprise environments. This is the team that runs Microsoft on Microsoft technology and services—maintaining more than a million computing devices, enabling global collaboration, and shaping the employee experience for more than 200,000 people.

To accomplish these huge tasks, it’s essential to cultivate a range of perspectives, expertise, and lived experiences.

Olutunde Makinde is an example of this.

A photo of Makinde.

“A friend once laughed at me back in college when I said I wanted to work at Microsoft, like it was impossible. But I knew I could achieve the impossible if I could just be focused. I never gave up.”

Olutunde Makinde, senior service engineer, Microsoft Digital

Makinde, a senior service engineer in Microsoft Digital, came to the company the long way around—roughly 7,000 miles away from the Redmond, Washington, headquarters, in fact. He’s originally from Lagos, Nigeria.

As a global organization, Microsoft builds teams where people with different experiences and life journeys actively influence how products, services, and internal platforms are designed. Makinde, commonly known around the office as “Tunde” (“rhymes with Sunday,” he notes), embodies that diverse approach, bringing his unique insights and experiences to critical work at the company.

“A friend once laughed at me back in college when I said I wanted to work at Microsoft, like it was impossible,” Makinde says. “But I knew I could achieve the impossible if I could just be focused. I never gave up.”

Launching an IT career in Nigeria

Makinde’s journey to Microsoft began with earning a degree in computer engineering in Lagos, after which he found work as a network engineer. He spent the next several years developing his skills through certifications and other learning opportunities.

“I did a lot of self-paced training, learning how to configure Cisco routers. Eventually I became a Cisco-certified network professional (CCNP),” Makinde says. “Around that time, I had a friend who was preparing for Windows Server 2008 certifications, and through his study materials I started learning more about Microsoft and its products.”

Makinde’s first direct encounter with Microsoft came in 2014, when the company he worked for received a contract to deploy the first Microsoft Azure cloud installation in Nigeria.  

“I spent the last day of 2014 and the first day of 2015 at the customer site, figuring out how to connect their on-premises network to Azure,” Makinde says. “It had never been done before in Nigeria, and taking up that challenge really propelled me into the world of Microsoft-specific technology.”

From there, Makinde set his sights on a career at Microsoft. He parlayed his initial exposure to cloud architecture into a focus on Azure, as well as Amazon Web Services. After spending some time in the United Kingdom, he achieved his goal when he was hired by the Microsoft Digital team in 2022. He moved to the United States in 2025.

He credits support from his family, especially his wife, with helping him achieve his dreams.

“My wife was a pillar of support through every career transition, from Nigeria to the UK to the United States,” Makinde says. “She believed in me when I faced rejections, celebrated with me when I finally got the offer, and now keeps me grounded whenever work gets intense. I couldn’t have made this journey without her.”

Making an impact from day one

Kathren Korsky, a principal technical program manager in Microsoft Digital and Makinde’s hiring manager, remembers the impression he made right away. It was clear that Makinde’s experience and technical background were major assets.

“What caught my attention was how well-prepared he was for the conversation and how well he communicated,” Korsky says. “The stories he shared about his work with Azure deployment in Nigeria really drew my interest. But I was also intrigued by how he was able to bridge technology with the business world, working with different banks across the continent to gather requirements, understand them, and build solutions.”

Upon being hired at Microsoft, he initially worked remotely from the UK on a Redmond-based device and application management team. The team was looking to deploy Cloud PC internally and needed a system in which employees could request access and get approvals to use Cloud PCs.

“He was able to stand up a full Power Automate workflow within a short period, and with a very high degree of quality,” Korsky says. “Rarely did anyone find any defects or bugs in his system.”

Makinde’s designs drove value moving forward as well, as the team made updates to his initial workflows.

A photo of Korsky.

“His design was so strong that we were basically able to follow exactly what he had created in Power Platform and build that exact same design in ServiceNow. It really expedited that whole process.”

Kathren Korsky, principal technical program manager, Microsoft Digital

ServiceNow was more commonly used for systems that involved access requests and approvals, but when the team initiated a platform migration from Power Automate, they found Makinde’s original design was durable enough to weather the shift.

“His design was so strong that we were basically able to follow exactly what he had created in Power Platform and build that exact same design in ServiceNow,” Korsky says. “It really expedited that whole process.”

Driving efficiency and managing change

Since moving to the United States to work at company headquarters, Makinde has continued to push important projects forward—working with different stakeholders to deploy policy changes across Microsoft, managing the Change Advisory Board (CAB) intake process, and driving configuration updates for security and first-party product deployments.

“There’s a lot of diligence required to see the edge cases happening, to pay attention to them, and to watch out for potential problems. Tunde stops rollouts regularly to flag potential defects or risks, which prevents issues from interrupting our work and reducing productivity.”

Jeff Duncan, principal service engineering manager, Microsoft Digital

Makinde learned how to assess change requests and understand risk profiles, as well as enforce best practices for managing change within the security environment. Within about a year, he was able to take the lead in the space and own the deployment process.

A single misconfigured policy can cause major disruption. Makinde’s role puts him in position to be the checkpoint that prevents incidents before they happen.

“There’s a lot of diligence required to see the edge cases happening, to pay attention to them, and to watch out for potential problems,” says Jeff Duncan, principal service engineering manager in Microsoft Digital and Makinde’s manager. “Tunde stops rollouts regularly to flag potential defects or risks, which prevents issues from interrupting our work and reducing productivity.”

Softer skills like transparency, collaboration, and clear communication across levels and teams are key aspects of Makinde’s work as well.

“Tunde is thoughtful and detail-oriented, and he’s very good at explaining the decision-making process when he provides overviews for leadership,” Duncan says. “There’s rational, logical reasoning behind the decisions he makes.”

Makinde has implemented new efficiencies in how he manages the CAB and deployment service using AI. This includes CABBIE—an AI-powered agent that automates CAB communications. For Intune deployments, he uses AI to streamline deployment coordination and package reviews. These innovations reflect our Customer Zero approach to AI adoption here in Microsoft Digital.

“We run weekly CAB meetings to review change requests. That comes with a lot of communication work — status updates, follow-ups, coordination with stakeholders. It was all manual,” Makinde says. “CABBIE pulls the data from Azure DevOps, generates the emails, updates requests, and logs approvals automatically. It saves time and reduces errors.”
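While CABBIE itself isn't public, the communication step it automates can be illustrated with a small sketch. The record shape and field names below are hypothetical; in practice the change requests would be pulled from Azure DevOps rather than hard-coded:

```python
# Illustrative sketch of automated CAB status communication.
# Record fields ("id", "title", "state", "owner") are invented for this example.

def cab_status_email(change_requests):
    """Group change requests by state and render a plain-text status summary."""
    by_state = {}
    for cr in change_requests:
        by_state.setdefault(cr["state"], []).append(cr)
    lines = ["Weekly CAB status"]
    for state in sorted(by_state):
        lines.append(f"\n{state} ({len(by_state[state])}):")
        for cr in by_state[state]:
            lines.append(f"  - {cr['id']}: {cr['title']} (owner: {cr['owner']})")
    return "\n".join(lines)

requests = [
    {"id": "CR-101", "title": "Intune policy update", "state": "Approved", "owner": "tunde"},
    {"id": "CR-102", "title": "Firewall rule change", "state": "Pending", "owner": "kathren"},
]
email = cab_status_email(requests)
print(email)
```

Generating the summary from the system of record, instead of writing it by hand, is what removes both the manual effort and the transcription errors.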

Success at Microsoft Digital: Aptitude and curiosity

As the organization at the center of the company’s own digital transformation, we in Microsoft Digital function as a living showcase of what’s possible with Microsoft technology. Our team tests new capabilities at enterprise scale as Customer Zero for Microsoft, identifying gaps and providing insights to ensure our customers benefit from what we’ve learned.

Because the impact of Microsoft Digital extends far beyond internal systems, team members have to set the standard for digital excellence. They must demonstrate what enterprise transformation looks like in practice and empower customers with the confidence to pursue their own modernization journeys.

Hiring talented people like Makinde is essential to this mission.

“There are three core traits I look for when hiring—aptitude, attitude, and curiosity,” Korsky says. “Aptitude is not only what you currently know, but your propensity and desire to learn and grow those skills. Attitude goes hand in hand with that—are you willing to demonstrate grit and perseverance? And then curiosity, because so much of what we do from an innovation perspective requires a willingness to challenge assumptions and think of completely new ways of doing things.”

Makinde’s journey here at Microsoft Digital embodies and illustrates the company’s larger story: how technical expertise, innovative thinking, and a commitment to continuous learning combine to deliver world-class results.

“I’m now up to 25 certifications, and I continue to learn how to do more at Microsoft to positively impact the organization and protect our employees’ experience across applications and devices.”

Olutunde Makinde, senior service engineer, Microsoft Digital

That attitude of persistent curiosity and the willingness to keep learning continue to fuel Makinde’s experience at Microsoft. 

“Self-improvement is a way of life for me that has driven my career forward,” Makinde says. “At an early stage in my career, I did a lot of self-training—from learning how to configure Cisco routers and switches, to migrating on-premises workloads to Azure and managing cloud resources. I’m now up to 25 certifications, and I continue to learn how to do more at Microsoft to positively impact the organization and protect our employees’ experience across applications and devices.”

Key takeaways

Olutunde Makinde’s career experience here in Microsoft Digital offers some important insights that you can apply to your own organizational development:

  • AI adoption starts with practical problems. Makinde’s use of AI to streamline CAB communications and deployment coordination shows how Customer Zero teams find real-world applications for emerging technology.
  • Different experiences and perspectives contribute to business success. Achieving ambitious goals as an organization is dependent upon attracting talented people like Makinde from a range of backgrounds, disciplines, and lived experiences.
  • Strong technical skills paired with innovative thinking drive value. Makinde’s contributions to flexible cloud deployment workflows are an example of how this combination pays dividends.
  • Proactive risk management and attention to detail can prevent large-scale disruptions. By being willing to stop rollouts and flag risks before they become problems, Makinde’s approach to his work exemplifies how thoughtful decision-making safeguards productivity and security.
  • Persistence, curiosity, and continuous learning are critical career accelerators. Having a long and successful career at a company like Microsoft goes beyond just technical aptitude; it also requires perseverance and a passion for learning. Makinde’s self-driven training efforts and his refusal to give up have enabled him to achieve what once seemed impossible.

The post Olutunde Makinde: From Lagos to Redmond, a Microsoft IT engineer’s journey appeared first on Inside Track Blog.

]]>
22855
Microsoft CISO advice: The most important thing to know about securing AI http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-the-most-important-thing-to-know-about-securing-ai/ Thu, 02 Apr 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22863 Using AI comes with inherent risks. In a recent video, Yonatan Zunger, CVP and deputy CISO for Microsoft, suggests thinking about AI as a new intern will help you naturally take the right approach to AI security.  Zunger and his team focus on AI safety and security. They consider all the different ways anything involving […]

The post Microsoft CISO advice: The most important thing to know about securing AI appeared first on Inside Track Blog.

]]>
Using AI comes with inherent risks. In a recent video, Yonatan Zunger, CVP and deputy CISO for Microsoft, suggests that thinking about AI as a new intern will help you naturally take the right approach to AI security.

Zunger and his team focus on AI safety and security. They consider all the different ways anything involving working with AI can go wrong.

“An important thing to know about AI is that AIs make mistakes,” Zunger says. “You already know how to work with systems that make mistakes and get tricked.”

Watch this video to see Yonatan Zunger discuss his advice for working with AI. (For a transcript, please view the video on YouTube: https://youtu.be/b1x6gDbSWVY.)

The post Microsoft CISO advice: The most important thing to know about securing AI appeared first on Inside Track Blog.

]]>
22863
Deploying Microsoft Baseline Security Mode at Microsoft: Our virtuous learning cycle http://approjects.co.za/?big=insidetrack/blog/deploying-microsoft-baseline-security-mode-at-microsoft-our-virtuous-learning-cycle/ Thu, 26 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22829 The enterprise security frontier isn’t just evolving. It’s accelerating beyond the limits of traditional security models. AI acceleration, cloud adoption, and rapid growth of enterprise apps have dramatically expanded the attack surface. Every new app introduces a new identity. Every identity carries permissions. Over time, those permissions accumulate, often without clear ownership or regular review. […]

The post Deploying Microsoft Baseline Security Mode at Microsoft: Our virtuous learning cycle appeared first on Inside Track Blog.

]]>
The enterprise security frontier isn’t just evolving. It’s accelerating beyond the limits of traditional security models.

AI acceleration, cloud adoption, and rapid growth of enterprise apps have dramatically expanded the attack surface. Every new app introduces a new identity. Every identity carries permissions. Over time, those permissions accumulate, often without clear ownership or regular review.

A photo of Ganti.

“An app is another form of identity. In a cloud-first, Zero Trust world, identity becomes the primary security perimeter, and access is governed by the principle of least privilege. Whether it is a user, an app, or an agent, when permissions are overly broad or elevated the blast radius expands dramatically, increasing risk exponentially.”

B. Ganti, principal architect, Microsoft Digital

Inside Microsoft Digital—the company’s IT organization—we recognized this early. Many of our highest‑risk security scenarios didn’t start with malware or phishing. They started with access. Specifically, apps running with permissions beyond what they required.

“An app is another form of identity,” says B. Ganti, principal architect in Microsoft Digital. “In a cloud-first, Zero Trust world, identity becomes the primary security perimeter, and access is governed by the principle of least privilege. Whether it is a user, an app, or an agent, when permissions are overly broad or elevated the blast radius expands dramatically, increasing risk exponentially.”

Traditional security approaches such as periodic reviews, best‑practice guidance, and point‑in‑time hardening weren’t enough in an environment that changes daily. Configurations drift, new apps appear, and risk grows quietly in places that are hard to see at scale.

That reality forced a mindset shift internally here at Microsoft. Security couldn’t be optional. It couldn’t be advisory. And it couldn’t be static.

Our team operates one of the largest enterprise environments in the world, with tens of thousands of apps and a culture built on self‑service and autonomy. That scale drives innovation, but it also amplifies risk.

Our application identities became one of the most complex governance challenges we faced. Our ownership wasn’t always clear. Our permissions were often granted broadly to avoid disruption. And once approved, access rarely came under scrutiny again.

“As a self‑service organization, we empower people to move fast,” Ganti says. “But that also means apps get created, permissions get granted, and not everyone always remembers why.”

The rise of AI‑powered apps and agents—often requiring access to large volumes of data—increased our risk further.

A photo of Fielder.

“We’re using Microsoft Baseline Security Mode to move security from guidance to enforcement. It establishes secure‑by‑default configurations that scale across our environment, so teams can innovate quickly without inheriting unnecessary risk.”

Brian Fielder, vice president, Microsoft Digital

We needed a system to reduce that risk systematically, not one app at a time.

Microsoft Baseline Security Mode (BSM) became that system—a prescriptive, enforceable baseline that defines what “secure” means and keeps it that way.

“We’re using Microsoft Baseline Security Mode to move security from guidance to enforcement,” says Brian Fielder, vice president of Microsoft Digital. “It establishes secure‑by‑default configurations that scale across our environment, so teams can innovate quickly without inheriting unnecessary risk.”

Defining Microsoft Baseline Security Mode

BSM is more than just a checklist of recommended settings. It’s an enforced security baseline built directly into the Microsoft 365 admin center, designed to reduce attack surface by default across core Microsoft 365 workloads.

It was developed and then deployed internally at Microsoft, with our team in Microsoft Digital serving as a close design and deployment partner throughout the process.

A photo of Wood.

“The settings in the Microsoft Baseline Security Mode were informed by years of experience in running our planet-scale services, and by analyzing historical security incidents across Microsoft to harden the security posture of tenants. The team identified concrete security settings that would prevent or significantly reduce known security vulnerabilities.”

Adriana Wood, principal product manager, Microsoft 365 security

At a technical level, BSM establishes a minimum required security posture by applying Microsoft‑managed policies and configuration states across services including Exchange Online, SharePoint Online, OneDrive, Teams, and Entra ID. The focus is on eliminating common misconfigurations, rather than theoretical or edge‑case risks.

“The settings in the Microsoft Baseline Security Mode were informed by years of experience in running our planet-scale services, and by analyzing historical security incidents across Microsoft to harden the security posture of tenants,” says Adriana Wood, a principal product manager for Microsoft 365 security. “The team identified concrete security settings that would prevent or significantly reduce known security vulnerabilities. The resulting mitigation controls were implemented and validated in Microsoft’s enterprise tenant, with Microsoft Digital evaluating operational impact, rollout characteristics, and failure modes before making it more broadly available to our customers.”

Legacy baselines rely on documentation and manual implementation. Administrators interpret guidance, apply settings where feasible, and revisit them periodically. In dynamic cloud environments, that model breaks down fast. Configurations drift, exceptions accumulate, and security degrades.

A photo of Bunge.

“Before enforcement, administrators can use reporting and simulation tools to understand how a baseline will affect users, apps, and workflows. That visibility allows teams to identify noncompliant assets, prioritize remediation by risk, and avoid unexpected disruptions.”

Keith Bunge, principal software engineer, Microsoft Digital

BSM replaces that approach with policy‑driven enforcement.

Now our controls are applied consistently across the tenant and continuously validated. When our configurations fall out of compliance, our risk surfaces immediately—it’s not discovered months later in an audit. The model is simple: get clean, stay clean.

Another key capability of BSM is impact awareness.

“Before enforcement, administrators can use reporting and simulation tools to understand how a baseline will affect users, apps, and workflows,” says Keith Bunge, a principal software engineer in Microsoft Digital. “That visibility allows teams to identify noncompliant assets, prioritize remediation by risk, and avoid unexpected disruptions. Our team in Microsoft Digital partnered closely with the product group to ensure these capabilities were practical for real enterprise deployments, not just greenfield environments.”

BSM is also not static.

The baseline evolves on a regular cadence to reflect changes in the threat landscape, new Microsoft 365 capabilities, and lessons learned from operating at scale.

From our perspective, BSM is not just a feature. It’s a security operating model. It shifts the default from “secure if configured correctly” to “secure by default.” Security decisions move out of individual teams and into a consistent, centrally enforced baseline. The question is no longer whether a control should be applied, but whether an exception is truly necessary—and how the associated risk will be mitigated.

That shift is what makes BSM sustainable at scale. And it’s why apps—where identities, permissions, and data access converge—became the next focus area for us in Microsoft Digital.

Addressing apps and high-risk surfaces

When we evaluated risk across our environment, one pattern was clear: Our apps represented both our most concentrated and least governed attack surface.

Apps are identities. They authenticate. They’re granted permissions. And unlike human users, they often operate continuously, without reassessment or visibility.

In a large, self‑service environment like ours, apps are created constantly by engineering teams, business groups, and automation workflows. Over time, many of those apps could accumulate permissions beyond what they actually needed, particularly within Microsoft Graph. Our delegated permissions were especially risky, because they allow apps to act on our employees’ behalf at machine speed across massive data sets.

“As a user, I might not know where all my data lives,” Ganti says. “But an app with delegated permissions doesn’t have that limitation. It can search everything, everywhere, all at once.”

The challenge wasn’t just volume—it was inconsistency.

Our ownership was often unclear. Our permission reviews were infrequent or manual. And once we granted elevated access, we had few systemic controls in place requiring it to be revisited.

Microsoft Baseline Security Mode addresses this directly by treating apps explicitly as identities that must conform to least‑privilege principles.

We started with visibility. We inventoried apps and analyzed permission scopes, authentication models, and potential blast radius. Our apps with broad Microsoft Graph permissions, access to large volumes of unstructured data, or unclear ownership were prioritized. In some cases, we reduced permissions to more granular scopes. In others, we rearchitected apps to use delegated access more safely—or we retired them altogether.
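The inventory step above can be illustrated with a hedged sketch. The app records and the set of "broad" scopes are simplified examples; in a real tenant, app and permission data would come from Microsoft Graph (for instance, the servicePrincipals and oauth2PermissionGrants resources) rather than a hard-coded list:

```python
# Minimal sketch of flagging apps with broad scopes or unclear ownership.
# Scope names are real Microsoft Graph permissions; the app data is invented.

# Tenant-wide scopes treated as high blast radius (illustrative subset).
BROAD_SCOPES = {"Sites.Read.All", "Mail.Read", "Files.Read.All", "Directory.ReadWrite.All"}

def flag_high_risk_apps(apps):
    """Return apps holding broad scopes or lacking a clear owner."""
    flagged = []
    for app in apps:
        broad = sorted(set(app["scopes"]) & BROAD_SCOPES)
        if broad or not app.get("owner"):
            flagged.append({"app": app["name"], "broad_scopes": broad,
                            "no_owner": not app.get("owner")})
    return flagged

apps = [
    {"name": "report-bot", "owner": "finance-eng", "scopes": ["Sites.Read.All"]},
    {"name": "legacy-sync", "owner": None, "scopes": ["User.Read"]},
    {"name": "helpdesk-ui", "owner": "it-tools", "scopes": ["User.Read"]},
]
flagged = flag_high_risk_apps(apps)
for f in flagged:
    print(f)
```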

This work was intentionally structured as a burndown, not a one‑time cleanup.

Removing our excess permissions was only half the equation. Preventing them from coming back was just as critical. BSM introduced guardrails earlier in the app lifecycle, to surface and control elevated permission requests before they reached production. New or updated apps requesting high‑risk permissions now trigger consistent review, and in many cases are blocked outright unless they meet strict criteria.

Moving from ‘get clean’ to ‘stay clean’

Reducing risk once is hard. Keeping it reduced is harder.

After our initial application burndown, we quickly learned that cleanup alone wouldn’t scale. Even as we reduced permissions and remediated high‑risk apps, new apps continued to appear. Existing apps evolved, teams changed, and without structural controls, the same risks would inevitably return.

BSM enabled us to shift from remediation to sustainability.

It started with visibility.

We needed a reliable way to detect when apps drifted out of compliance. That meant continuously monitoring permission changes, new consent grants, and scope expansions across our tenant. Instead of periodic reviews, we moved to continuous validation tied directly to the baseline.

Next came risk‑based prioritization.

Not every instance of noncompliance carries equal impact. Our apps with broad Microsoft Graph permissions, access to large volumes of data, or unclear ownership were surfaced first. This ensured our security teams focused on material risk, rather than treating every deviation as equal.
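That prioritization could be sketched as a simple scoring pass over the backlog. The weights and fields here are invented for illustration; a real model would be tuned to the tenant:

```python
# Hypothetical risk scoring for ordering a remediation backlog.

def risk_score(app):
    score = 0
    score += 3 * len(app.get("broad_scopes", []))   # tenant-wide data access
    score += 2 if app.get("owner") is None else 0   # unclear ownership
    score += 1 if app.get("data_volume") == "high" else 0
    return score

backlog = [
    {"name": "legacy-sync", "owner": None, "broad_scopes": ["Mail.Read"], "data_volume": "high"},
    {"name": "helpdesk-ui", "owner": "it-tools", "broad_scopes": [], "data_volume": "low"},
]
# Work the queue from highest material risk down.
backlog.sort(key=risk_score, reverse=True)
print([a["name"] for a in backlog])  # -> ['legacy-sync', 'helpdesk-ui']
```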

It was equally important for us to control how new risk entered the system.

BSM introduces guardrails earlier in the application lifecycle. Our elevated permission requests are surfaced sooner and reviewed more consistently. In many cases, high‑risk permissions are blocked by default unless clear justification and mitigation are in place. Known‑bad patterns are stopped before our teams build or update apps.
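A guardrail of this kind amounts to a policy gate on permission requests. The sketch below is hypothetical and deliberately simplified; the real review criteria are far richer than a scope list:

```python
# Illustrative lifecycle guardrail: high-risk permission requests are not
# auto-approved, and are blocked outright without justification and mitigation.
# The scope set and request shape are invented for this example.

HIGH_RISK = {"Directory.ReadWrite.All", "Mail.Read", "Files.Read.All"}

def review_request(request):
    """Return (auto_approved, reason) for a new permission request."""
    risky = set(request["scopes"]) & HIGH_RISK
    if not risky:
        return True, "baseline scopes only"
    if request.get("justification") and request.get("mitigation"):
        return False, f"needs human review: {sorted(risky)}"
    return False, f"blocked by default: {sorted(risky)}"

ok, reason = review_request({"app": "new-bot", "scopes": ["User.Read"]})
print(ok, reason)  # True baseline scopes only
```

The key design choice is that the safe path is the default path: low-risk requests flow through, and anything elevated requires explicit action.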

Over time, this enforcement model fundamentally changed the operating posture.

Instead of recurring cleanup campaigns, we moved to continuous alignment. Our environment stays closer to the baseline by default. Our deviations are treated as exceptions that require explicit action, not silent drift.

This “stay clean” capability also reduced operational overhead.

As enforcement and validation moved into Microsoft Baseline Security Mode, we retired custom scripts, dashboards, and manual review processes that were difficult to maintain at scale. Our baseline became the source of truth for application security posture, not a snapshot taken after the fact.

Most importantly, we proved that BSM could scale.

“This isn’t limited to Microsoft 365. This is Microsoft, and it expands over time as more services come into scope.”

Jeff McDowell, principal program manager, OneDrive and SharePoint product group

By combining continuous validation, risk‑based prioritization, and enforced guardrails, we established a repeatable model for sustaining security improvements over time.

That model now serves as our foundation for extending BSM to additional workloads and security surfaces across the enterprise.

“This isn’t limited to Microsoft 365,” says Jeff McDowell, a principal program manager in the OneDrive and SharePoint product group. “This is Microsoft, and it expands over time as more services come into scope.”

Operationalizing Microsoft Baseline Security Mode

Defining a baseline is only the first step. Making it work day‑to‑day is the real challenge.

For us in Microsoft Digital, operationalizing BSM meant embedding it directly into how we run security. That required clear ownership, repeatable processes, and tight integration with our existing workflows.

Governance came first.

BSM creates a clear line between what is centrally enforced and what individual teams can influence. The baseline is owned and managed centrally to ensure consistency across the tenant. Our application owners and engineering teams still make design decisions, but within defined guardrails aligned to enterprise risk tolerance.

This clarity reduces friction.

Instead of debating security settings app by app, our teams start from a shared default. Our security conversations shift away from “Can we make an exception?” to “How do we meet the baseline with the least disruption?”

Operationally, BSM is integrated into our application lifecycle.

New apps are evaluated against baseline requirements early, before permissions are broadly granted or dependencies are established. Changes to existing apps, such as new permission requests or expanded scopes, are surfaced automatically and reviewed in context, rather than discovered months later during audits.

In an environment where apps are constantly being created, updated, and retired, automation is essential. Without policy‑driven enforcement, our security teams would be managing a perpetual backlog of reviews. BSM allows us to focus on true exceptions instead of revalidating the baseline itself.

That baseline is also embedded into our ongoing operations.

Our security posture is monitored continuously, not through periodic snapshots. When our configurations drift or new risks appear, we identify them early and address them while the blast radius is still small. Over time, this reduces both our operational effort and incident response overhead.

Perhaps our most important change was cultural.

BSM normalizes the idea that security defaults are foundational. Our teams still innovate and move quickly—but they do so in an environment where secure is expected, enforced, and sustained.

Embracing the feedback loop as Customer Zero

From the start, our team in Microsoft Digital deployed Microsoft Baseline Security Mode as Customer Zero: We applied early versions in our live, large‑scale enterprise environment, where we fed our real‑world learnings back to the product group. That feedback loop became central to how the platform evolved.

Running BSM at Microsoft scale quickly exposed challenges that don’t appear in smaller tenants. Visibility was one of the first. With thousands of apps and constantly changing permissions, it was difficult to pinpoint which apps violated least‑privilege principles and where security teams should focus first.

Those gaps directly shaped the product. Reporting and analytics were refined to better surface elevated permissions, risky scopes, and noncompliant apps, helping teams move from investigation to action more quickly.
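The kind of least-privilege reporting described above can be sketched as a simple scope check. This is an illustrative assumption about the shape of such a report, not the actual BSM analytics; the risky-scope list and app records are hypothetical.

```python
# Hypothetical sketch of least-privilege reporting: flag apps that hold
# elevated scopes outside an approved baseline. The scope list and app
# records are illustrative, not the actual BSM policy set.

RISKY_SCOPES = {"Directory.ReadWrite.All", "Mail.ReadWrite", "Files.ReadWrite.All"}

def flag_noncompliant(apps):
    """Yield (app name, offending scopes) for apps holding risky scopes."""
    for app in apps:
        offending = set(app["scopes"]) & RISKY_SCOPES
        if offending:
            yield app["name"], sorted(offending)

apps = [
    {"name": "InventoryBot", "scopes": ["User.Read"]},
    {"name": "LegacySync", "scopes": ["User.Read", "Directory.ReadWrite.All"]},
]
for name, scopes in flag_noncompliant(apps):
    print(name, scopes)  # only LegacySync is flagged
```

A report like this turns "which of our thousands of apps should security look at first?" into a ranked worklist, which is the shift from investigation to action the text describes.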

Scalability was another critical lesson.

Controls that worked for dozens of apps didn’t automatically work for thousands. Our team needed policies that were opinionated, enforceable, and operationally sustainable without constant adjustment. That pushed BSM toward clearer defaults and stronger enforcement boundaries.

“What made the collaboration work is that Microsoft Digital was deploying this in a real tenant with real consequences,” Wood says. “Their feedback helped us understand what enterprises actually need to adopt these controls successfully, not just what looks good on paper.”

Over time, this became a virtuous cycle. Our team surfaced friction and risk through deployment. The product group translated those insights into product improvements. We then adopted those same improvements to replace custom tooling and manual processes.

For customers, this matters. The controls in BSM are shaped by operational reality, tested under scale and refined so other organizations don’t have to learn the same lessons the hard way.

What’s next for Microsoft Baseline Security Mode

Future iterations of BSM will expand coverage beyond traditional collaboration services to additional platforms and services, while maintaining the same opinionated approach. The goal is not to restrict environments indiscriminately, but to ensure new capabilities are introduced with security baked in from the start.

As compliance requirements grow more complex and more global, organizations need a consistent, defensible security baseline. BSM provides a Microsoft‑managed standard informed by real‑world attack patterns and enterprise deployment realities.

Controls evolve. Scope expands. Feedback loops remain active. As new risks emerge, the baseline adapts, without requiring organizations to redefine their security posture from scratch.

It’s a foundation designed to support whatever comes next.

Key takeaways

If you’re ready to strengthen your organization’s security posture with Microsoft Baseline Security Mode, consider these immediate actions:

  • Establish clear ownership. Assign responsibility for baseline security management to ensure consistency and accountability.
  • Implement repeatable processes. Develop standardized procedures to evaluate and enforce baseline requirements throughout the app lifecycle.
  • Integrate with existing workflows. Embed security controls into daily operations to reduce friction and streamline compliance.
  • Prioritize automation and monitoring. Use automated enforcement and continuous validation for early risk detection and response.
  • Foster a security-first culture. Normalize secure defaults and encourage teams to innovate within defined guardrails.
  • Design for evolution. Build your baseline to adapt as new services, platforms, and compliance needs arise.

The post Deploying Microsoft Baseline Security Mode at Microsoft: Our virtuous learning cycle appeared first on Inside Track Blog.

]]>
22829
Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft http://approjects.co.za/?big=insidetrack/blog/responsible-ai-why-it-matters-and-how-were-infusing-it-into-our-internal-ai-projects-at-microsoft/ Thu, 26 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=19289 Like the computer itself and electricity before it, AI is a transformational technology. It’s providing never-before-seen opportunities to reimagine productivity, address major social challenges, and democratize access to technology and knowledge. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic […]

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

]]>
Like the computer itself and electricity before it, AI is a transformational technology. It’s providing never-before-seen opportunities to reimagine productivity, address major social challenges, and democratize access to technology and knowledge.

As AI reshapes how we work and live, it brings with it both transformative potential and complex challenges. Across the industry, concerns about bias, safety, and transparency are growing.

At Microsoft, we believe that realizing AI’s benefits requires a shared commitment to responsibility—one we take seriously. As a result, we aren’t just creating AI solutions. We’re taking the lead on infusing responsible AI principles into our technology and organizational practices.

Prioritizing responsible AI across Microsoft

The most impressive AI-powered capabilities in the world mean nothing if people don’t trust the technology. Microsoft and many of our customers across all industries are working to strike the right balance between innovation and responsibility.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust. Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

Mike Jackson, head of AI Governance, Enablement, and Legal, Microsoft Office of Responsible AI

IT leaders and CXOs aren’t just deploying AI tools. They’re also thinking of the right guardrails to implement around those tools as their organizations mature. Meanwhile, developers and deployers want to be sure they’re building and implementing AI solutions within the bounds of responsibility.

As an organization that’s mapping the frontier of AI while creating business-ready tools for our customers, Microsoft is shaping the global conversation on responsible AI. We don’t only accomplish that through policy and governance, but also by embedding responsibility into the ways we build, deploy, and scale AI.

Laying the foundation for this work is the duty of our Office of Responsible AI (ORA). This team brings policy and governance expertise to the responsible AI ecosystem at Microsoft.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust,” says Mike Jackson, head of AI Governance, Enablement, and Legal for the Office of Responsible AI. “Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

ORA advances AI development, deployment, and secure and trustworthy innovation through governance, legal expertise, internal practice, public policy, and guidance on sensitive uses and emerging technology. The team focuses on empowering innovation while ensuring it falls within Microsoft’s governance, compliance, and policy guardrails.

ORA also partners closely with product and engineering teams as well as other trust domains like privacy, digital safety, security, and accessibility. The team created our Microsoft Responsible AI Standard, the cornerstone of our governance framework, and ensures internal AI initiatives align with it.

The Responsible AI Standard translates our six principles into actionable requirements for every AI project across Microsoft:

Fairness

AI systems should treat all people equitably. They should allocate opportunities, resources, and information in ways that are fair to the humans who use them.

Privacy and security

AI systems should be secure and respect privacy by design.

Reliability and safety

AI systems should perform reliably and safely, functioning well for people across different use conditions and contexts, including ones they weren’t originally intended for.

Inclusiveness

AI systems should empower and engage everyone, regardless of their background, striving to be inclusive of people of all abilities.

Transparency

AI systems should ensure people correctly understand their capabilities.

Accountability

People should be accountable for AI systems with oversight in place so humans can maintain accountability and remain in control.

ORA reports to the Microsoft Board of Directors and collaborates with stakeholders and teams across the company to operationalize these principles, implementing policies and practices that apply to AI applications. They determined that every AI initiative should undergo an impact assessment to ensure it aligns with the standard.

If ORA is our compass for responsible AI, our companywide Responsible AI Council has its hands on the steering wheel.

The council, led by Chief Technology Officer Kevin Scott and Vice Chair and President Brad Smith, was formed at the senior leadership level as a forum and source of representation across research, policy, and engineering. It provides leadership, strategic guidance, and executive support and sponsorship to advance strategic objectives around innovation and responsible AI.

“ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI team

Under the council’s guidance, responsible AI CVPs, division leaders, and a network of responsible AI champions across the company operationalize the implementation of our Responsible AI Standard and compliance with our policies.

The structure of these teams is straightforward.

Every division has a designated CVP and division lead to steer the work and connect their team to the overarching Responsible AI Council. Within those divisions, each organization has a lead responsible AI champion or a set of co-leads to steer their team of champions. Those champions act as subject matter experts, reviewers for the impact assessment process, and points of contact for the teams developing AI initiatives.

Implementing AI governance within Microsoft IT

As members of the company’s IT organization, Microsoft Digital’s responsible AI division lead and champion team have a special role to play. They helped develop a critical internal workflow tool, which has now become a mandatory part of our responsible AI assessment process.

“The key is to ensure full alignment of responsible AI practices with ORA,” says Naval Tripathi, principal engineering manager and co-lead for Microsoft Digital’s Responsible AI Team. “ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

This tool logs every project, guides AI developers through initial impact assessments all the way to final reviews, and facilitates those workflows for champions.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process. This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users.”

Thomas Po, senior product manager, Microsoft Digital

By streamlining the process through a unified portal, the tool increases efficiency and minimizes errors that can arise from manual processes. It also encourages teams to make responsible AI part of the software development lifecycle (SDL) itself, not a hurdle or an afterthought.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process,” says Thomas Po, a senior product manager working on Campus Services agents. “This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users. That makes it more manageable in the long term, and having it all in one tool gives us more transparency.”

Our unified internal workflow looks like this:

  • Project initiation and system registration: During the design phase for an AI initiative, the engineering team accesses the portal and registers a new AI system. From there, they fill out fields with crucial information, including a title, description, the developer team’s division, whether the project will include internal or external resources, the relevant champion who should review their initiative, and other details. Within this initial form, different scenarios will trigger different review parameters and requirements, for example, when a team intends to publish a tool externally or engage with sensitive use cases.
  • Release assessment: After the system registration is complete, the team initiates the release assessment, a much more thorough review designed to ensure the AI-powered solution is ready to go live. At this point, the engineering team needs to provide detailed documentation. That includes the volume and kinds of data the system will use, potential harms and mitigations, and more. A release assessment includes experts in our Office of Responsible AI, Security, Privacy, and other teams, who review sensitive use cases or initiatives that include generative AI.

If the project clears all the requirements and reviews, it’s ready to go live. Crucially, we don’t think of these stages as a set of hurdles teams need to clear to complete their projects. Instead, the process guides engineering teams through the design elements they need to consider and provides opportunities for feedback from subject matter experts.

“The tool captures all the requirements from ORA and incorporates them into a developer-friendly workflow,” says Padmanabha Reddy Madhu, principal software engineer and responsible AI champion for Employee Productivity Engineering in Microsoft Digital. “It’s also a great way to pull AI champions into the design phase so we can support our colleagues’ work.”

With more than 80 AI projects currently underway across Microsoft Digital, logging and streamlining are essential. Teams are working on all kinds of ways to boost enterprise processes and employee experiences, like the following examples from Campus Services that users can access through our Employee Self-Service Agent:

  • A facilities agent helps employees take action when they discover an issue at one of our buildings, like a burnt-out light, a spill, or physical damage. The agent creates a ticket to alert a Facilities team so they can resolve it and allows the submitter to follow up on progress.
  • A campus event agent makes onsite gatherings like talks and Microsoft Garage build-a-thons more discoverable through simple queries. Using this agent, employees can more easily discover and plan around events that interest them, adding value to the in-person experience and incentivizing community.
  • A dining agent addresses the challenges of multiple on-campus restaurants featuring menu options that shift daily. Employees can use natural language queries like “Where can I get teriyaki today?” The agent does the rest. This kind of agent can be especially helpful for employees with allergies or dietary restrictions, providing a boost to accessibility for the on-campus dining experience.

“AI is rapidly becoming a standard part of how we build and operate. As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale.”

Qingsu Wu, principal group product manager, Microsoft Digital

Our policies and practices have embedded a culture of responsibility and trust into our internal AI development processes. With that trust comes the confidence to experiment.

“AI is rapidly becoming a standard part of how we build and operate,” says Qingsu Wu, principal group product manager in Microsoft Digital. “As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale. By embedding Responsible AI into our engineering practices, teams have the clarity and confidence they need to manage risk proactively and deliver value without compromising safety or trust.”

Far from thinking of responsible AI assessments as an administrative or policy burden that creates additional work, teams now recognize their benefits. They look at the process as an extra set of eyes from a trusted partner. By minimizing legal and compliance risks through our Responsible AI Council’s expertise, our teams save time and stress, and we avoid problems like delayed releases or rollbacks.

“What we’re doing is entirely novel in the tech world. Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

Jamian Smith, principal product manager and co-lead, Microsoft Digital Responsible AI team

Lessons learned: Embedding responsible AI into our development efforts

Throughout this process, we’ve learned lessons that will be helpful for other organizations just beginning their AI journeys:

  • We empowered early adopters and enthusiasts as responsible AI champions. They act as anchors and resources for developers who use AI, so we made sure they had the knowledge and training they needed to unlock downstream value.
  • Culture has been crucial to our success, especially our growth mindset and our focus on trust. Emphasizing these aspects of our company culture helped us embed responsible AI into core SDL processes and naturalize it on our engineering teams.
  • Processes are one thing, and tooling is another. If your responsible AI assessment workflow isn’t attuned to your needs, simply building a review portal tool won’t get you the rest of the way. First, we thought about the process we needed to put in place to solidify responsible AI practices and support our teams’ work. Then we built a tool that supports those workflows as easily and seamlessly as possible.
  • Accuracy depends on data, and data tends to reflect the biases of the humans who organize it. It's necessary to actively correct bias through introspection and testing.

“What we’re doing is entirely novel in the tech world,” says Jamian Smith, principal product manager and co-lead for Microsoft Digital’s Responsible AI team. “Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

As your organization begins to experiment with its own AI projects, take these concrete steps to infuse responsibility into the solutions you create:

  1. Establish a strong foundation based on core principles and standards that align with your organizational culture. The Microsoft Responsible AI Standard is a great place to start because it reflects our experience and the expertise we’ve built as AI technology leaders and providers.
  2. Seek out the activators across your organization: people with a passion for AI, security, transparency, and other challenge areas, along with a willingness to learn and the ability to lead. Think about how to place them in both centralized and distributed positions.
  3. With the rapidly evolving regulatory climate around AI, it's crucial to build a broad understanding of compliance and track new developments. Involve dedicated regulatory, compliance, and legal professionals in researching and monitoring global standards, and communicate that information across your organization, particularly through training and updates that help teams incorporate new regulations into their core processes.
  4. Create a process for responsible AI assessment. Consider ways to break it into stages that propel projects forward rather than hindering them. Enlist the right people to assess projects, and consider tooling that streamlines actions for both creators and assessors. Our AI Impact Assessment Guide can help you get started.
  5. Benefit from pioneers in the space, including our experts at Microsoft. Our journey has produced ready-to-use resources that can accelerate your progress. Examples include our Responsible AI Toolbox for GitHub, hands-on tools for building effective human-AI experiences, and our AI Impact Assessment Template.

“It’s not about how fast you can move, but how prepared you are. Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI team

Building your capacity to create AI tools responsibly won’t happen without careful planning and strategy. As part of that process, embed responsible AI into your development workflows by emulating the practices we’ve pioneered at Microsoft.

“It’s not about how fast you can move, but how prepared you are,” Tripathi says. “Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

By prioritizing responsible AI, businesses of all kinds, all over the world, can ensure that the AI revolution is a truly human movement.

Key takeaways

These insights can help you as you begin your own journey through responsible AI:

  • Realize that this isn’t just a technical transition. It’s also a gradual evolution and an ongoing journey.
  • Work with people across your organization to establish goals and standards, because different disciplines bring different expertise and insights to the table. This will also align your responsible AI standards with your organizational values.
  • Start with the basics and build from there. Establish principles, create processes, and construct tooling around those structures.
  • A wide array of tooling is readily available in the world of AI. Seek out providers that model responsible values.
  • Lean on your existing experts across privacy, security, accountability, and compliance. Their skills will be crucial in this new technological landscape.
  • Conducting your own responsible AI groundwork is crucial, but you can also partner with Microsoft. We run on trust, and we’ve thought about these issues to pave the way for your success. Follow our lead, consider the best ways to adapt our lessons to your organization, and come to us with questions.

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

]]>
19289
Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI http://approjects.co.za/?big=insidetrack/blog/accelerating-transformation-how-were-reshaping-microsoft-with-continuous-improvement-and-ai/ Thu, 26 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20297 Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers. Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, […]

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers.

Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, is seizing this moment by reinventing processes for agentic workflows powered by continuous improvement (CI).

We believe that AI-powered agents, Microsoft 365 Copilot, and human ambition are the key ingredients for unlocking opportunity across every industry.

“Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

David Laves, director of business programs, Microsoft Digital

By combining our AI capabilities with continuous improvement, we’re executing initiatives that increase our productivity and improve our performance. We’re forging a new path for how companies operate in the era of AI.

Welcome to the age of AI-empowered continuous improvement.

Our vision for continuous improvement, turbo-charged by AI

At Microsoft Digital, we’re embracing continuous improvement to unlock greater operational excellence and better employee experiences.

“One of the main tenets of our culture at Microsoft is a growth mindset, and that involves experimentation and curiosity,” says David Laves, director of business programs within Microsoft Digital. “Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

Our capacity to drive process improvements has been crucial to our AI transformation as a company. We’ve adopted a “CI before AI” approach to ensure that we don’t end up automating inefficient processes. By engaging in activities that focus on continuous improvement, our teams can better identify which problems to address with AI and prioritize meeting customer needs.

“Continuous improvement is really about understanding your business, its needs, and where you can find value,” says Matt Hansen, a director of continuous improvement at Microsoft. “It gives us the language to scale our efforts out across everything we do.”

This process isn’t just another way to enable AI. In fact, AI is essential to enabling continuous improvement itself.

“When leaders stay actively engaged and partner through these Centers of Excellence, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

Don Campbell, senior director, Microsoft Digital

Operationalizing continuous improvement and AI

Operationalizing continuous improvement and AI enablement is a leadership imperative at Microsoft, and one that doesn’t just happen organically. As an organization, we are deliberate about turning business strategy into measurable outcomes through clear sponsorship, disciplined prioritization, the right resourcing, and sustained investment in change management and employee skilling.

“The difference between strategy and real business impact is execution,” says Don Campbell, a senior director in Microsoft Digital. “That execution requires strong leadership sponsorship and clearly designed continuous improvement efforts and AI Centers of Excellence (CoEs), which translate business strategy into operational reality. When leaders stay actively engaged and partner through these CoEs, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

To support leadership’s vision, we’ve put organizational resources in place to manage our continuous improvement investments, guide practices, and support teams. There’s an overarching continuous improvement CoE within Microsoft Digital, which works in close partnership with the AI CoEs, forming an integrated model which connects enterprise priorities with frontline execution.

Together, these CoEs establish shared standards, provide clarity on where to invest, and help us move faster with confidence, turning ambition into sustained business impact.

“Continuous improvement is about process, but it’s also about people.”

Becky West, lead, Continuous Improvement Center of Excellence, Microsoft Digital

Continuous improvement and people

As we build out the organizational structures that underpin our investment in continuous improvement, we’re approaching the people side of change with intention.

Currently, we’re undertaking skilling efforts and communicating with every employee about how their role fits into core continuous improvement tools, including bowler cards, Gemba walks, Kaizen events, and monthly business reviews. We’re also demonstrating how “CI + AI” is a powerful combination.

The roadmap is there, the structure is in place, and we’re already seeing progress.

“Continuous improvement is about process, but it’s also about people,” says Becky West, lead for the Continuous Improvement CoE within Microsoft Digital. “A guiding hand like the Continuous Improvement CoE is how you make sure those two components align.”

Three Microsoft Digital continuous improvement initiatives

As we navigate the early days of the company's continuous improvement journey, Microsoft Digital is becoming a proving ground for the larger CI framework we want to deploy across the company. Our teams are spearheading projects to bring this framework to diverse functions like asset management, incident response (led by designated responsible individuals), and third-party software licensing.

Enterprise IT asset management

Microsoft Digital’s Enterprise IT Asset Management team oversees the 1.6 million devices that power the company, from servers and IoT devices to labs, networks, and 800,000 employee endpoints. Safeguarding this vast landscape is critical to enterprise cybersecurity.

Three security pillars form the foundation of our security efforts: protect, detect, and respond. All of these depend on a complete, accurate device inventory.

Unified visibility enables proactive protection through enforced security controls, improves detection by spotting anomalies and misconfigurations, and accelerates responses by reducing investigation and remediation time. Without this foundation, security teams lack the precision to execute effectively.

To reach the goal of a unified inventory, the team initiated a continuous improvement initiative to build a consolidated source of truth for Microsoft Digital IT assets. Grounded in the principle of “progress over perfection,” the team initially narrowed its focus to Microsoft Lab Services (MLS) and IoT devices, with a vision to eventually expand to networks, employee devices, conference rooms, and printers. The ultimate goal is to move toward a truly comprehensive inventory.

This foundation will not only enhance security but also deliver enterprise-wide value through consistent policy enforcement, more resilient infrastructure, and comprehensive lifecycle management. By applying continuous improvement processes to help prioritize high-impact opportunities and using AI to accelerate outcomes, the program is enhancing Microsoft’s operational excellence and security posture.

“It’s better to do step A than wait until you’re ready to do steps A, B, C, and D,” says Aniruddha Das, a principal PM in Microsoft Digital.

As the team progressed from Gemba walks to Kaizen events under the guidance of the Continuous Improvement CoE, they dug deeper into areas of waste. Then they identified potential actions, breaking them down into “value-add,” “non-value-add-but-essential,” and “non-value-add.”

“For every action item, we're always asking ourselves how we can make these things better through AI. We're looking for ways to expedite our core outcomes with minimal human involvement.”

Ashwin Kaul, senior product manager, Microsoft Digital

This exercise helped them prioritize their activities and land on a starting point: A device security index that would provide an overview of our hardware environment’s security posture. Essentially, it would represent a list of device security statuses.

The team identified distinct improvement areas for IoT and MLS devices. For IoT devices, they needed to build the inventory from the ground up. MLS already had a fairly complete device inventory, so the team set a goal to improve data quality. Although each challenge is different, both are excellent opportunities for AI-empowered continuous improvement.

Now that the project is underway, the team plans to use an AI agent to automate device registration for IoT devices, which currently relies on manually uploaded spreadsheets. It’s a prime example of how streamlining a process with continuous improvement enables AI to automate and accelerate our work.
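
To make the idea concrete, here’s a minimal, hypothetical sketch of the kind of validation step such a registration agent might perform on a spreadsheet export before anything is written to the inventory. The column names are assumptions for illustration, not the team’s actual schema.

```python
import csv
import io

# Hypothetical required columns -- the team's real schema isn't public.
REQUIRED_FIELDS = ("device_id", "owner", "location")

def parse_device_rows(csv_text):
    """Split spreadsheet rows into registrable devices and rejects."""
    valid, rejected = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        missing = [f for f in REQUIRED_FIELDS if not (row.get(f) or "").strip()]
        if missing:
            rejected.append({"row": row, "missing": missing})
        else:
            valid.append({f: row[f].strip() for f in REQUIRED_FIELDS})
    return valid, rejected

sample = """device_id,owner,location
iot-0042,alex@contoso.com,Building 92
,alex@contoso.com,Building 92
"""
valid, rejected = parse_device_rows(sample)
```

Rejected rows can then be routed back to device owners for correction rather than silently degrading the inventory.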

On the MLS side, the team is creating an AI-driven normalization tool to automate the de-duplication and correction of inaccuracies in device data. The goal is to get from less than 50% data quality to 100%, dramatically improving our security posture through greater accuracy.
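
The core of a normalization tool like this is canonicalizing key fields so near-duplicate records compare equal, then keeping one record per canonical key. The sketch below is a simplified, hypothetical illustration of that idea, not the team’s actual tool; the field names and normalization rules are assumptions.

```python
def normalize_record(rec):
    """Canonicalize fields so near-duplicate entries compare equal."""
    return {
        "serial": rec["serial"].strip().upper().replace("-", ""),
        "model": " ".join(rec["model"].split()).title(),
    }

def deduplicate(records):
    """Keep one record per canonical serial number."""
    seen = {}
    for rec in records:
        norm = normalize_record(rec)
        seen.setdefault(norm["serial"], norm)  # first occurrence wins
    return list(seen.values())

raw = [
    {"serial": " ab-123 ", "model": "surface  hub"},
    {"serial": "AB123", "model": "Surface Hub"},
]
clean = deduplicate(raw)
```

In a real pipeline, an AI model would handle the messier cases (typos, conflicting fields) that simple rules like these can’t resolve.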

“For every action item, we’re always asking ourselves how we can make these things better through AI,” says Ashwin Kaul, a senior product manager within Microsoft Digital. “We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Continuously improving the designated responsible individual experience

On the Digital Workspace team, designated responsible individuals (DRIs) are in charge of maintaining the health of our production systems. When technical emergencies arise, they’re the rapid-response point people who take the lead.

A photo of Ajeya Kumar

“We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Ajeya Kumar, principal software engineer, Microsoft Digital

That process can be incredibly stressful, and when every moment counts, efficiency is key. Meanwhile, a big part of a DRI’s work is simply finding out what’s gone wrong so they can fix the incident.

But their job isn’t just about crisis management. When there are no active incidents, they work on engineering enhancements to improve the efficiency of production systems and clear backlog projects.

There’s also a handover process that takes place when one DRI finishes their rotation and another goes on-call. That involves a report about any incidents that have occurred, active issues, actions taken, key metrics, and other important information.

With these two priorities in mind, our Digital Workspace team initiated a continuous improvement process review. Their Gemba walk provided a crucial starting point.

“The planning stage is all about figuring out what the process is, what it should be, and what we can do to improve it,” says Ajeya Kumar, a principal software engineer on the Digital Workspace team within Microsoft Digital. “We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Collectively, the team decided to tackle these challenges with a multifunctional AI agent they call the Smart DRI Agent. This agent’s primary role would be synthesizing and presenting information to its human counterparts to help them save time in context-heavy situations.

The AI elements that the team has planned can be broken out into the following capabilities:

  • Text summarization: Going through logs and identifying key insights.
  • Data correlation: Tracking and collating error logs.
  • Automation: Updating the status of issues, keeping abreast of communications, and providing point-in-time, daily, and weekly summaries of system health.
  • Identifying patterns: Building troubleshooting guides based on frequency patterns.
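
The pattern-identification capability can be sketched in a few lines: collapse the variable parts of error lines so recurring failures cluster, then count them to find troubleshooting-guide candidates. This is a hypothetical simplification, not the Smart DRI Agent’s actual implementation.

```python
import re
from collections import Counter

def error_signature(line):
    """Collapse variable parts (ids, numbers) so similar errors group together."""
    return re.sub(r"\d+", "<n>", line.strip())

def top_patterns(log_lines, k=3):
    """Return the k most frequent error signatures -- guide candidates."""
    counts = Counter(error_signature(l) for l in log_lines if "ERROR" in l)
    return counts.most_common(k)

logs = [
    "ERROR timeout on node 12",
    "ERROR timeout on node 7",
    "INFO heartbeat ok",
    "ERROR disk full on node 3",
]
patterns = top_patterns(logs)
```

The most frequent signatures are exactly the incidents worth documenting first, since they recur most often across rotations.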

The Smart DRI Agent is already in its pilot phase and producing results. It conducts four main activities:

  • AI-generated summaries of DRI actions.
  • Proactive notifications with AI-generated insights.
  • Chat support to assist with all kinds of DRI queries.
  • AI-generated handover reports.

“The continuous improvement framework that enables these pieces is the key to unlocking value,” says Aizaz Mohammad, principal software engineering manager on the Digital Workspace team. “It may seem process-heavy, but once you work through it, you’ll see the value.”

That value is apparent in their results.

In the first 30 days of the Smart DRI Agent’s pilot, there were 301 incidents, and the agent provided insights on 101 of them. That led to approximately 100 hours of time savings for DRIs and a 40% improvement in our key network performance metric.

Third-party software license audits

Within Microsoft Digital, the Tenant Integration and Management team is responsible for a range of services, including third-party software licensing. This space is all about managing liability from both a security operations and an auditing perspective.

A photo of Hovhannisyan.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need. The goal for this project is to reduce that time to increase operational efficiencies.”

Anahit Hovhannisyan, principal group product manager, Microsoft Digital

Without the proper security insights, the company could find itself with risks associated with third-party software vulnerabilities. And without thorough auditing, we might experience license overuse and contractual issues that can lead to waste or expensive license reconciliations.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need,” says Anahit Hovhannisyan, a principal group product manager within Microsoft Digital. “The goal for this project is to reduce that time to increase operational efficiencies.”

A photo of Kathren Korsky

“It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Kathren Korsky, team lead, Software Licensing, Microsoft Digital

The team decided to target the auditing process first. Currently, the software licensing team performs audits manually by looking at entitlements, contracts, purchase orders, and more while liaising with suppliers and our Compliance and Legal teams. That’s incredibly time-consuming.

During the software licensing team’s planning phase, they developed an ambitious goal of reducing the time to insights on third-party software license data from 154 days down to 15 minutes. During their continuous improvement Kaizen event, the team uncovered opportunities for AI-powered process improvements that eliminate waste.

“It required a lot of courage as we were identifying waste,” says Kathren Korsky, Software Licensing team lead within Microsoft Digital. “People are very invested. It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Now, they’re building and implementing solutions, including an AI and data platform that provides business intelligence with custom reporting abilities, an AI agent that provides audit support and ticket creation, and another that automatically generates audit reports. The team has been using Azure AI Foundry and Azure AI services to create their agents because these tools offer the flexibility to switch between different models and fine-tune their parameters.

As these agents emerge, they’ll take the most tedious and error-prone aspects of the process out of human auditors’ hands, freeing them up to focus on solving problems, not endlessly searching for them.

Realizing continuous improvement at scale

These are just a small selection of the many continuous improvement initiatives underway within Microsoft Digital and the company as a whole.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals.”

Kirkland Barrett, senior principal PM manager, Microsoft Digital

At Microsoft, most of our continuous improvement initiatives are in their initial stages. As they progress through the measurement and adjustment phases, two benefits will emerge.

First, we’ll iterate and improve the value that each individual initiative provides. Second, we’ll continue to build our discipline and cultural maturity around a growth mindset we’re operationalizing through continuous improvement.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals,” says Kirkland Barrett, senior principal PM manager for Employee Experience in Microsoft Digital. “It’s about knowing our objectives, identifying upstream root causes, and rippling them throughout a mechanism of progress.”

Key takeaways

These tips for implementing a continuous improvement framework come from our own experiences at Microsoft Digital:

  • Be inclusive: Have the right subject matter experts at the table from the start. Sponsors need to be present as well.
  • Cultivate maturity and transparency: Objective analysis about how things are going requires honesty.
  • Sponsorship matters: Make sure you have sponsorship at the highest levels. This is a cultural change, and leadership is the core of culture.
  • No half-measures: If you’re going to identify opportunities for continuous improvement, commit to having budget and resources in place.
  • Process, then technology: Focus on what you need to simplify processes first, then apply AI. This will keep you from automating waste and inefficiency into your operations.

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
20297
Mapping the Microsoft approach to accessibility in the world of AI http://approjects.co.za/?big=insidetrack/blog/mapping-the-microsoft-approach-to-accessibility-in-the-world-of-ai/ Thu, 19 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22756 More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age. As AI transforms how we build and experience technology, accessibility has to be built in from the start. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are […]

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age.

As AI transforms how we build and experience technology, accessibility has to be built in from the start.

Designing with and for people with disabilities isn’t optional—it’s fundamental to building technology that works for everyone and to building trust at scale. And yet today, about 96% of websites are still inaccessible.

At Microsoft, we’re committed to creating accessible products and services—designed with and for the disability community—that benefit everyone.

Our “shift left” approach to software production—which involves moving quality-assurance, testing, and accessibility checks to earlier in the development lifecycle—means that implementing assistive features and tools is a high priority for Microsoft, rather than a late-stage addition.

And with the rise in importance of AI tools and products, paying close attention to accessibility standards and building these key capabilities into game-changing tech like Microsoft 365 Copilot is a crucial part of our mission here in Microsoft Digital, the company’s IT organization.

A photo of Allen.

“After my accident, I became immediately reliant on accessible technology. Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me.”

Laurie Allen, accessibility technology evangelist, Microsoft

Evangelizing for accessibility

Laurie Allen is one person who knows first-hand the importance of accessibility in enterprise software. A little more than a decade ago, she experienced a spinal cord injury and became a quadriplegic.

Today, Allen works as an accessibility technology evangelist at Microsoft. Every day, she relies on assistive digital technologies to help her be successful in her role—which involves ensuring that our software products are accessible to everyone.

“After my accident, I became immediately reliant on accessible technology,” Allen says. “Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me during that transitionary phase, because my job was the one thing about my life that didn’t dramatically change as a result of the accident.”

The following graphic shows how widespread disability is around the globe: 

Shifting left for inclusivity

At Microsoft, our accessibility strategy includes such disability categories as mobility, vision, hearing, cognition, and learning—because accessibility empowers everyone.

A photo of Garg.

“We view accessibility as a quality of our software, not simply a feature. Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Ankur Garg, accessibility program manager, Microsoft Digital

We begin with the concept of “shift left,” which in this context means incorporating accessibility principles from the project’s outset, instead of waiting until a product is already built.

This strategy mirrors our approach in other key trust domains, such as security and privacy.

“We view accessibility as a quality of our software, not simply a feature,” says Ankur Garg, an accessibility program manager in Microsoft Digital. “Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Here in Microsoft Digital, that manifests as treating accessibility as a core requirement validated through rigorous internal testing of AI agents and embedding standards and inclusive design early in every tool’s development life cycle. We also use internal AI tools to streamline guidance and testing before expanding those practices across the company.  

Accessibility challenges in the age of AI

Technology is moving fast, especially with the advent of AI-powered tools. It’s easier than ever for companies and individuals to quickly generate and publish an app, website, or other digital product.

That means it’s also easier than ever before to create inaccessible software. It’s important to remember that much of the data that generative AI models have been trained on includes websites and apps that were built without considering accessibility guidelines.

A photo of Hirt.

“We want people with disabilities to be represented and see themselves in the technology we’re producing. We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

Alli Hirt, director of accessibility engineering, Microsoft

As a result, we’ve found that many AI code-generation tools and models produce code that, by default, fails to meet Microsoft’s high standards for accessibility.

“We want people with disabilities to be represented and see themselves in the technology we’re producing,” says Alli Hirt, a director of accessibility engineering at Microsoft. “We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

When we’re developing AI-driven products like Microsoft 365 Copilot, the tool must have comprehensive knowledge of different disabilities and be able to give appropriate, contextual help.

“Let’s say I tell Copilot, ‘I have a mobility disability; what software tools can I use?’” Allen says. “Copilot must recognize what a mobility disability is and identify which tools will support me. That’s the data representation we need in our AI models.”

Allen noted that sensitivity and bias are also big factors when creating these kinds of tools.

“Copilot should not respond with, ‘I’m sorry you have a disability,’” she says. “That’s the type of bias we’re working to train out of the models.”

Accessibility as a core commitment

When Satya Nadella became Microsoft CEO in 2014, he redirected the core mission of the company. The new vision was simple: To empower every person and every organization on the planet to achieve more. And accessibility is a core part of that mission.

“At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Laurie Allen, accessibility technology evangelist, Microsoft

Meeting global accessibility standards is our starting point. For example, the Accessibility team’s hub-and-spoke model helps ensure that accessibility is everyone’s responsibility.

The Microsoft Corporate, External, and Legal Affairs (CELA) group oversees accessibility across the company, helping products align with internationally recognized accessibility standards, such as Web Content Accessibility Guidelines (WCAG) and EN 301 549. These standards ensure that digital content, websites, and apps produced today are designed with accessibility in mind.

Understanding how products and services align to key accessibility standards and requirements is an important step in providing inclusive and accessible experiences.

“An organization’s accessibility program succeeds when it’s a priority at every level of the organization, starting with senior leadership,” Allen says. “At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Presenting content in a multimodal way

Here in Microsoft Digital, we embrace software products that give our employees a multimodal approach to presenting content. This means using more than one sense at the same time, like seeing, listening, reading, and speaking. This makes our products accessible to a diverse array of users, including people who learn and work in different ways, and lets our employees choose the approach that works best for them.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed that I could never follow—showed me exactly why accessibility is needed. It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Eman Shaheen, principal PM lead, Microsoft Digital

For example, someone may not have a diagnosed disability, but they might be a better auditory learner than a visual learner.

This reflects what Eman Shaheen, a principal PM lead in Microsoft Digital, learned from a team member when observing how he used assistive technologies.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed I couldn’t even follow—showed exactly why accessibility is needed,” Shaheen says. “It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Here are some examples of multimodal accessibility capabilities offered by Microsoft 365 Copilot that are designed to support diverse user requirements:

Vision

  • Works with screen readers
  • Generates alt text for images
  • Suggests accessible layouts, textual contrast, and consistent structure in documents and slides

Hearing

  • Provides real-time meeting Q&A
  • Produces meeting recaps across multiple languages
  • Summarizes lengthy or fast-moving chats to aid comprehension

Cognitive and neurodivergent (ADHD, dyslexia, autism, executive function)

  • Simplifies complex language
  • Supplies task breakdowns and next-steps guidance
  • Offers tone assistance to help with understanding communication nuances

Mobility

  • Provides voice-driven productivity tools, such as speech-to-text creation
  • Reduces fine‑motor effort by automating lists, tables, and drafts
  • Supports meeting recordings to help compile notes and action items

Speech and communication

  • Drafts and rewrites content for users needing expressive support
  • Refines tone for clarity and empathy in written communication

Learning

  • Summarizes long content to reduce reading burden
  • Organizes notes into structured content

Mental health and fatigue

  • Assists with communication when cognitive energy is low
  • Provides adaptive communication assistance to help users express themselves confidently

How we demonstrate our accessibility vision

Here at Microsoft, we developed a strategic partnership with ServiceNow over the last five years. The two companies work together to accelerate digital transformation for our enterprise and government customers.

Through this partnership, we use the ServiceNow platform for internal helpdesk and ServiceDesk process automation, IT asset management, and integrated risk management.

A photo of Mazhar.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt. That’s when they began fixing accessibility issues proactively, which changed everything.”

Sherif Mazhar, principal product manager, Microsoft Digital

As part of this process, our first assessment alone uncovered 1,800 accessibility bugs in the platform, including 1,200 that were rated as high severity. By contrast, our most recent review found just 24 accessibility-related issues.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt,” says Sherif Mazhar, a principal product manager in Microsoft Digital, who oversees the company’s relationship with ServiceNow. “That’s when they began fixing accessibility issues proactively, which changed everything.”

The next major step for us is ensuring our ServiceNow platform updates align with WCAG 2.2 accessibility standards, which will require reworking older versions of our products. However, doing this work helps us maintain momentum toward a world of more inclusive enterprise software in all lines of business and for all Microsoft customers.

What’s next in accessibility

Digital accessibility work is never done.

As new software and hardware are introduced, user needs and accessibility standards change and grow. At Microsoft, we are committed to making accessibility easier for everyone.

“Right now, we’re making sure every AI agent across Microsoft is tested with assistive technologies—like screen readers and keyboard navigation—to guarantee that the outputs are accessible and compliant,” Garg says.

This “shift left” mentality at Microsoft is ultimately about putting people first. It means that no one should have to wait for a late fix to be able to do their work, or simply to belong.

By embedding accessibility standards into product planning, instead of tacking them on as an afterthought just before (or even after) product launch, we’re helping ensure that these digital experiences will include everyone from day one.

“We may compete on products, especially in AI, but accessibility is a shared mission,” Allen says. “When the industry collaborates on inclusive technology, everyone wins.”

Key takeaways

Here are some tips to keep in mind as you consider your own accessibility strategy in a world of increasingly AI-driven technology:

  • Start with leadership. Championing accessibility from the C-suite signals that this is a top organizational priority.
  • Raise awareness with training. Set up employee learning opportunities regarding accessibility in AI tools and encourage everyone to take part.
  • Design with inclusivity in mind from day one (“shift left”). Incorporate accessibility from the beginning of the software creation process to make sure it isn’t lost in the shuffle of trying to ship a product on time.
  • Think inclusively. Run usability tests with people with lived experience.
  • Treat accessibility as an ongoing practice. Digital accessibility work is never finished; document strategies and share your team’s learnings to keep improving iteratively as an organization.

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
22756
Microsoft CISO advice: Read our four tips for securing your network http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-read-our-four-tips-for-securing-your-network/ Thu, 19 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22779 Geoff Belknap, CVP and operating CISO for Core and Enterprise, shares four key practices your business can use to be prepared for managing network security incidents. Learn from our experience Network isolation (Secure Future Initiative) “Knowing where devices are, who owns them, and what they’re supposed to be doing is pretty important in the middle […]

The post Microsoft CISO advice: Read our four tips for securing your network appeared first on Inside Track Blog.

]]>
Geoff Belknap, CVP and operating CISO for Core and Enterprise, shares four key practices your business can use to be prepared for managing network security incidents.

“Knowing where devices are, who owns them, and what they’re supposed to be doing is pretty important in the middle of an incident,” Belknap says.

Watch this video to see Geoff Belknap discuss how we’re securing our network at Microsoft. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=nWPaaTHGE-M.)

Key takeaways

Here are best practices you can use to secure your network:

  • Build a complete inventory. Keep track of what your network devices are, who owns them, and what they do.
  • Capture robust telemetry. Make sure your operational teams have the tools they need to see and analyze access and authentication logs.
  • Use dynamic access control. Manage who can send packets on the corporate network by applying policies.
  • Deprecate old network assets. Cyberattackers know to look for older, unpatched network devices. You can reduce the attack surface by replacing older devices.

The post Microsoft CISO advice: Read our four tips for securing your network appeared first on Inside Track Blog.

]]>
22779
Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success http://approjects.co.za/?big=insidetrack/blog/deploying-the-employee-self-service-agent-our-blueprint-for-enterprise-scale-success/ Thu, 12 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22492 The case for AI in employee assistance The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company. Thanks […]

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.

]]>

The case for AI in employee assistance

The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company.

Thanks to the power of AI, agents, and Microsoft 365 Copilot, our employees—and workers everywhere—are discovering new ways to be more productive at their jobs every day. Recent research from our Microsoft Work Trend Index shows that knowledge workers are increasingly seeing big gains from using AI tools for work tasks.

As an AI-first Frontier Firm, Microsoft is at the leading edge of a transformation that’s bringing this technology into all aspects of our workplace operations. With tools like Microsoft 365 Copilot providing “intelligence on tap,” we’re forging a human-led, AI-operated work culture that enables our employees to accomplish more than ever before.

Bringing AI to employee assistance

As part of this move to embed AI across our enterprise, it was a natural step for us to apply this burgeoning technology to a common pain point for us and many workplaces today—employee assistance.

Workers in organizations large and small face many common issues in their day-to-day jobs. Whether it’s a problem with their device, a question about their benefits, or a facilities request, our typical employee was often forced to navigate a bewildering array of tools, apps, and systems in order to get help with each specific task.

This confusion is reflected in research showing that most workers are dissatisfied with existing employee-service solutions.

76% of employees find it difficult to quickly access company resources.
58% of employees struggle to locate regularly needed tools and services.

Our studies show that most employees have trouble finding the appropriate tools and resources they need to address their workplace-related questions.

Realizing that this was an ideal opportunity for AI, we set out to develop a state-of-the-art agentic solution. At Microsoft Digital, the company’s IT organization, we partnered with our product groups to develop and deploy the Employee Self-Service Agent, a “single pane of glass” that employees can turn to any time they need help. The product is now broadly available in general release.

A photo of D’Hers.

“With this employee self-service solution, we’re shaping a new era in worker support. With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Nathalie D’Hers, corporate vice president, Microsoft Employee Experience

Because Copilot is our “UI for AI,” the Employee Self-Service Agent is delivered as an agent in Microsoft 365 Copilot. If your employees have access to Copilot, you can deploy the agent at your company at no extra cost. If your employees don’t have a Copilot license, they can access it via Copilot Chat if it’s enabled by your IT administrator.

For the initial development and launch of our Employee Self-Service Agent, we decided to provide agentic help in three categories: Human resources, IT support, and campus services (real estate and facilities). Every organization will have to make its own determination for which functions to include in their implementation. Note that the agent is inherently flexible and expandable; we plan to add additional capabilities, such as finance and legal, in the future.

We learned many lessons in the almost year-long process of developing and implementing the Employee Self-Service Agent across our organization worldwide. The goal of this guide is to pass on what we learned—including how we used it to provide value to our employees and vendors—to help you prepare for, implement, and drive adoption of your own version of the agent.  

“With this employee self-service solution, we’re shaping a new era in worker support,” says Nathalie D’Hers, corporate vice president of Microsoft Employee Experience. “With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Before you start: Developing your plan

As you embark on your Employee Self-Service Agent journey, make sure to establish a clear and structured plan. This was a critical step for us in our deployment, and we can say with confidence that it will help you avoid surprises and increase your chances of a successful outcome.

Based on our experience here at Microsoft, the below is a high-level outline of the steps you should consider as you prepare for deploying your agent.

1. Define prerequisites
Start by making sure that all foundational elements for the agent are in place.

  • Assign licenses to your employees who will interact with the agent. They will need Microsoft 365 Copilot or Copilot Chat.
  • Verify readiness by configuring your Power Platform environments, applying Data Loss Prevention (DLP) policies, and setting up isolation (limited and controlled deployment with guardrails in place) where needed.
  • Ensure connectivity with critical systems by confirming that you have appropriate APIs and connectors available and functioning for the essential workplace systems that your organization uses (e.g., Workday, SAP SuccessFactors, and ServiceNow).
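
A readiness check for these prerequisites can be as simple as verifying that every system your agent depends on has both an endpoint and a connector configured before rollout. The sketch below is purely illustrative; the system names and config keys are assumptions, not an actual deployment schema.

```python
# Hypothetical systems an employee self-service agent might depend on.
REQUIRED_SYSTEMS = ("hr", "it_support", "facilities")

def readiness_gaps(config):
    """Return systems missing an endpoint or connector -- each gap blocks rollout."""
    gaps = {}
    for system in REQUIRED_SYSTEMS:
        entry = config.get(system, {})
        missing = [k for k in ("endpoint", "connector") if not entry.get(k)]
        if missing:
            gaps[system] = missing
    return gaps

config = {
    "hr": {"endpoint": "https://hr.example.com/api", "connector": "hr-connector"},
    "it_support": {"endpoint": "https://support.example.com/api"},
}
gaps = readiness_gaps(config)
```

Running a check like this early surfaces missing connectors while they’re still cheap to fix, rather than during the pilot.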

2. Identify your core team and responsibilities
Successful implementation of the Employee Self-Service Agent requires collaboration across multiple roles and departments in your organization.

  • Business owners from the areas your agent will cover—such as human resources and IT support—can help you define requirements, priorities, success criteria, and telemetry needs.
  • Platform administrators, particularly for Power Platform and tenant/identity teams, can manage your technical configuration.
  • Content owners and editors are needed to identify the knowledge sources to surface in the agent, curate new knowledge sources, and maintain the data underpinning these sources on an ongoing basis.
  • Subject matter experts can provide important “golden” prompt and user scenarios that the agent should prioritize and answer accurately.
  • Compliance, privacy, and security leaders and their teams are needed to address risk considerations.
  • Support professionals can help build a structure for live agent escalation and ticketing operations (in situations where the agent is unable to provide a solution).
  • Focus groups of end users assist with validating requirements and scenarios, as well as help with testing the agent.

3. Establish a clear timeline
We found that creating a schedule for the development, implementation, and adoption of the agent is crucial. A phased approach will help you maintain momentum and accountability over the duration of the project.

For example, here’s a rough implementation timeline that you might use to gauge your progress:

Gantt chart showing 15-week timeline with assessment, deployment, pilot launch, and rollout phases.

4. Articulate your vision

Communicate your rollout plan to your team, including timelines and phases, and adjust it based on feedback. Establish clear goals and meaningful success metrics to guide you and make sure your efforts are in alignment with your company objectives. (Note: You may want to consider key upcoming projects or events in your organization and link the agent roadmap to them. This will help you meet your project’s success criteria faster and encourage quicker agent adoption.)

5. Define your governance

This phase will allow you to define policies and standards and conduct a thorough content audit to ensure accuracy, relevance, security, and sustainability.

6. Implement your agent

This phase involves configuration and integration, followed by testing.

7. Roll out the agent while driving adoption and measurement

We advise deploying the Employee Self-Service Agent using a phased, or ringed, approach. We started with a small group of employees, then gradually rolled it out to larger and larger groups before finally releasing it to our entire organization.

We encouraged adoption with internal targeted communications and promotional efforts. Careful measurement enabled us to track impact and optimize agent performance. This type of concerted change management allowed us to share the latest product developments with our employees and to keep them excited and engaged with the tool.

By investing sufficient time and effort in the planning phase of your deployment, you’ll create a strong foundation for a secure, scalable, and successful self-service agent experience.

Chapter 1: Governance means getting your data right

When a Microsoft employee enters a query into an AI chat tool like Microsoft 365 Copilot, they know they may not receive a response tailored to their specific situation. They are aware that they might need to verify the answer they receive with further research and additional sources.

But when it comes to our company-endorsed self-service agent, the stakes are different. Our employees expect to receive accurate and personally relevant responses when they ask for help. This is particularly true for queries related to important personal details, like HR-related questions about leave policies or benefits.

Although the Employee Self-Service Agent comes pretrained with basic HR and IT support data, we found that the quality of the responses that our employees receive is directly connected to the accuracy, currency, and depth of the information we provide to the tool. You’ll want to spend the necessary time and effort to make sure that your data governance process is well thought-out and thorough, so that your employees experience the best possible results.

“Employee self‑service has a higher bar than generic AI tools,” says Prerna Ajmera, general manager of HR strategy and innovation. “People expect personally tailored and highly accurate answers, especially for HR moments that really matter. We designed the Employee Self‑Service Agent with that expectation in mind, pairing trusted data and deep personalization with strong governance controls so that privacy, security, and trust are built into every interaction.”

Major considerations for governance

We learned that before you configure your agent, you need to establish guardrails that protect your data’s integrity and that build your employees’ trust. These considerations will form the backbone of your governance framework:

  • Managing requirements: Define what the agent must deliver and align your stakeholders on clear, prioritized goals and objectives.
  • Determining and managing resources: Ensure you have the right people, systems, and funding in place to support your full product lifecycle.
  • Data security: Protect your sensitive employee information with strong controls, compliant storage, and least‑privilege access.
  • User access: Establish who can use, administer, and update your agent, with appropriate permissions and guardrails.
  • Change tracking: Monitor your updates to content, configurations, and workflows so your agent always reflects your current policies.
  • Reviewing: Regularly evaluate your content’s accuracy, the agent’s performance, and your organizational fitness to help you keep your employees’ experience with the agent trustworthy.
  • Auditing: Maintain traceability for compliance, incident investigation, and quality assurance across all of your data flows.
  • Deployment control: Manage where, when, and how you roll out new versions of the agent to reduce disruption and ensure consistency.
  • Rollback: Prepare a fast, safe path to reverting your changes if something breaks.

We found that addressing these considerations early in the process creates a governance structure that is proactive rather than reactive, increasing the quality of responses and setting your organization up for success.

Architecture essentials

Understanding the architecture of our agent helped our governance teams make informed decisions about our configuration and integration. To do that, they needed to review and understand its key architectural components. You’ll need to do the same.

Here’s a list of the different architecture components that our team assessed, to help you get started on your own process:   

  • Topics: Structured intents (e.g., “view paystub”) that align to employee questions and drive consistent answers.
  • Domain packages: Pre-curated bundles for different agent segments (like HR and IT support) that provide reusable patterns, prompts, and integrations.
  • Knowledge sources: Documents, intranet pages, FAQs, and databases that ground responses in authoritative content.
  • Connectors: Secure integrations to systems of record (like Workday or SAP SuccessFactors) that enable read/write operations. (Because the Employee Self-Service Agent was built with Copilot Studio, it has access to more than 1,400 different connectors.)
  • Instructions: Governance-approved rules and prompts that shape tone, guardrails, and escalation behavior.
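
To make these components concrete, here's a minimal sketch of how they might be captured in a single configuration record for a governance review. The keys, values, and URL are illustrative assumptions only, not Copilot Studio's actual configuration schema:

```python
# Illustrative only: the component names mirror the list above, but the
# keys, values, and URL are invented, not a real Copilot Studio schema.
AGENT_CONFIG = {
    "topics": ["view_paystub", "reset_password"],          # structured intents
    "domain_packages": ["hr", "it_support"],               # pre-curated bundles
    "knowledge_sources": ["https://intranet.example.com/hr-faq"],
    "connectors": {"hris": "workday", "itsm": "servicenow"},
    "instructions": "Be concise; escalate sensitive HR topics to a live agent.",
}

# A governance review might simply verify every component is accounted for:
REQUIRED = {"topics", "domain_packages", "knowledge_sources", "connectors", "instructions"}
missing = REQUIRED - AGENT_CONFIG.keys()
```

Keeping all five components in one reviewable record makes it easier for your governance team to spot gaps before configuration begins.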

Assessing and preparing your content

A key early governance step is to audit all relevant content in your knowledge bases. This process should include assessing, updating, and, if necessary, restructuring this information before it is ingested by the agent.

An important caveat here is that the agent’s ability to understand which policies and procedures apply to which employee relies on your content having consistent metadata, permissions, and content structure. We found that before feeding your data into the agent, you need to:

  • Inventory existing content: Your content will incorporate many different types, such as SharePoint pages, Microsoft Teams posts, PDFs, intranet articles, and knowledge-base documents. The goal of the inventory process is to identify content that is complete rather than outdated, duplicative, or siloed; if there are issues with the content, they should be addressed before loading into the agent.
  • Assign knowledge owners: The owners should be SMEs who can help validate, tag, and maintain the content going forward. Part of this process is training knowledge owners to prepare and maintain content in ways that make it easily consumable by both agents and people.
  • Structure content for discoverability: All your content needs to have accurate metadata, well-defined topic pages, and consistent naming so that the agent can surface the right information at the right time.

We found that completing a thorough content audit helps ensure that the Employee Self-Service Agent isn’t just chatting—it’s delivering trusted, up-to-date answers that save your workers time and effort as they go about their day.

Be aware of tone and conversational flow

Providing vetted and well-structured data to the agent is important, but it’s not the entire battle. You’ll also need to make sure your agent is given clear guidance on conversational tone and instructions on what to do in specific scenarios.

Make sure you incorporate:

  • Global instructions: Define the agent’s voice, behavior, and escalation rules to ensure consistency and trust. 
  • Topic-level triggers: Map natural language phrases to specific workflows (such as “reset password” or “check PTO”) so the agent routes these common queries correctly.
  • Advanced knowledge rules: Prioritize which data sources to use in ambiguous scenarios, and define when the agent should ask clarifying questions.

Taking these steps gave our agent a better chance of being accurate, helpful, and aligned with our organization’s specific preferences.
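
A minimal sketch of topic-level triggering, assuming a simple phrase-matching router; real Copilot Studio topics use richer natural language understanding than this, and the phrases and workflow names below are invented examples:

```python
# Illustrative trigger phrases mapped to hypothetical workflow names.
TOPIC_TRIGGERS = {
    "reset password": "it_password_reset",
    "check pto": "hr_leave_balance",
    "view paystub": "hr_paystub",
}

def route(utterance: str) -> str:
    """Map an utterance to a workflow, or fall back to a clarifying question."""
    text = utterance.lower()
    for phrase, workflow in TOPIC_TRIGGERS.items():
        if phrase in text:
            return workflow
    # Ambiguous query: per the advanced knowledge rules, ask a
    # clarifying question rather than guessing at a data source.
    return "ask_clarifying_question"
```

The fallback branch is the important part: defining what the agent does when no trigger matches is as much a governance decision as the triggers themselves.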

Addressing common scenarios with “golden” content

Another vital aspect of your content audit is identifying the most frequently accessed information in each topic area.

A good example comes from the preparation of our IT support content for ingestion by the Employee Self-Service Agent. One focus of this effort was on so-called “golden prompts”: the 20 or so topics that generate up to 80 percent of our employee queries (a version of the famous “80/20 rule”).

Our golden prompts are a curated set of scenarios that:

  • Represent our critical user workflows and edge cases
  • Possess clear, expected responses (golden responses)
  • Cover core functionality that must never break

We made sure that the agent was providing high-quality responses for these common scenarios—we recommend you do the same.
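
One way to keep golden prompts honest over time is a small regression harness that replays them against the agent and checks each response for expected content. Here, `ask_agent`, the prompts, and the keyword checks are all hypothetical placeholders for whatever client and success criteria your deployment uses:

```python
# Hypothetical golden prompts paired with keywords their "golden
# responses" must contain; these are illustrative, not our actual set.
GOLDEN_PROMPTS = [
    {"prompt": "How do I reset my password?", "must_mention": ["password", "reset"]},
    {"prompt": "How many vacation days do I have?", "must_mention": ["vacation"]},
]

def evaluate(ask_agent, cases=GOLDEN_PROMPTS) -> list[str]:
    """Return the prompts whose responses miss required keywords.

    `ask_agent` is a stand-in callable (prompt -> response text) for
    whatever API or test client your agent exposes.
    """
    failures = []
    for case in cases:
        response = ask_agent(case["prompt"]).lower()
        if not all(term in response for term in case["must_mention"]):
            failures.append(case["prompt"])
    return failures
```

Running a harness like this after every content or configuration change helps guarantee that the scenarios generating most of your query volume never silently regress.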

Including “zero prompt” content

Another important aspect of your content process should be to develop “zero prompts.” These are preconfigured prompts in the agent that the user can simply click on to get an answer for a common issue or request.

For example, if one of your employees wants to understand how to set up a VPN, they simply click on the zero prompt provided for that topic. The tool then gives them complete instructions on how to set one up.

During our deployment of the agent, one case where we prepopulated the tool with content for a specific, high-demand scenario came when Microsoft made a major announcement regarding employees returning to the office. We knew this policy change would generate a lot of questions from our employees.

In preparation for this, we asked Microsoft 365 Copilot to create a single document that pulled in all the “return to office” material found in its verified HR content database. We then made this document available to the agent. Just by taking that simple step, we saw our user satisfaction ratings in the tool jump from 85 percent to 98 percent for that issue!

In your own deployment, think about what issues and topics generate the most questions from your employees. You can then prepare specific content to address these scenarios, which will increase your chances of success with the agent.

Data security and compliance

Data security was a high priority when we developed our agent, especially because it must necessarily access sensitive HR information on a regular basis. During product development, we made sure that the agent adhered to enterprise-grade security standards, including identity federation, least-privilege access, and encrypted storage.

Because the agent is built on Copilot Studio, it supports robust data-loss prevention features. The agent also complies with regulatory frameworks like the General Data Protection Regulation (GDPR) through built-in auditing and data-retention policies.

One of the big advantages that an AI agent has over a static website or similar data source is the ability to personalize responses for each user. At the same time, we had to make sure that the agent had guardrails in place to avoid overexposing sensitive information. This included detailed disclaimers to help call out these kinds of responses and flag them for more careful handling.

Our agent complies fully with our accessibility standards as well. Like all Microsoft products and services, the tool underwent a rigorous review to ensure it was fully accessible for all users.

Responsible AI

Whenever a new AI application is launched, concerns may be raised about bias, safety, and transparency. That’s why the Employee Self-Service Agent follows the Microsoft Responsible AI principles by default.

When you enable the sensitivity topic in your agent, it screens all responses for harassment, abuse, discrimination, unethical behavior, and other sensitive areas. We tested the agent thoroughly for objectionable responses before it was launched to a broad internal audience at Microsoft.

In addition, the agent includes an emotional intelligence (EQ) option. This feature is designed to make responses more empathetic, context-aware, and relevant for diverse user audiences. It analyzes the conversation’s context and tailors the agent’s replies to ensure that users feel understood and valued throughout their session (which could be particularly relevant for any conversations related to sensitive HR topics, such as family leave). The EQ option is customizable and can be turned off by your product admins.

Key takeaways

The following are important considerations for data governance when you deploy your Employee Self-Service Agent:

  • Employee expectations regarding accuracy and relevance are high for employee self-service tools, which makes data governance a key aspect of your deployment.
  • Consider which data repositories are best to incorporate into your agent, and make sure they are up-to-date and well-structured. This process requires a thorough content audit.
  • Pay special attention to the so-called “golden prompts” that make up a large percentage of expected queries. The agent’s answers to these questions should be top-notch.
  • Restructuring content can improve response quality. When we anticipated huge interest in a particular topic, such as workplace policy changes, we restructured our content on that subject and saw a significant jump in user satisfaction.
  • Build your agent to meet or exceed high standards for data security, privacy, and Responsible AI. These are vital concerns for any product that has access to sensitive personal information.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 2: Implementation with intention

Deploying a powerful and versatile tool like the Employee Self-Service Agent is no simple task. It requires guidance and buy-in from top leaders at the company, as well as detailed planning and execution across disparate parts of your organization. Here, we identify some of the key steps that we took here at Microsoft that can help guide you when launching your own self-service agent.

Determine category parameters

One of the first major decisions around implementing the agent is deciding which business function—we call them agent starters—to choose for your initial implementation.

We recommend starting with HR support or IT help (we started with HR). Both agent starters can be deployed into a single Employee Self-Service Agent experience, but they must be deployed one at a time. 

Note that we’ve built the Employee Self-Service Agent to be connectable with other first- or third-party Copilot agents, enabling a seamless handoff to those agents without having to navigate to other tools or interfaces.

Understanding your deployment steps

There were four essential stages involved in the deployment of our agent, each with multiple steps. Here’s a quick rundown that you can use at your company:

  1. Preparation for deployment
    • Establish roles: Define who will manage, configure, and support the tool, assigning responsibilities to ensure accountability during deployment.
    • Set up your environment: Prepare the necessary hardware, operating system, and network configurations so the agent can run smoothly.
    • Set up third-party system integration: Ensure your infrastructure can securely connect and exchange data with external systems that the agent will need to integrate with.
  2. Installation
    • Install the agent: Deploy the core Employee Self-Service Agent software on the designated servers or endpoints.
    • Install accelerator packages: Add any desired connectors that enable the agent to communicate with commonly used systems for HR, payroll, IT support, etc.
  3. Customization
    • Configure the core agent: Adjust default settings to align with your organization’s policies and workflows.
    • Identify knowledge sources: Specify where the agent will pull information from, such as internal knowledge bases or FAQs.
    • Provide common questions and responses: Add employee FAQs to improve the agent’s ability to respond quickly and accurately.
    • Identify sensitive queries: Flag questions and responses that involve confidential or regulated information to ensure they’ll be handled securely.
  4. Publication
    • Approve the agent: Complete internal reviews and compliance checks to confirm the agent meets your organizational standards before full rollout.
    • Publish the agent: Make the configured agent available to your employees in your production environment.

Customization

The Employee Self-Service Agent operates as a custom agent within Copilot Studio, using our AI infrastructure via the Power Platform. The agent is constructed on a modular architecture that allows you to integrate it with your own enterprise data sources using APIs, prebuilt and custom connectors, and secure authentication mechanisms.

To streamline this integration process, we provide a library of prebuilt and custom connectors through both Copilot Studio and Power Platform. Preconfigured scenarios include connecting to major enterprise service providers such as Workday, SAP SuccessFactors, and ServiceNow. (View the full list of connectors offered by Copilot Studio.)

These connectors facilitate data exchange with the following systems and other agents in this ecosystem:

  • HR information systems
  • IT systems management
  • Identity management
  • Knowledge base platforms

We found that third-party integrations require setup effort and technical expertise across stakeholders in your tenant. Be sure to get buy-in and involve all relevant departments that will be impacted.

Rollout: A phased approach

As previously noted, we started our agent with HR content and then added IT support (we later expanded to include campus services help as well). We rolled the agent out to different groups of employees and geographic regions around the world over the course of months, adding new knowledge sources to the different categories at each step along the way. This gave us an opportunity to gather user data and refine performance of the tool as we went.

Graphic shows the phased rollout of the Employee Self-Service Agent to Microsoft employees in different regions of our global workforce.
We executed a phased rollout of the Employee Self-Service Agent across different regions and countries at Microsoft. As we expanded the audience for the tool, we also added more categories, knowledge sources, and capabilities.

Adding campus support services required us to handle queries and tasks related to dining, transportation, facilities, and similar subjects. This was a challenging addition, because the facilities and real estate space—unlike the HR and IT support areas—doesn’t have many large service providers, which are easier to provide prebuilt connectors for.

One area that did lend itself to prebuilt connectors, however, was facilities ticketing.

Because many of our campus facilities vendors use Microsoft Dynamics 365, we were able to create an out-of-the-box connector in the agent for their ticketing process. You can take advantage of these kinds of preconfigured tools in your deployment.  

Key takeaways

Here are some things to remember when implementing the Employee Self-Service Agent at your organization:

  • Decide which starter agent you will deploy first. We recommend starting with a single agent covering one area (vertical), such as HR or IT support, and then expanding from there.
  • Consider a phased rollout to allow time to refine responses and ramp up the number of topic areas and knowledge sources installed in your agent.
  • Use the prebuilt connectors to make it easier to integrate the agent with your existing systems. We developed customized connectors for major HR and IT service providers and a Microsoft Dynamics 365 connector to integrate with our many facilities vendors around the world.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 3: Driving adoption by breaking old habits

Once upon a time, when our employees needed help with a technical issue or an HR question, they literally picked up the phone and called the relevant internal phone number. That quickly evolved into an email-centered system, where employee questions were sent to a centralized inbox that would then generate a service request. Still later, chat-based help was introduced.

Using AI to handle employee questions and service requests is a natural step in this evolution, as large language models were built to parse vast data repositories and return the right information (often with the help of multi-turn queries and responses). And by encouraging self-service, an AI agent can help meet employee needs faster while saving the organization’s staffing resources for other needs.

But getting employees to change their habits and use a tool like the Employee Self-Service Agent wasn’t going to be as easy as just flipping a switch. Here’s how we handled this important change management task at Microsoft.

Adoption across verticals

A key principle that we learned during the adoption process was that 80% of our change management activities for the agent were applicable to all our verticals (whether HR, IT support, campus facilities, or another category). We didn’t need to reinvent the wheel each time we added to the topics that the agent covered.

This allowed us to create a change management “playbook” that we could use each time we expanded to a new category. So, while roughly 20% of the strategies we used were specific to that vertical, the vast majority were the same, which saved time as we moved through onboarding the different categories.

Leadership is key

To get our employees to change the way they ask for help, we found it essential to get the support of our key leaders, something we refer to as “sponsorship.”

We found that good sponsorship doesn’t just come from your central product, communications, or marketing groups. It is equally vital to invest in relationships with local leadership in different regions as you roll out the agent (especially in multinational companies like ours).

Local leaders understand the various regional intricacies—including language, functionality, and the rhythm of the business—that can help inspire their segments of the workforce to adopt a new tool, and then evangelize it to others in turn. Working closely with these kinds of sponsors will help you pull off a successful adoption campaign.

If you have works councils, be sure to seek out your representatives and solicit their feedback on your agent experience early on. You can help them understand how the agent was developed and trained, then address any concerns they raise.

We’ve found that once our works councils are made aware of the careful processes we go through to protect user privacy, and to ensure compliance with our Responsible AI standards, they become enthusiastic supporters and can help promote agent adoption. (Read more about our experience with our works councils and the Microsoft 365 Copilot rollout.)

Defining your messaging

Work with your internal communications team to come up with a well-planned messaging framework for your agent rollout. Based on our experience, it’s likely you’ll need to communicate across a wide variety of teams and organizations like HR, IT, facilities, finance, and so on.

It’s important to be clear about how you’re positioning the product for your employees. This will allow you to develop both overall messaging for general use, but also content tailored to specific teams or employee roles. The more sophisticated your messaging, the more likely it is to be effective in encouraging user adoption of the agent in their regular workflow.

Listening to feedback

As Customer Zero for the company, our employees are our best testers and sources of feedback during our product development process. The Employee Self-Service Agent was no different, and we continue to gather crucial feedback and user data throughout the internal adoption process.

Because the agent is a tool centered on helping your workers resolve challenges and get quick answers to questions, you’ll want to set up your own systems for capturing their feedback and make sure the agent is meeting a high-quality bar.

We found that setting yourself up for success when it comes to listening to your employees involves two major aspects: Developing and deploying a system for gathering employee sentiment about the product, and then creating a system for analyzing that feedback and funneling the findings back to your IT team.

Some of the types of feedback and methods we used to gather it during the development process included:

  • User-testing data
  • User satisfaction ratings
  • User surveys, interviews and other research
  • Voice of the customer (in-product feedback)
  • Pilot projects and focus groups (smaller segments of users)
  • IT support incidents
  • Usage data and telemetry
  • Community-based early adopter feedback (similar to our Copilot Champs community)
  • Social media feedback and comments

You can choose from among these options to set up your own feedback mechanisms, or come up with something customized to your implementation.
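
However you collect it, feedback is most useful once it is funneled into comparable metrics for your IT team. Here is a tiny sketch, assuming each feedback record carries a channel name and a satisfaction flag (an invented record shape, not a real telemetry schema):

```python
from collections import defaultdict

def satisfaction_by_channel(records: list[dict]) -> dict[str, float]:
    """Aggregate per-channel satisfaction rates (0.0 to 1.0).

    Each record is assumed to look like:
        {"channel": "survey", "satisfied": True}
    """
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["channel"]] += 1
        positives[r["channel"]] += int(r["satisfied"])
    # Rate per channel, so you can compare surveys vs. in-product feedback.
    return {channel: positives[channel] / totals[channel] for channel in totals}
```

Comparing rates across channels can reveal blind spots; for example, in-product ratings that run much higher than survey responses may mean frustrated users are abandoning the tool before rating it.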

Calibrating your usage goals

Remember that the Employee Self-Service Agent is not an all-purpose AI tool like Microsoft 365 Copilot, which your employees might use a dozen times a day. Instead, they may only need assistance from HR or IT support tools and information sources a few times a week (or even less). Your usage targets should be calibrated accordingly.

At the same time, the more categories of assistance you add to the agent, the more your usage levels can grow—along with user expectations.

When we decided to add campus support (dining, transportation, and facilities-related needs and queries), one of the motivators was to provide information that users might need on a more regular basis. This addition helped us increase adoption and build daily usage habits for the agent among our employees.

Making the agent your front door for employee assistance

Your employees may have longstanding habits around the ways that they seek assistance, such as moving quickly to email a service request, or immediately engaging a live support technician. There might even be someone helpful in the office next to them that they lean on for IT support. We’re aware that breaking such habits can be a challenge.

That’s why we decided to change our own employee-assistance workflows. In the case of HR, we are planning to remove the option to email a centralized alias for help, which was the default in the past. This forcing function will instead prompt our employees to turn to the agent first for assistance, creating a “front door” for all our HR service requests.

For our IT support function, we are switching from a Virtual Agent chatbot to the Employee Self-Service Agent, which should provide users with a richer experience and a higher rate of resolution.

Of course, our main goal is for the agent to handle an employee’s issue without having to seek further assistance. But what happens when the agent cannot resolve their problem or handle their request? That’s why we’ve also implemented a “smooth handoff”—either to create a service request or connect the user to a live agent for specialized assistance.

There are three key steps in this process:

  1. The Employee Self-Service Agent can identify when the user has reached a point where they need to move to a higher level of assistance via a live agent or a service request. (Note that we also allow the employee to make that determination for themselves.)
  2. We then give them different options for how they want to connect to live support.
  3. When the employee is transferred to a live technician, the Employee Self-Service Agent is able to pass on the chat history from its session with the user. That way, the technician or staff support can quickly get up to speed on the situation, see what the employee has already asked about and tried, and start helping them immediately.

Enabling the employee to quickly and smoothly transition to a higher level of support without leaving the chat increases user satisfaction and makes them more likely to return to the agent the next time they need assistance.

Strategic outreach to employees

Of course, your workers, like ours, are busy with their day-to-day job functions. They may be resistant to trying a new tool or going through special training on how to access employee assistance. Or they may simply not know about it.

Because of our regionally phased rollout of the agent, email was one of the most effective tools we used to connect with specific audiences and make them aware of the tool. With specific email lists, we could make sure that only employees in that phase of the rollout were seeing the message.

A key aspect of getting our employees to adopt any new tool is reinforcement—the process of sustaining behavior change by providing ongoing incentives, recognition, and support. Some of the reinforcement strategies we used for the agent included:

  • Targeted communications: Emails and organizational messages invited employees to try the agent as they received access
  • Multi-channel campaigns: Promotion of the agent via portals, newsletters, digital signage, and more to keep it at the forefront of employee minds
  • Training: Workshops and micro-learning sessions about the agent
  • Social campaigns: Posts highlighting the tool to increase awareness and gather employee feedback (see details below)
  • Leadership support: Managers modeled usage of the agent and promoted it regularly
  • Processes: The tool was part of regular employee workflows

An example of a fun Viva Engage post that our internal communications team created to encourage daily usage of the Employee Self-Service Agent during the holiday season.

One very important communications channel that we used in our adoption efforts was Microsoft Viva Engage. We set up a private Engage community for the Employee Self-Service Agent, then populated it with each new wave of users as they were given access to the tool (eventually all were given access when the tool went companywide).

We used this channel for various kinds of messaging:

  • General product awareness
  • Updates on new or changing functionality
  • Answering questions or addressing frustrations (two-way dialogue between users and the product team)
  • Fun and helpful “tips and tricks” that users could try (these could come from the product team, leadership, or individual product “champions”)

We also inserted messages about the new agent into our regular communications with different audiences, including HR professionals, IT support personnel, and internal comms staff at the company. And we regularly messaged company leaders about it, so they could encourage their teams and direct reports to support the effort and evangelize for the tool.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two. That’s why ongoing communications to users was important.”

Prerna Ajmera, general manager, HR digital strategy and innovation

Of course, as a natural language chat tool, the Employee Self-Service Agent doesn’t require formalized training. The product itself is designed to guide users and allow them to experiment, simply by stating their needs in plain language. Most employees will already be familiar with AI tools like Microsoft 365 Copilot, so effectively using an AI-powered employee-assistance agent should be a low bar to clear.

Managing expectations

Your Employee Self-Service Agent rollout will be an ongoing journey as you add topic areas, functionalities, and other product features. Your product roadmap will evolve as you learn more about what your employees need from this kind of AI solution.

One factor to consider is how to set realistic user expectations about what the agent can do while the product matures and improves. As we gradually rolled out the tool, we messaged that the agent was in “early preview,” which helped avoid employee disappointment when it couldn’t handle a specific request.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two,” Ajmera says. “That’s why ongoing communications to users was important, as new capabilities were added and speed and accuracy improved.”

We also created messaging for early users indicating that their testing was an integral part of making the tool more effective. This created a positive feedback loop while also keeping employee expectations reasonable.

How we measured success

Carefully tracking and analyzing your success metrics throughout your development and release of the product is a high priority. Without this step, you are working in the dark.

At Microsoft, we identify the key performance indicators (KPIs) for a particular product and then use them as our North Star for any internal release. But the specifics of those KPIs can vary from product to product.

Early results from our internal deployment of the Employee Self-Service Agent showed marked increases in success rates when users sought assistance from an AI tool as compared with existing support channels.

For example, measuring monthly active user (MAU) statistics might be extremely important for an all-purpose productivity tool like Microsoft 365 Copilot. But for an employee-assistance tool, the goal is not necessarily regular use, because employees aren’t constantly facing challenges that require help (we hope). Usage statistics may also be affected by certain events or cyclical needs, such as annual employee reviews or a major technology change (like a significant Windows update).

With this in mind, we identified certain key metrics for the Employee Self-Service Agent. In this case, the top KPIs included:

  • Percentage of support tickets deflected
  • Net satisfaction score
  • Latency period
  • Reliability
  • Total time savings
  • Total cost savings
  • Identified and prioritized issues (reported back to product group)

Overall, we focused on the rate at which employees were able to resolve issues without opening a support ticket, as this is where we expected the greatest return in time and cost savings. We set an overall target of 40% ticket deflection across the different verticals, and we’re making solid progress toward this goal as we continue to refine and improve the agent.
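
The deflection metric is simple to compute: it is the share of support interactions that end without a ticket being opened. Here is a minimal sketch (our illustration, with hypothetical numbers, not real Microsoft data or the production telemetry pipeline):

```python
def deflection_rate(resolved_by_agent: int, total_sessions: int) -> float:
    """Percentage of support interactions resolved without a ticket."""
    if total_sessions == 0:
        return 0.0
    return 100.0 * resolved_by_agent / total_sessions

# Hypothetical monthly numbers per vertical: (deflected, total sessions).
verticals = {"HR": (420, 1000), "IT": (350, 1000)}

TARGET = 40.0  # overall deflection target, in percent

for name, (deflected, total) in verticals.items():
    rate = deflection_rate(deflected, total)
    status = "on target" if rate >= TARGET else "below target"
    print(f"{name}: {rate:.1f}% deflected ({status})")
```

Tracking the rate per vertical, as above, makes it easy to see which help categories are pulling the overall number up or down.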

Part of our measurement process is a monthly progress meeting of key project stakeholders, where all KPIs are evaluated to see if our targets are being met. If the results do not meet expectations, we identify the potential causes and discuss what adjustments need to be made to address these shortfalls.

Key takeaways

Here are some key things to remember when it comes to adoption efforts for your Employee Self-Service Agent:

  • Don’t reinvent the wheel. Most of your change management and adoption strategies for the agent will be the same across different regions and help categories.
  • Line up product sponsors. Finding leaders and others across the organization to help you promote the Employee Self-Service Agent within their own groups, functions, and regions can make a big difference in gaining employee trust and encouraging adoption.
  • Set up proper listening channels. You’ll want to gather as much feedback as possible from your employees as you roll out the agent so you can understand what is working well and what needs improvement. This kind of feedback loop can also make your employees feel heard and help them shape the tool.
  • Make the shift to agent-first help. Employee habits for seeking assistance can be resistant to change. We decided that turning off the “email to create a service ticket” workflow was a great way to nudge our workers to recognize the agent as the first option for their assistance needs.
  • Be strategic in your communications. Use tools like email, Viva Engage, and other appropriate communications channels to target your communications and encourage a two-way conversation with employees about the agent. Sharing fun tips and encouraging peer support are other ways to increase awareness and engagement with the product.
  • Identify your key metrics. We determined our benchmarks for success for this particular type of agent, then tracked them and made the results available to key stakeholders. This allowed us to measure the impact and effectiveness of the product.

Learn more

How we did it at Microsoft

Although some of the blog posts below are about adoption efforts related to Microsoft 365 Copilot, they can give you ideas on how we promote internal adoption of agentic AI products at Microsoft.

Further guidance for you

Begin your journey with the Employee Self-Service Agent

Agentic AI offers incredible promise to transform employee productivity, giving individuals access to powerful tools that enable them to accomplish more. We believe the Employee Self-Service Agent is another step along that path, allowing workers to get instant help with tasks that used to be cumbersome and time-consuming.


Now that you’ve read about our experience deploying the tool, it’s time to start your own journey. Successful implementation means your people spend less time on the phone with support staff or hunting through web pages for help with routine employment tasks, and more time on their productive work, reducing job-related pain points and frustrations.

You can benefit from the lessons we’ve learned and the many helpful features and capabilities that we’ve built into this product, all of which are designed to make your implementation as fast, easy, and effective as possible.

“We’re excited to get the Employee Self-Service Agent out and into the hands of our customers, so that they can reap the same benefits that we’re already seeing from it,” says Brian Fielder, vice president of Microsoft Digital. “As we continue to refine the product and expand the number of verticals it can cover, we expect to realize exponential efficiency gains and capture even more cost savings across our entire organization.”

Key takeaways

Here are some of the essential top-level learnings we gleaned from our deployment of the Employee Self-Service Agent, which you should keep in mind as you start out on your own deployment path:

  • Identify and engage the right people. You’ll need buy-in and advocacy from leaders across the organization; the involvement of key stakeholders from HR, IT, legal, and compliance; and technical guidance from admins, license administrators, environment makers, and knowledge-base subject matter experts.
  • Develop your plan. Understand the major phases of governance, implementation, and adoption of the tool, and make sure that you have adequate resources and support for each phase.
  • Verify the quality of your content. Your chances of success will be better if you undertake a thorough content assessment to address the currency, accuracy, and structure of all relevant knowledge bases. Pay particular attention to the topics and tasks that are in greatest demand by employees when they access help services.
  • Consider a phased rollout. Releasing your Employee Self-Service Agent to progressively larger groups of workers across your organization allows you to gather data and feedback and improve the performance and relevance of the agent over time. You can also expand the number of categories that your agent covers as you go, increasing the impact and appeal of the tool.
  • Communicate strategically to promote adoption. Convincing employees to break longstanding habits when seeking help is a challenge. Email is helpful for targeting specific groups of employees, but be sure to use tools like Viva Engage to create community, answer questions, provide fun tips and tricks, and announce new capabilities and options.
  • Set clear goals and measure against them. Come up with a targeted set of KPIs that reflect your organization’s needs and aspirations, then develop a plan to capture data for each of these indicators and a regular reporting cadence to keep stakeholders informed of progress toward your goals.

Learn more

How we did it at Microsoft

Try it out

We’d like to hear from you!

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.

Microsoft CISO advice: Explore our four tips for securing your customer support ecosystem http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-explore-our-four-tips-for-securing-your-customer-support-ecosystem/ Thu, 12 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22635 Microsoft business operations teams know all too well that cyberattackers seek to exploit customer support pathways. Tools that can unlock customer accounts or aid in troubleshooting issues in complex environments are a rich target. “The path attackers really like to use is to compromise support tooling and laterally move to your core tooling,” says Raji […]

Microsoft business operations teams know all too well that cyberattackers seek to exploit customer support pathways. Tools that can unlock customer accounts or aid in troubleshooting issues in complex environments are a rich target.

“The path attackers really like to use is to compromise support tooling and laterally move to your core tooling,” says Raji Dani, Deputy Chief Information Security Officer (CISO) for Microsoft business operations.

Dani and her team focus on understanding and mitigating the risks within customer support operations. In this video, she shares principles and practices for every business that relies on online tools in its customer support ecosystem.

Watch this video to see Raji Dani discuss four customer support ecosystem security principles. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=rJ87jjz3vvo.)

Key takeaways

Here are best practices you can apply to your customer support ecosystem:

  • Create dedicated and isolated support identities. Use standardized support identities with phish-resistant multifactor authentication based in a separate identity ecosystem.
  • Implement least privilege and enforce device protection. Only grant the access needed for a given task and nothing more.
  • Ensure tooling does not have high privilege access to customer data. Architect secure tools and manage service-to-service trust and high privileged access.
  • Implement strong telemetry. Anomalous patterns in logs and telemetry data are often the first clue a cyberattack is underway.
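
The telemetry point can be illustrated with a minimal baseline check. This is a simplified sketch of the idea, not a production detector: flag a support identity whose daily access count sits far above its historical norm.

```python
from statistics import mean, stdev

def is_anomalous(history: list, today: float, threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than `threshold` standard
    deviations above the historical mean (a simple z-score check)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat baseline: any change is notable
    return (today - mu) / sigma > threshold

# Hypothetical daily access counts for one support identity.
baseline = [12, 15, 11, 14, 13, 12, 16]

print(is_anomalous(baseline, 14))  # → False (a normal day)
print(is_anomalous(baseline, 90))  # → True  (a suspicious spike)
```

Real detection systems are far more sophisticated, but the principle is the same: a baseline of normal behavior makes the anomalous pattern stand out, often before any other sign of compromise appears.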

The post Microsoft CISO advice: Explore our four tips for securing your customer support ecosystem appeared first on Inside Track Blog.
