Microsoft 365 Copilot Archives - Inside Track Blog http://approjects.co.za/?big=insidetrack/blog/tag/microsoft-365-copilot/ How Microsoft does IT Thu, 09 Apr 2026 16:34:58 +0000

Conditioning our unstructured data for AI at Microsoft http://approjects.co.za/?big=insidetrack/blog/conditioning-our-unstructured-data-for-ai-at-microsoft/ Thu, 09 Apr 2026 16:05:00 +0000

Anyone who has ever stumbled across an old SharePoint site or outdated shared folder at work knows firsthand how quickly documentation can fall out of date and become inaccurate.

Humans can usually spot the signs of outdated information and exclude it when answering a question or addressing a work topic. But what happens when there’s no human in the loop?

At Microsoft, we’ve embraced the power and speed of agentic solutions across the enterprise. This means we’re at the forefront of developing and implementing innovative tools like the Employee Self-Service Agent, a chat-based solution that uses AI to address thousands of IT support issues and human resources (HR) queries every month—queries that used to be handled by humans. Early results from the tool show great promise for increased efficiency and time savings.

In developing tools like this agent, we were confronted with a challenge: How do we make sure all the unstructured data the tool is trained on is relevant and reliable?

Many organizations are facing this daunting task in the age of AI. Unlike structured data, which is well organized and more easily ingested by AI tools, the sprawling and unverified nature of unstructured data poses some tricky problems for agentic tool development. Tackling this challenge is often referred to as data conditioning.

Read on to see how we at Microsoft Digital—the company’s IT organization—are handling data conditioning across the company, and how you can follow our lead in your own organization.

How AI has changed the game

We already understand that the power of AI and large language models has fundamentally changed the game for many work tasks. The way employee support functions is no exception to this sweeping change.

A photo of Finney.

“A tool like the Employee Self-Service Agent doesn’t know if something is true or false—it only sees information it can use and present. That’s why stale or outdated information is such a risk, unless you manage it up front.”

David Finney, director of IT Service Management, Microsoft Digital

Instead of relying on human agents to answer employee questions or resolve issues, we now have AI agents trained on vast corpora of data that can find the answer to a complex question in seconds.

But in our drive to give these tools access to everything they might need, they sometimes end up consuming information that isn’t helpful.

“A tool like the Employee Self-Service Agent doesn’t know if something is true or false—it only sees information it can use and present,” says David Finney, director of IT Service Management. “That’s why stale or outdated information is such a risk, unless you manage it up front.”

Before AI, support teams didn’t need to worry as much about the buried issues with unstructured content because a human could generally spot them or filter them out manually. After we turned these tools loose, they began reading everything, including:

  • Older or hidden SharePoint content that humans would never find—but AI can
  • Large knowledge base articles with buried incorrect information
  • Region-specific content that’s not properly labeled

“For example, humans never saw the old, decommissioned SharePoint sites because they were automatically redirected,” says Kevin Verdeck, a senior IT service operations engineer. “But AI definitely could find them, and it surfaced ancient information that we didn’t even know was still out there.”

Data governance is the key

A major part of the solution to this problem is better governance. We had to get a handle on our data.

A photo of Cherel.

“We needed to determine the owners of the sites and then establish processes for reviewing content, updating it, and defining how it should be structured. I would highly encourage that our customers think about governance first when they are launching their own AI tools, because everything flows from it.”

Olivier Cherel, senior business process manager, Microsoft Digital

The first step was a massive cleanup effort, including removing decommissioned SharePoint sites and deleting references to retired programs and policies. The next step was making sure all content had ownership assigned to establish who would be maintaining it. This was followed by setting up schedules for regular content updates (lifecycle management).

Governance was the first priority for IT content, according to Olivier Cherel, a senior business process manager in Microsoft Digital.

“We had no governance in place for all the SharePoint sites, which were managed by the various IT teams,” Cherel says. “We needed to determine the owners of the sites and then establish processes for reviewing content, updating it, and defining how it should be structured. I would highly encourage that our customers think about governance first when they are launching their own AI tools, because everything flows from it.”

Content governance was also a huge challenge for other support areas, such as human resources. A coordinated approach was needed.

“HR content is vast, distributed across multiple SharePoint sites, and not everything has a clear owner,” says Shipra Gupta, an engineering PM lead in Human Resources who worked on the Employee Self-Service Agent project. “So, we collaborated with our content and People Operations teams to create a true content strategy: one source of truth, no duplication, with clear ownership and lifecycle management.”

Cherel observes that this process forces teams to think about their support content in a totally different way.

“People realize they need a new function on their team: content management,” he says. “You can’t simply rely on the knowledge found in the technicians’ heads anymore.”

Adding structure to the unstructured data

The simple truth is that part of what makes unstructured data so difficult for agentic AI tools to deal with is that it’s disorganized.

A photo of Gupta.

“Our HR Web content already had tagging for many policy documents, which helped us get started. But it wasn’t consistent across all content, so improved tagging became a big part of our governance effort.”

Shipra Gupta, engineering PM lead, Human Resources

AI works best with content that has as many of the following characteristics as possible:

  • Document structure, including:
    • Clear headers and sections
    • Page-level summaries
    • Ordered steps and lists
    • Explicit labels for processes
    • HTML tags (which AI can see, but humans can’t)
  • Structured metadata, including:
    • Region codes (e.g., US-only policies)
    • Device-specific tags
    • Secure device classification
    • Country-based hardware procurement policies and HR rules

This kind of formatting and metadata allows the AI tool to parse and sort the information more reliably, meaning its answers will be considerably more accurate (even if it’s a little slower to return them).

“A good example here is tagging,” Gupta says. “Our HR Web content already had tagging for many policy documents, which helped us get started. But it wasn’t consistent across all content, so improved tagging became a big part of our governance effort.”
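As a rough sketch of what “structured” can mean in practice, here’s a hypothetical knowledge article carrying the kinds of metadata listed above. The field names and the region check are illustrative assumptions, not a Microsoft schema:

```python
# Hypothetical sketch: a knowledge base article with the kind of structured
# metadata described above. Field names are illustrative, not a real schema.
from dataclasses import dataclass, field

@dataclass
class KnowledgeArticle:
    title: str
    summary: str                                        # page-level summary
    region_codes: list = field(default_factory=list)    # e.g., ["US"]
    device_tags: list = field(default_factory=list)     # e.g., ["laptop"]
    is_secure_device: bool = False
    sections: list = field(default_factory=list)        # ordered (header, body)

def applies_to(article: KnowledgeArticle, region: str) -> bool:
    """An article with no region codes applies everywhere."""
    return not article.region_codes or region in article.region_codes

vpn_article = KnowledgeArticle(
    title="Reset your VPN profile",
    summary="Steps to reset a corporate VPN profile on managed devices.",
    region_codes=["US"],
    sections=[("Prerequisites", "..."), ("Steps", "...")],
)

print(applies_to(vpn_article, "US"))   # True
print(applies_to(vpn_article, "FR"))   # False
```

With even this much structure, a retrieval step can exclude region-specific content for users outside that region instead of letting the model guess.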

Be sure that as part of your content review, you’re setting aside the time and resources to add this kind of structure to your unstructured data. The investment will pay off in the long run.

Using AI to help condition data for use

As AI tools grow more sophisticated, we’re applying them directly to AI-related challenges, including the challenge of unstructured data itself.

“Right now, these efforts are primarily human-led, but we are applying AI to, for example, help write knowledge base articles,” Cherel says. “Also, we’re starting to use AI to determine where we have content gaps, and to analyze the feedback we’re getting on the tool itself. If we just rely on humans, it’s not going to scale. We need to leverage AI to stay on top of things and keep improving the tools.”

The future of such technology is all about using AI to improve itself.

“We’re looking at building an agent to help validate content,” Finney says. “We can use it to check for outdated references, old processes, or abandoned terms that are no longer used. Essentially, we’ll have AI do a readiness check on the content that it is consuming.”
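A readiness check like the one described could start as simply as scanning content for terms known to be retired. This hypothetical Python sketch (the term list and sample document are made up) illustrates the idea:

```python
# Hypothetical sketch of a content "readiness check": scan text for terms
# that are known to be retired. Term list and sample content are made up.
import re

RETIRED_TERMS = {"Skype for Business", "Internet Explorer", "Windows 7"}

def readiness_issues(text: str) -> list[str]:
    """Return the retired terms found in a piece of content, sorted."""
    return sorted(t for t in RETIRED_TERMS
                  if re.search(re.escape(t), text, re.IGNORECASE))

doc = "To join the meeting, open Skype for Business in Internet Explorer."
print(readiness_issues(doc))  # ['Internet Explorer', 'Skype for Business']
```

A production version would likely pair a curated term list like this with an LLM pass for subtler staleness signals, such as references to processes that no longer exist.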

Ultimately, the better the data is conditioned, the more accurate and relevant the agent’s responses will be. And that will make the end user—the truly important human in the loop—much happier with the final outcome.

Key takeaways

We’ve highlighted some insights to keep in mind as you consider how to condition your own organization’s data for ingestion by AI tools:

  • Unstructured data becomes a business risk when AI is in the loop. AI agents consume everything they can access, including outdated, hidden, or conflicting content, making data conditioning a critical prerequisite for agentic solutions.
  • AI highlights content issues that were previously invisible. Decommissioned SharePoint sites, outdated policies, and region-specific content without proper labels all became visible after AI agents began scanning across systems.
  • Governance is a vital part of the conditioning process. Assigning clear content ownership and establishing lifecycle management are essential steps in ensuring the content being fed to AI tools is of high quality and is well managed.
  • Adding structure to data dramatically improves AI accuracy. Clear document formatting, consistent tagging, and rich metadata help AI agents return more relevant, reliable answers.
  • AI will increasingly be used to condition and validate the data it consumes. Microsoft is already exploring using AI to identify content gaps, analyze feedback, and flag outdated information, creating a continuous improvement loop that can scale faster than human review alone.

The post Conditioning our unstructured data for AI at Microsoft appeared first on Inside Track Blog.
Harnessing AI: How a data council is powering our unified data strategy at Microsoft http://approjects.co.za/?big=insidetrack/blog/harnessing-ai-how-a-data-council-is-powering-our-unified-data-strategy-at-microsoft/ Thu, 09 Apr 2026 16:00:00 +0000

Information technology is an ever-evolving landscape. Artificial Intelligence is accelerating that evolution, providing employees with unprecedented access to information and insights. Data-driven decision making has never been more critical for businesses to achieve their goals.

In light of this priority, we have established a Microsoft Digital Data Council to help accelerate our companywide AI-powered transformation.

Our data council is a cross-functional team with representation from multiple domains within Microsoft, including Microsoft Digital, the company’s IT organization; Corporate, External, and Legal Affairs (CELA); and Finance.

A photo of Tripathi.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation. It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Naval Tripathi, principal engineering manager, Microsoft Digital

Our data council’s mission is to drive transformative business impact by establishing a cohesive data strategy across Microsoft Digital, empowering interconnected analytics and AI at scale. Our vision is to guide our organization toward Frontier Firm maturity through a clear blueprint for high-quality, reliable, AI-ready data delivered on trusted, scalable platforms.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation,” says Naval Tripathi, principal engineering manager in Microsoft Digital. “It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Enterprise IT maturity

This article is part of a series on Enterprise IT maturity in the era of agents. We recommend reading all four of these articles to gain a comprehensive view of how your organization can transform with the help of AI and become a Frontier Firm.

  1. Becoming a Frontier Firm: Our IT playbook for the AI era
  2. Enterprise AI maturity in five steps: Our guide for IT leaders
  3. The agentic future: How we’re becoming an AI-first Frontier Firm at Microsoft
  4. AI at scale: How we’re transforming our enterprise IT operations at Microsoft (this story)

Our evolving data strategy

Over the past two decades, we at Microsoft—along with other large enterprises—have continuously evolved our data strategies in search of the right balance between control and agility. Early approaches were highly decentralized, with different teams owning and managing their own data assets. While this enabled local optimization, it also resulted in inconsistent quality and limited enterprise-wide insight.

Our subsequent shift toward centralized data platforms brought much-needed standardization, security, and scalability. However, as data platforms grew more sophisticated, ownership often drifted away from the business domains closest to the data, slowing responsiveness and diluting accountability.

Today, we and other leading companies are embracing a more balanced, federated approach, often described as a data mesh. Rather than forcing all our data into a single centralized system or allowing unchecked decentralization, the data mesh formalizes domain ownership while embedding governance, quality, and interoperability directly into shared platforms.

With this approach, our domain teams publish data as well-defined, discoverable products, while common standards for security, metadata, and compliance are enforced through automation rather than manual processes. This model preserves enterprise trust and consistency without sacrificing speed or autonomy.

By adopting a data mesh mindset, we can scale analytics and AI more effectively across the organization while still keeping ownership closely connected to the business focus. The result is a system that supports innovation at the edges, strong governance at the core, and seamless collaboration across domains, enabling the transformation of data from a technical asset to a strategic, enterprise-wide capability.

Quality, accessibility, and governance

To scale enterprise data and AI, organizations must first ensure their data is trusted, discoverable, and responsibly governed. At Microsoft Digital, our data strategy is designed to create data foundations that power intelligent applications and effective decision making across the company.

A photo of Uribe.

“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools. Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Miguel Uribe, principal PM manager, Microsoft Digital

By implementing a data mesh strategy at scale, we aim to unlock valuable data insights and analytics, enabling advanced AI scenarios. Our data council focuses on three core dimensions that make AI-ready data possible:

  • Quality: Making sure enterprise data is reliable and complete
  • Accessibility: Enabling secure and discoverable access to data
  • Governance: Protecting and managing our data responsibly

Together, these dimensions form the foundation for scalable innovation and AI-powered data use. They connect data silos and ensure consistent, high‑quality access across the enterprise—enabling both humans and AI systems to work from the same trusted data foundation. As AI use cases mature, this foundation allows AI agents to retrieve and reason over data through enterprise endpoints, while supporting advanced analytics, data science, and broader technology scenarios.

“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools,” says Miguel Uribe, a principal PM manager in Microsoft Digital. “Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Quality

AI-ready data is available, complete, accurate, and high-quality. By adopting this standard, our data scientists, engineers, and even our AI agents are better able to locate, process, and govern the information needed to drive our organization and maximize AI efficiencies.

By utilizing Microsoft Purview, our data council can oversee the monitoring of data attributes to ensure fidelity. It also monitors parameters to enforce standards for accuracy and completeness.
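As a generic illustration of completeness monitoring (this is not the Microsoft Purview API, just the underlying idea), a simple score might be the share of required attributes that are actually populated across a dataset’s records:

```python
# Generic illustration of a completeness metric (not the Microsoft Purview
# API): score a dataset by the share of required attributes populated.
def completeness(records: list[dict], required: list[str]) -> float:
    """Fraction of required fields that are present and non-empty."""
    total = len(records) * len(required)
    filled = sum(1 for r in records for f in required if r.get(f))
    return filled / total if total else 1.0

records = [
    {"owner": "alice", "region": "US", "last_reviewed": "2026-01-15"},
    {"owner": "bob",   "region": "",   "last_reviewed": "2025-11-02"},
]
score = completeness(records, ["owner", "region", "last_reviewed"])
print(f"{score:.2f}")  # 0.83 -- one of six required values is missing
```

In a real pipeline, a score like this would feed a threshold check so that datasets falling below the enforced standard are flagged for their owners rather than consumed by AI systems.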

Accessibility

Ensuring that our employees get access to the information they need while prioritizing security is a foundational element of our enterprise data strategy. Microsoft Fabric allows us to unify our organization’s siloed data in a single “mesh” that enables advanced analytics, data science, data visualization, and other connected scenarios.

Microsoft Purview then gives us the ability to democratize that data responsibly. By implementing a data mesh architecture, our employees can work confidently, unencumbered by siloed or inaccessible data, and with the assurance that the data they’re working with is secure.

A graphic shows how the data mesh architecture allows employees to access data they need, with platform services and data management zones surrounding this architecture.
The data mesh architecture enables our employees to do their work efficiently while preventing the data they’re working on from becoming siloed.

The data mesh connects and distributes data products across domains, enabling shared data access and compute while scaling beyond centralized architectures.

Platform services are standardized blueprints that embed security, interoperability, policies, standards, and core capabilities—providing guardrails that enable speed without fragmentation.

Data management zones provide centralized governance capabilities for policy enforcement, lineage, observability, compliance, and enterprise-wide trust.  

Governance

As organizations scale AI capabilities, strong governance becomes essential to ensure security, compliance, and ethical data use. Data governance—which includes establishing data policies, ensuring data privacy and security, and promoting ethical AI usage—is critical, as is compliance with General Data Protection Regulation (GDPR) and Consumer Data Protection Act (CDPA) regulations, among others.

However, governance is not only a technical capability; it’s also a cultural commitment.

Responsible data use must be embedded into the way teams manage data and build AI solutions. Through Microsoft Purview, we implemented an end-to-end governance framework that automates the discovery, classification, and protection of sensitive data across the enterprise data landscape.

This unified approach allows teams to innovate confidently, knowing that the data powering their insights and AI systems is trusted and protected, as well as responsibly managed.

“AI systems are only as reliable as the data that powers them,” Uribe says. “By investing in trusted and well-managed data, we accelerate not only the adoption of AI tools but our ability to generate meaningful insights and intelligent outcomes.”

The data catalog as the discovery layer

By serving as a common discovery layer for humans and AI, the data catalog ensures that governance translates directly into speed, accuracy, and trust at scale.

A unified data strategy only succeeds if both people and AI systems can consistently find the right data. At Microsoft, this is enabled by our enterprise data catalog, which operationalizes the standards set by our data council. 

For business users, the catalog provides intuitive search, ownership transparency, and trust signals—enabling confident self‑service analytics. For AI agents, the same catalog exposes machine‑readable metadata, allowing agents to programmatically discover canonical datasets, validate schema and freshness, and respect governance constraints.
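To make the agent side concrete, here’s a hypothetical sketch of an agent validating a dataset from catalog metadata alone before using it. The entry’s shape and field names are assumptions for illustration, not a real catalog schema:

```python
# Hypothetical sketch: an agent validating a dataset from machine-readable
# catalog metadata before use. The entry's shape is assumed, not a real schema.
from datetime import date

catalog_entry = {
    "name": "hr.policies.leave",
    "schema": {"policy_id": "string", "region": "string", "text": "string"},
    "last_refreshed": date(2026, 4, 1),
    "certified": True,
}

def usable(entry: dict, required_cols: set[str], as_of: date,
           max_age_days: int = 30) -> bool:
    """Check certification, schema coverage, and freshness from metadata."""
    fresh = (as_of - entry["last_refreshed"]).days <= max_age_days
    has_cols = required_cols <= entry["schema"].keys()
    return entry["certified"] and fresh and has_cols

print(usable(catalog_entry, {"policy_id", "region"}, as_of=date(2026, 4, 9)))  # True
```

The point is that the agent never has to open the data to decide whether to trust it; the governance signals travel with the catalog entry.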

Our role as Customer Zero

In Microsoft Digital, we operate as Customer Zero for the company’s enterprise solutions, so that our customers don’t have to.

That means we do more than adopt new products early. We deploy them at enterprise scale, operate them under real‑world constraints, and hold them to the same standards our customers expect. The result is more resilient, ready‑to‑use solutions and a higher quality bar for every enterprise customer we serve.

A photo of Baccino.

“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution. That’s how enterprise readiness becomes real.”

Diego Baccino, principal software engineering manager, Microsoft Digital

Our data council embodies this Customer Zero mindset through its Enterprise Readiness initiative. By engaging product engineering as a unified enterprise voice, the council drives strategic conversations that surface operational blockers, influence roadmap prioritization, and ensure new and existing data solutions are truly ready for enterprise use.

These learnings are then shared broadly across Microsoft Digital to accelerate adoption, reduce duplication, and scale proven patterns across teams.

“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution,” says Diego Baccino, a principal software engineering manager in Microsoft Digital and a member of the council. “That’s how enterprise readiness becomes real.”

This work is deeply integrated with our AI Center of Excellence (CoE), where Customer Zero principles are applied to accelerate AI outcomes responsibly. Together, the AI CoE and the data council focus on improving data documentation and quality—foundational capabilities that are required to make AI feasible, trustworthy, and scalable across the enterprise.

By grounding AI innovation in measurable data quality and governance standards, Microsoft Digital ensures that experimentation can safely mature into production‑ready solutions. The partnership between our data council, our AI CoE, and our Responsible AI (RAI) Council is essential to our broader data and AI strategy.

“AI readiness isn’t aspirational—it’s operational,” Baccino says. “By measuring the health of our data, setting clear quality baselines, and using those signals to guide product and platform decisions, we turn data into a strategic asset and AI into a repeatable capability.”

Together, these teams exemplify what it means to be Customer Zero: Transforming enterprise experience into action, governance into acceleration, and data into durable competitive advantage.

Advancing our data culture

Our data council plays a pivotal role in advancing the organization’s transition from data literacy to enterprise data and AI capability. In conjunction with our AI CoE, it creates curricula and sponsors learning pathways, operational practices, and community programs to equip our employees with the skills and mindset required to thrive in a data- and AI-centric world.

While early efforts focused on improving data literacy, our data council’s mission has evolved to enable data and AI capability at scale together with our AI CoE—where employees not only understand data but can effectively apply it to build, operate, and govern intelligent solutions.

“Our focus is not just teaching our teams about data. It is enabling employees to apply data to create AI-driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Miguel Uribe, principal PM manager, Microsoft Digital

Our curriculum includes high-level courses on data concepts, applications, and extensibility of AI tools like Microsoft 365 Copilot, as well as data products like Microsoft Purview and Microsoft Fabric.

By facilitating AI and data training, offering internally focused data and AI certifications, and fostering internal community engagement, our council ensures that employees develop the capabilities required to responsibly build and operate AI-powered solutions. Achieving data and AI certifications not only promotes career development through improved data literacy, but also enhances the broader data-driven culture within our organization.

“We recognize that AI capability is built when data skills are applied directly to real AI scenarios and business outcomes—not when learning exists in isolation,” Uribe says. “Our focus is not just teaching our teams about data; it is enabling employees to apply data to create AI‑driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Lessons learned

Our data council was created to develop and execute a cohesive data strategy across Microsoft Digital and to foster a strong data culture within our organization. Over time, several critical lessons have emerged.

Executive sponsorship enables transformation

Executive sponsorship is a key element to ensure implementation and adoption of a data strategy. Our leaders are committed to delivering and sustaining a robust data strategy and culture and have been effective champions of the council’s work.

“Leadership provides support and reinforcement of the council’s mission, as well as guidance and clarity related to diverse organizational priorities,” Baccino says.

Cross-functional collaboration accelerates impact

Our council’s work has also benefited from the diverse representation offered by different disciplines across our organization. Embracing diverse perspectives and understanding various organizational priorities is critical to implementing a successful data strategy and culture in a large and complex organization like Microsoft Digital.

Modern platforms allow for scalable AI productivity

Technology and architecture also play a critical role in enabling enterprise data and AI capability. Platforms like Microsoft Purview and Microsoft Fabric provide the governance, discovery, and analytics infrastructure required to create trusted, AI-ready data ecosystems.

Combined with strong leadership support and community engagement, these platforms allow our organization to move beyond isolated data projects toward connected, enterprise-wide intelligence.

As our organization continues to evolve, our data council’s strategic work and valuable insights will be crucial in shaping the future of data-driven decision making and AI transformation at Microsoft.

Key takeaways

Here are some things to keep in mind as you contemplate forming a data council to help you manage and scale AI impacts responsibly at your own organization:

  • A data mesh strikes the balance enterprises have been chasing. By formalizing domain ownership while enforcing standards through shared platforms, you avoid both chaotic decentralization and slow, over-centralized control.
  • Governance is an accelerator when it’s automated and embedded. Using platforms like Microsoft Purview and Microsoft Fabric, governance shifts from a manual gatekeeping function to a built‑in capability that enables faster, trusted analytics and AI.
  • AI systems are only as strong as their discovery layer. A unified enterprise data catalog allows both people and AI agents to find, trust, and use data consistently—turning standards into operational speed.
  • Customer Zero turns theory into enterprise‑ready execution. By operating its own data and AI platforms at scale, Microsoft Digital provides real telemetry and practical feedback that directly shapes product readiness.
  • Building AI capability is a cultural effort, not just a technical one. Our data council’s focus on applied learning, certification, and real-world AI scenarios ensures data skills translate into durable business outcomes.
  • AI scale exposes the cost of fragmented data ownership. A data council cuts through silos by aligning priorities, resolving tradeoffs, and concentrating investment on the data assets that matter most for AI impact.
  • Shared metrics create shared ownership. Publishing data quality and AI‑readiness scores at the leadership level reinforces accountability and positions data as a core enterprise asset.

The post Harnessing AI: How a data council is powering our unified data strategy at Microsoft appeared first on Inside Track Blog.
Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft http://approjects.co.za/?big=insidetrack/blog/responsible-ai-why-it-matters-and-how-were-infusing-it-into-our-internal-ai-projects-at-microsoft/ Thu, 26 Mar 2026 16:05:00 +0000

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

Like the computer itself and electricity before it, AI is a transformational technology. It’s providing never-before-seen opportunities to reimagine productivity, address major social challenges, and democratize access to technology and knowledge.

As AI reshapes how we work and live, it brings with it both transformative potential and complex challenges. Across the industry, concerns about bias, safety, and transparency are growing.

At Microsoft, we believe that realizing AI’s benefits requires a shared commitment to responsibility—one we take seriously. As a result, we aren’t just creating AI solutions. We’re taking the lead on infusing responsible AI principles into our technology and organizational practices.

Prioritizing responsible AI across Microsoft

The most impressive AI-powered capabilities in the world mean nothing if people don’t trust the technology. Microsoft and many of our customers across all industries are working to strike the right balance between innovation and responsibility.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust. Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

Mike Jackson, head of AI Governance, Enablement, and Legal, Microsoft Office of Responsible AI

IT leaders and CXOs aren’t just deploying AI tools. They’re also thinking of the right guardrails to implement around those tools as their organizations mature. Meanwhile, developers and deployers want to be sure they’re building and implementing AI solutions within the bounds of responsibility.

As an organization that’s mapping the frontier of AI while creating business-ready tools for our customers, Microsoft is shaping the global conversation on responsible AI. We accomplish that not only through policy and governance, but also by embedding responsibility into the ways we build, deploy, and scale AI.

Laying the foundation for this work is the duty of our Office of Responsible AI (ORA). This team brings policy and governance expertise to the responsible AI ecosystem at Microsoft.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust,” says Mike Jackson, head of AI Governance, Enablement, and Legal for the Office of Responsible AI. “Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

ORA advances secure, trustworthy AI development, deployment, and innovation through governance, legal expertise, internal practice, public policy, and guidance on sensitive uses and emerging technology. The team focuses on empowering innovation while ensuring it falls within Microsoft’s governance, compliance, and policy guardrails.

ORA also partners closely with product and engineering teams as well as other trust domains like privacy, digital safety, security, and accessibility. The team created our Microsoft Responsible AI Standard, the cornerstone of our governance framework, and ensures internal AI initiatives align with it.

The Responsible AI Standard translates our six principles into actionable requirements for every AI project across Microsoft:

Fairness

AI systems should treat all people equitably. They should allocate opportunities, resources, and information in ways that are fair to the humans who use them.

Privacy and security

AI systems should be secure and respect privacy by design.

Reliability and safety

AI systems should perform reliably and safely, functioning well for people across different use conditions and contexts, including ones they weren’t originally intended for.

Inclusiveness

AI systems should empower and engage everyone, regardless of their background, striving to be inclusive of people of all abilities.

Transparency

AI systems should ensure people correctly understand their capabilities.

Accountability

People should be accountable for AI systems, with oversight in place so humans remain in control.

ORA reports to the Microsoft Board of Directors and collaborates with stakeholders and teams across the company to operationalize these principles, implementing policies and practices that apply to AI applications. The office determined that every AI initiative should undergo an impact assessment to ensure it aligns with the standard.

If ORA is our compass for responsible AI, our companywide Responsible AI Council has its hands on the steering wheel.

The council, led by Chief Technology Officer Kevin Scott and Vice Chair and President Brad Smith, was formed at the senior leadership level as a forum and source of representation across research, policy, and engineering. It provides leadership, strategic guidance, and executive support and sponsorship to advance strategic objectives around innovation and responsible AI.

A photo of Tripathi.

“ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI team

Under the council’s guidance, responsible AI CVPs, division leaders, and a network of responsible AI champions across the company operationalize the implementation of our Responsible AI Standard and compliance with our policies.

The structure of these teams is straightforward.

Every division has a designated CVP and division lead to steer the work and connect their team to the overarching Responsible AI Council. Within those divisions, each organization has a lead responsible AI champion or a set of co-leads to steer their team of champions. Those champions act as subject matter experts, reviewers for the impact assessment process, and points of contact for the teams developing AI initiatives.

Implementing AI governance within Microsoft IT

As members of the company’s IT organization, Microsoft Digital’s responsible AI division lead and champion team have a special role to play. They helped develop a critical internal workflow tool, which has now become a mandatory part of our responsible AI assessment process.

“The key is to ensure full alignment of responsible AI practices with ORA,” says Naval Tripathi, principal engineering manager and co-lead for Microsoft Digital’s Responsible AI Team. “ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

This tool logs every project, guides AI developers through initial impact assessments all the way to final reviews, and facilitates those workflows for champions.

A photo of Po.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process. This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users.”

Thomas Po, senior product manager, Microsoft Digital

By streamlining the process through a unified portal, the tool increases efficiency and minimizes errors that can arise from manual processes. It also encourages teams to make responsible AI part of the software development lifecycle (SDL) itself, not a hurdle or an afterthought.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process,” says Thomas Po, a senior product manager working on Campus Services agents. “This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users. That makes it more manageable in the long term, and having it all in one tool gives us more transparency.”

Our unified internal workflow looks like this:

  • Project initiation and system registration: During the design phase for an AI initiative, the engineering team accesses the portal and registers a new AI system. From there, they fill out fields with crucial information, including a title, description, the developer team’s division, whether the project will include internal or external resources, the relevant champion who should review their initiative, and other details. Within this initial form, different scenarios trigger different review parameters and requirements; for example, publishing a tool externally or engaging with sensitive use cases prompts additional review.
  • Release assessment: After the system registration is complete, the team initiates the release assessment, a much more thorough review designed to ensure the AI-powered solution is ready to go live. At this point, the engineering team needs to provide detailed documentation. That includes the volume and kinds of data the system will use, potential harms and mitigations, and more. A release assessment includes experts from our Office of Responsible AI, Security, Privacy, and other teams, who review sensitive use cases and initiatives that include generative AI.

If the project clears all the requirements and reviews, it’s ready to go live. Crucially, we don’t think of these stages as a set of hurdles teams need to clear to complete their projects. Instead, the process guides engineering teams through the design elements they need to consider and provides opportunities for feedback from subject matter experts.
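The two-stage workflow above can be sketched as a simple mapping from registration answers to triggered reviews. This is a hypothetical illustration, not the actual tool’s schema: the field names, review names, and trigger logic are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class SystemRegistration:
    """Illustrative registration fields; the real tool's schema isn't public."""
    title: str
    description: str
    division: str
    external_release: bool = False   # will the tool be published externally?
    sensitive_use: bool = False      # does it touch sensitive use cases?
    uses_generative_ai: bool = False

def required_reviews(reg: SystemRegistration) -> list[str]:
    """Map a registration's answers to the reviews it triggers."""
    reviews = ["champion_review"]             # every project gets a champion review
    if reg.external_release:
        reviews.append("external_release_review")
    if reg.sensitive_use or reg.uses_generative_ai:
        reviews.append("expert_review")       # e.g., ORA, Security, Privacy experts
    return reviews

reg = SystemRegistration(
    title="Dining agent",
    description="Answers natural-language questions about campus menus",
    division="Microsoft Digital",
    uses_generative_ai=True,
)
print(required_reviews(reg))  # ['champion_review', 'expert_review']
```

Encoding the triggers this way is what lets a portal route a project to the right reviewers automatically instead of relying on teams to know which reviews apply.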

“The tool captures all the requirements from ORA and incorporates them into a developer-friendly workflow,” says Padmanabha Reddy Madhu, principal software engineer and responsible AI champion for Employee Productivity Engineering in Microsoft Digital. “It’s also a great way to pull AI champions into the design phase so we can support our colleagues’ work.”

With more than 80 AI projects currently underway across Microsoft Digital, logging and streamlining are essential. Teams are working on all kinds of ways to boost enterprise processes and employee experiences, like the following examples from Campus Services that users can access through our Employee Self-Service Agent:

  • A facilities agent helps employees take action when they discover an issue at one of our buildings, like a burnt-out light, a spill, or physical damage. The agent creates a ticket to alert a Facilities team so they can resolve it and allows the submitter to follow up on progress.
  • A campus event agent makes onsite gatherings like talks and Microsoft Garage build-a-thons more discoverable through simple queries. Using this agent, employees can more easily discover and plan around events that interest them, adding value to the in-person experience and incentivizing community.
  • A dining agent addresses the challenges of multiple on-campus restaurants featuring menu options that shift daily. Employees can use natural language queries like “Where can I get teriyaki today?” The agent does the rest. This kind of agent can be especially helpful for employees with allergies or dietary restrictions, providing a boost to accessibility for the on-campus dining experience.

A photo of Wu.

“AI is rapidly becoming a standard part of how we build and operate. As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale.”

Qingsu Wu, principal group product manager, Microsoft Digital

Our policies and practices have embedded a culture of responsibility and trust into our internal AI development processes. With that trust comes the confidence to experiment.

“AI is rapidly becoming a standard part of how we build and operate,” says Qingsu Wu, principal group product manager in Microsoft Digital. “As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale. By embedding Responsible AI into our engineering practices, teams have the clarity and confidence they need to manage risk proactively and deliver value without compromising safety or trust.”

Far from thinking of responsible AI assessments as an administrative or policy burden that creates additional work, teams now recognize their benefits. They look at the process as an extra set of eyes from a trusted partner. By minimizing legal and compliance risks through our Responsible AI Council’s expertise, our teams save time and stress, and we avoid problems like delayed releases or rollbacks.

A photo of Smith.

“What we’re doing is entirely novel in the tech world. Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

Jamian Smith, principal product manager and co-lead, Microsoft Digital Responsible AI team, Microsoft Digital

Lessons learned: Embedding responsible AI into our development efforts

Throughout this process, we’ve learned lessons that will be helpful for other organizations just beginning their AI journeys:

  • We empowered early adopters and enthusiasts as responsible AI champions. They act as anchors and resources for developers who use AI, so we made sure they had the knowledge and training they needed to unlock downstream value.
  • Culture has been crucial to our success, especially our growth mindset and our focus on trust. Emphasizing these aspects of our company culture helped us embed responsible AI into core SDL processes and make it second nature for our engineering teams.
  • Processes are one thing, and tooling is another. If your responsible AI assessment workflow isn’t attuned to your needs, simply building a review portal tool won’t get you the rest of the way. First, we thought about the process we needed to put in place to solidify responsible AI practices and support our teams’ work. Then we built a tool that supports those workflows as easily and seamlessly as possible.
  • Accuracy relies on data, and data tends to reflect the biases of the humans who organize it. It’s necessary to actively correct bias through introspection and testing.

“What we’re doing is entirely novel in the tech world,” says Jamian Smith, principal product manager and co-lead for Microsoft Digital’s Responsible AI team. “Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

As your organization begins to experiment with its own AI projects, take these concrete steps to infuse responsibility into the solutions you create:

  1. Establish a strong foundation based on core principles and standards that align with your organizational culture. The Microsoft Responsible AI Standard is a great place to start because it reflects our experience and the expertise we’ve built as AI technology leaders and providers.
  2. Seek out the activators across your organization: people with a passion for AI, security, transparency, and other challenge areas, along with a willingness to learn and the ability to lead. Think about how to place them in both centralized and distributed positions.
  3. With the rapidly evolving regulatory climate around AI, it’s crucial to build a broad understanding of compliance and keep pace with its developments. Involve dedicated regulatory, compliance, and legal professionals in researching and monitoring global standards, and have them communicate that information to your organization, particularly through training and updates that help teams adapt new regulations into their core processes.
  4. Create a process for responsible AI assessment. Consider ways to break it into stages that propel projects forward rather than hindering them. Enlist the right people to assess projects, and consider tooling that streamlines actions for both creators and assessors. Our AI Impact Assessment Guide can help you get started.
  5. Benefit from pioneers in the space, including our experts at Microsoft. Our journey has produced ready-to-use resources that can accelerate your progress. Examples include our Responsible AI Toolbox for GitHub, hands-on tools for building effective human-AI experiences, and our AI Impact Assessment Template.

“It’s not about how fast you can move, but how prepared you are. Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI Team

Building your capacity to create AI tools responsibly won’t happen without careful planning and strategy. As part of that process, embed responsible AI into your development workflows by emulating the practices we’ve pioneered at Microsoft.

“It’s not about how fast you can move, but how prepared you are,” Tripathi says. “Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

By prioritizing responsible AI, businesses of all kinds, all over the world, can ensure that the AI revolution is a truly human movement.

Key takeaways

These insights can help you as you begin your own journey through responsible AI:

  • Realize that this isn’t just a technical transition. It’s also a gradual evolution and an ongoing journey.
  • Work with people across your organization to establish goals and standards, because different disciplines bring different expertise and insights to the table. This will also align your responsible AI standards with your organizational values.
  • Start with the basics and build from there. Establish principles, create processes, and construct tooling around those structures.
  • A wide array of tooling is readily available in the world of AI. Seek out providers that model responsible values.
  • Lean on your existing experts across privacy, security, accountability, and compliance. Their skills will be crucial in this new technological landscape.
  • Conducting your own responsible AI groundwork is crucial, but you can also partner with Microsoft. We run on trust, and we’ve thought about these issues to pave the way for your success. Follow our lead, consider the best ways to adapt our lessons to your organization, and come to us with questions.

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI http://approjects.co.za/?big=insidetrack/blog/accelerating-transformation-how-were-reshaping-microsoft-with-continuous-improvement-and-ai/ Thu, 26 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20297 Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers. Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, […]

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers.

Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, is seizing this moment by reinventing processes for agentic workflows powered by continuous improvement (CI).

We believe that AI-powered agents, Microsoft 365 Copilot, and human ambition are the key ingredients for unlocking opportunity across every industry.

A photo of Laves.

“Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

David Laves, director of business programs, Microsoft Digital

By combining our AI capabilities with continuous improvement, we’re executing initiatives that increase our productivity and improve our performance. We’re forging a new path for how companies operate in the era of AI.

Welcome to the age of AI-empowered continuous improvement.

Our vision for continuous improvement, turbo-charged by AI

At Microsoft Digital, we’re embracing continuous improvement to unlock greater operational excellence and better employee experiences.

“One of the main tenets of our culture at Microsoft is a growth mindset, and that involves experimentation and curiosity,” says David Laves, director of business programs within Microsoft Digital. “Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

Our capacity to drive process improvements has been crucial to our AI transformation as a company. We’ve adopted a “CI before AI” approach to ensure that we don’t end up automating inefficient processes. By engaging in activities that focus on continuous improvement, our teams can better identify which problems to address with AI and prioritize meeting customer needs.

“Continuous improvement is really about understanding your business, its needs, and where you can find value,” says Matt Hansen, a director of continuous improvement at Microsoft. “It gives us the language to scale our efforts out across everything we do.”

This process isn’t just another way to enable AI. In fact, AI is essential to enabling continuous improvement itself.

A photo of Campbell.

“When leaders stay actively engaged and partner through these Centers of Excellence, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

Don Campbell, senior director, Microsoft Digital

Operationalizing continuous improvement and AI

Operationalizing continuous improvement and AI enablement is a leadership imperative at Microsoft, and one that doesn’t just happen organically. As an organization, we are deliberate about turning business strategy into measurable outcomes through clear sponsorship, disciplined prioritization, the right resourcing, and sustained investment in change management and employee skilling.

“The difference between strategy and real business impact is execution,” says Don Campbell, a senior director in Microsoft Digital. “That execution requires strong leadership sponsorship and clearly designed continuous improvement efforts and AI Centers of Excellence (CoEs), which translate business strategy into operational reality. When leaders stay actively engaged and partner through these CoEs, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

To support leadership’s vision, we’ve put organizational resources in place to manage our continuous improvement investments, guide practices, and support teams. There’s an overarching continuous improvement CoE within Microsoft Digital, which works in close partnership with the AI CoEs, forming an integrated model that connects enterprise priorities with frontline execution.

Together, these CoEs establish shared standards, provide clarity on where to invest, and help us move faster with confidence, turning ambition into sustained business impact.

A photo of West.

“Continuous improvement is about process, but it’s also about people.”

Becky West, lead, Continuous Improvement Center of Excellence, Microsoft Digital

Continuous improvement and people

As we build out the organizational structures that underpin our investment in continuous improvement, we’re approaching the people side of change with intention.

Currently, we’re undertaking skilling efforts and communicating with every employee about how their role fits into core continuous improvement tools, including bowler cards, Gemba walks, Kaizen events, and monthly business reviews. We’re also demonstrating how “CI + AI” is a powerful combination.

The roadmap is there, the structure is in place, and we’re already seeing progress.

“Continuous improvement is about process, but it’s also about people,” says Becky West, lead for the Continuous Improvement CoE within Microsoft Digital. “A guiding hand like the Continuous Improvement CoE is how you make sure those two components align.”

Three Microsoft Digital continuous improvement initiatives

As the company navigates the early days of its continuous improvement journey, Microsoft Digital is becoming a proving ground for the larger CI framework we want to deploy across the company. Our teams are spearheading projects to bring this framework to diverse functions like asset management, incident response (with a designated responsible individual), and third-party software licensing.

Enterprise IT asset management

Microsoft Digital’s Enterprise IT Asset Management team oversees the 1.6 million devices that power the company, from servers and IoT devices to labs, networks, and 800,000 employee endpoints. Safeguarding this vast landscape is critical to enterprise cybersecurity.

Three pillars form the foundation of our security efforts: protect, detect, and respond. All three depend on a complete, accurate device inventory.

Unified visibility enables proactive protection through enforced security controls, improves detection by spotting anomalies and misconfigurations, and accelerates responses by reducing investigation and remediation time. Without this foundation, security teams lack the precision to execute effectively.

To reach the goal of a unified inventory, the team initiated a continuous improvement initiative to build a consolidated source of truth for Microsoft Digital IT assets. Grounded in the principle of “progress over perfection,” the team initially narrowed its focus to Microsoft Lab Services (MLS) and IoT devices, with a vision to eventually expand to networks, employee devices, conference rooms, and printers. The ultimate goal is to move toward a truly comprehensive inventory.

This foundation will not only enhance security but also deliver enterprise-wide value through consistent policy enforcement, more resilient infrastructure, and comprehensive lifecycle management. By applying continuous improvement processes to help prioritize high-impact opportunities and using AI to accelerate outcomes, the program is enhancing Microsoft’s operational excellence and security posture.

“It’s better to do step A than wait until you’re ready to do steps A, B, C, and D,” says Aniruddha Das, a principal PM in Microsoft Digital.

As the team progressed from Gemba walks to Kaizen events under the guidance of the Continuous Improvement CoE, they dug deeper into areas of waste. Then they identified potential actions, breaking them down into “value-add,” “non-value-add-but-essential,” and “non-value-add.”

A photo of Ashwin Kaul

“For every action item, we’re always asking ourselves how we can make these things better through AI. We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Ashwin Kaul, senior product manager, Microsoft Digital

This exercise helped them prioritize their activities and land on a starting point: a device security index that would provide an overview of our hardware environment’s security posture. Essentially, it would represent a list of device security statuses.
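One plausible shape for such an index is an aggregate score over per-device statuses. The status names, weights, and scoring below are assumptions for illustration; the post doesn’t describe how the team actually computes the index.

```python
# Hypothetical weights for per-device security statuses; the real index's
# categories and scoring are not described in the post.
DEVICE_STATUSES = {
    "compliant": 1.0,        # fully patched, enrolled, policy-enforced
    "needs_attention": 0.5,  # inventoried but missing a security control
    "unknown": 0.0,          # not yet in the inventory's source of truth
}

def security_index(statuses: list[str]) -> float:
    """Return a fleet-wide index in [0, 1] from per-device statuses."""
    if not statuses:
        return 0.0
    return sum(DEVICE_STATUSES[s] for s in statuses) / len(statuses)

fleet = ["compliant", "compliant", "needs_attention", "unknown"]
print(security_index(fleet))  # 0.625
```

A rollup like this makes the payoff of a unified inventory concrete: every device that moves out of “unknown” directly raises the measurable security posture.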

The team identified distinct improvement areas for IoT and MLS devices. For IoT devices, they needed to build the inventory from the ground up. MLS already had a fairly complete inventory, so the team set a goal of improving data quality. Although each of these challenges is different, both are excellent opportunities for AI-empowered continuous improvement.

Now that the project is underway, the team plans to use an AI agent to automate device registration for IoT devices, which currently relies on manually uploaded spreadsheets. It’s a prime example of how streamlining a process with continuous improvement enables AI to automate and accelerate our work.

On the MLS side, the team is creating an AI-driven normalization tool to automate the de-duplication and correction of inaccuracies in device data. The goal is to get from less than 50% data quality to 100%, dramatically improving our security posture through greater accuracy.
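The de-duplication half of that normalization work can be sketched deterministically: canonicalize the fields that drift in manually entered inventories, then collapse records that share a normalized key. This is an illustrative sketch, not the team’s tool; the field names and normalization rules are assumptions.

```python
import re

def normalize(record: dict) -> dict:
    """Canonicalize fields that commonly drift in hand-entered inventories."""
    return {
        # Strip punctuation/spacing variants so 'ab-123' and 'AB123' match
        "serial": re.sub(r"[^A-Z0-9]", "", record["serial"].upper()),
        "model": record["model"].strip().title(),
    }

def dedupe(records: list[dict]) -> list[dict]:
    """Keep one record per normalized serial number."""
    seen: dict[str, dict] = {}
    for rec in records:
        norm = normalize(rec)
        seen.setdefault(norm["serial"], norm)  # first occurrence wins
    return list(seen.values())

raw = [
    {"serial": "ab-123", "model": " surface hub "},
    {"serial": "AB123", "model": "Surface Hub"},   # same device, different entry
    {"serial": "CD456", "model": "lab sensor"},
]
print(len(dedupe(raw)))  # 2
```

Rules like these handle the mechanical duplicates; an AI layer would sit on top to resolve the fuzzier inaccuracies that rules can’t catch.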

“For every action item, we’re always asking ourselves how we can make these things better through AI,” says Ashwin Kaul, a senior product manager within Microsoft Digital. “We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Continuously improving the designated responsible individual experience

On the Digital Workspace team, designated responsible individuals (DRIs) are in charge of maintaining the health of our production systems. When technical emergencies arise, they’re the rapid-response point people who take the lead.

A photo of Ajeya Kumar

“We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Ajeya Kumar, principal software engineer, Microsoft Digital

That process can be incredibly stressful, and when every moment counts, efficiency is key. Meanwhile, a big part of a DRI’s work is simply finding out what went wrong so they can fix the incident.

But their job isn’t just about crisis management. When there are no active incidents, they work on engineering enhancements to improve the efficiency of production systems and clear backlog projects.

There’s also a handover process that takes place when one DRI finishes their rotation and another goes on-call. That involves a report about any incidents that have occurred, active issues, actions taken, key metrics, and other important information.

With these two priorities in mind, our Digital Workspace team initiated a continuous improvement process review. Their Gemba walk provided a crucial starting point.

“The planning stage is all about figuring out what the process is, what it should be, and what we can do to improve it,” says Ajeya Kumar, a principal software engineer on the Digital Workspace team within Microsoft Digital. “We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Collectively, the team decided to tackle these challenges with a multifunctional AI agent they call the Smart DRI Agent. This agent’s primary role would be synthesizing and presenting information to its human counterparts to help them save time in context-heavy situations.

The AI elements that the team has planned can be broken out into the following capabilities:

  • Text summarization: Going through logs and identifying key insights.
  • Data correlation: Tracking and collating error logs.
  • Automation: Updating the status of issues, keeping abreast of communications, and providing point-in-time, daily, and weekly summaries of system health.
  • Identifying patterns: Building troubleshooting guides based on frequency patterns.
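The data-correlation and pattern-identification capabilities can be illustrated with a small sketch: collapse the variable parts of error lines so recurring failures group into frequency patterns. This is a hypothetical simplification of what the Smart DRI Agent does, with made-up log lines; the agent itself layers AI summarization on top of this kind of collation.

```python
import re
from collections import Counter

def correlate_errors(log_lines: list[str]) -> Counter:
    """Collate error logs into frequency patterns for troubleshooting."""
    patterns = []
    for line in log_lines:
        if "ERROR" in line:
            # Mask numeric IDs so recurring failures group together
            patterns.append(re.sub(r"\d+", "<n>", line))
    return Counter(patterns)

logs = [
    "ERROR timeout calling service 42",
    "ERROR timeout calling service 77",
    "INFO heartbeat ok",
    "ERROR disk full on node 3",
]
top = correlate_errors(logs).most_common(1)[0]
print(top)  # ('ERROR timeout calling service <n>', 2)
```

Surfacing the most frequent pattern first is what lets an on-call DRI start from the likeliest root cause instead of reading logs line by line.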

The Smart DRI Agent is already in its pilot phase and producing results. It conducts four main activities:

  • AI-generated summaries of DRI actions.
  • Proactive notifications with AI-generated insights.
  • Chat support to assist with all kinds of DRI queries.
  • AI-generated handover reports.

“The continuous improvement framework that enables these pieces is the key to unlocking value,” says Aizaz Mohammad, principal software engineering manager on the Digital Workspace team. “It may seem process-heavy, but once you work through it, you’ll see the value.”

That value is apparent in their results.

In the first 30 days of the Smart DRI Agent’s pilot, there were 301 incidents, and the agent provided insights on 101 of them. That led to an approximate 100 hours of time savings for DRIs and a 40% improvement in our key network performance metric.

Third-party software license audits

Within Microsoft Digital, the Tenant Integration and Management team is responsible for a range of services, including third-party software licensing. This space is all about managing liability from both a security operations and an auditing perspective.

A photo of Hovhannisyan.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need. The goal for this project is to reduce that time to increase operational efficiencies.”

Anahit Hovhannisyan, principal group product manager, Microsoft Digital

Without the proper security insights, the company could expose itself to risks from third-party software vulnerabilities. And without thorough auditing, we might experience license overuse and contractual issues that can lead to waste or expensive license reconciliations.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need,” says Anahit Hovhannisyan, a principal group product manager within Microsoft Digital. “The goal for this project is to reduce that time to increase operational efficiencies.”

A photo of Kathren Korsky

“It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Kathren Korsky, team lead, Software Licensing, Microsoft Digital

The team decided to target the auditing process first. Currently, the software licensing team performs audits manually by looking at entitlements, contracts, purchase orders, and more while liaising with suppliers and our Compliance and Legal teams. That’s incredibly time-consuming.

During the software licensing team’s planning phase, they developed an ambitious goal of reducing the time to insights on third-party software license data from 154 days down to 15 minutes. During their continuous improvement Kaizen event, the team uncovered opportunities for AI-powered process improvements that eliminate waste.

“It required a lot of courage as we were identifying waste,” says Kathren Korsky, Software Licensing team lead within Microsoft Digital. “People are very invested. It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Now, they’re building and implementing solutions, including an AI and data platform that provides business intelligence with custom reporting abilities, an AI agent that provides audit support and ticket creation, and another that automatically generates audit reports. The team has been using Azure Foundry and Azure AI services to create their agents because these tools have the flexibility to switch between different models and fine-tune their parameters.
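That flexibility to switch models and tune parameters can be kept as configuration rather than code. The sketch below illustrates the design principle only; the agent names, model names, and parameters are hypothetical, not the team's actual setup.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AgentConfig:
    """Model choice and tuning knobs kept as data, so swapping
    models is a config change rather than a code change."""
    model: str
    temperature: float = 0.2
    max_output_tokens: int = 1024

# Hypothetical audit-support agent configuration.
audit_agent = AgentConfig(model="model-a")

# Trying a different model for the same agent is a one-line change.
experiment = replace(audit_agent, model="model-b", temperature=0.0)

print(experiment.model, experiment.temperature)
```

Keeping the model choice immutable and data-driven also makes A/B comparisons between models straightforward to log and audit.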

As these agents emerge, they’ll take the most tedious and error-prone aspects of the process out of human auditors’ hands, freeing them up to focus on solving problems, not endlessly searching for them.

Realizing continuous improvement at scale

These are just a small selection of the many continuous improvement initiatives underway within Microsoft Digital and the company as a whole.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals.”

Kirkland Barrett, senior principal PM manager, Microsoft Digital

At Microsoft, most of our continuous improvement initiatives are in their initial stages. As they progress through the measurement and adjustment phases, two benefits will emerge.

First, we’ll iterate and improve the value that each individual initiative provides. Second, we’ll continue to build our discipline and cultural maturity around a growth mindset we’re operationalizing through continuous improvement.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals,” says Kirkland Barrett, senior principal PM manager for Employee Experience in Microsoft Digital. “It’s about knowing our objectives, identifying upstream root causes, and rippling them throughout a mechanism of progress.”

Key takeaways

These tips for implementing a continuous improvement framework come from our own experiences at Microsoft Digital:

  • Be inclusive: Have the right subject matter experts at the table from the start. Sponsors need to be present as well.
  • Cultivate maturity and transparency: Objective analysis about how things are going requires honesty.
  • Sponsorship matters: Make sure you have sponsorship at the highest levels. This is a cultural change, and leadership is the core of culture.
  • No half-measures: If you’re going to identify opportunities for continuous improvement, commit to having budget and resources in place.
  • Process, then technology: Focus on what you need to simplify processes first, then apply AI. This will keep you from automating waste and inefficiency into your operations.

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
20297
Mapping the Microsoft approach to accessibility in the world of AI http://approjects.co.za/?big=insidetrack/blog/mapping-the-microsoft-approach-to-accessibility-in-the-world-of-ai/ Thu, 19 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22756 More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age. As AI transforms how we build and experience technology, accessibility has to be built in from the start. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are […]

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age.

As AI transforms how we build and experience technology, accessibility has to be built in from the start.

Designing with and for people with disabilities isn’t optional—it’s fundamental to building technology that works for everyone and to building trust at scale. And yet today, about 96% of websites are still inaccessible.

At Microsoft, we’re committed to creating accessible products and services—designed with and for the disability community—that benefit everyone.

Our “shift left” approach to software production—which involves moving quality-assurance, testing, and accessibility checks to earlier in the development lifecycle—means that implementing assistive features and tools is a high priority for Microsoft, rather than a late-stage addition.

And with the rise in importance of AI tools and products, paying close attention to accessibility standards and building these key capabilities into game-changing tech like Microsoft 365 Copilot is a crucial part of our mission here in Microsoft Digital, the company’s IT organization.

A photo of Allen.

“After my accident, I became immediately reliant on accessible technology. Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me.”

Laurie Allen, accessibility technology evangelist, Microsoft

Evangelizing for accessibility

Laurie Allen is one person who knows first-hand the importance of accessibility in enterprise software. A little more than a decade ago, she experienced a spinal cord injury and became a quadriplegic.

Today, Allen works as an accessibility technology evangelist at Microsoft. Every day, she relies on assistive digital technologies to help her be successful in her role—which involves ensuring that our software products are accessible to everyone.

“After my accident, I became immediately reliant on accessible technology,” Allen says. “Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me during that transitionary phase, because my job was the one thing about my life that didn’t dramatically change as a result of the accident.”

The following graphic shows how widespread disability is around the globe: 

Shifting left for inclusivity

At Microsoft, our accessibility strategy includes such disability categories as mobility, vision, hearing, cognition, and learning—because accessibility empowers everyone.

A photo of Garg.

“We view accessibility as a quality of our software, not simply a feature. Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Ankur Garg, accessibility program manager, Microsoft Digital

We begin with the concept of “shift left,” which in this context means incorporating accessibility principles from the project’s outset, instead of waiting until a product is already built.

This strategy mirrors our approach in other key trust domains, such as security and privacy.

“We view accessibility as a quality of our software, not simply a feature,” says Ankur Garg, an accessibility program manager in Microsoft Digital. “Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Here in Microsoft Digital, that manifests as treating accessibility as a core requirement validated through rigorous internal testing of AI agents and embedding standards and inclusive design early in every tool’s development life cycle. We also use internal AI tools to streamline guidance and testing before expanding those practices across the company.  

Accessibility challenges in the age of AI

Technology is moving fast, especially with the advent of AI-powered tools. It’s easier than ever for companies and individuals to quickly generate and publish an app, website, or other digital product.

That means it’s also easier than ever before to create inaccessible software. It’s important to remember that much of the data that generative AI models have been trained on includes websites and apps that were built without considering accessibility guidelines.

A photo of Hirt.

“We want people with disabilities to be represented and see themselves in the technology we’re producing. We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

Alli Hirt, director of accessibility engineering, Microsoft

As a result, we’ve found that many AI code-generation tools and models produce code that by default fails to meet Microsoft’s high standards for accessibility.
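One way to catch the most basic of these default gaps is to lint generated markup automatically before it ships. The sketch below is an illustration using only Python's standard library, not a Microsoft tool: it flags `img` tags that lack alt text, one of the most common accessibility misses in generated HTML.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<no src>"))

checker = MissingAltChecker()
checker.feed('<p><img src="chart.png"><img src="logo.png" alt="Logo"></p>')
print(checker.missing)  # the first image lacks alt text
```

Checks like this are deliberately shallow; full conformance to standards such as WCAG still requires dedicated tooling and testing with assistive technologies.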

“We want people with disabilities to be represented and see themselves in the technology we’re producing,” says Alli Hirt, a director of accessibility engineering at Microsoft. “We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

When we’re developing AI-driven products like Microsoft 365 Copilot, the tool must have comprehensive knowledge of different disabilities and be able to give appropriate, contextual help.

“Let’s say I tell Copilot, ‘I have a mobility disability; what software tools can I use?’” Allen says. “Copilot must recognize what a mobility disability is and identify which tools will support me. That’s the data representation we need in our AI models.”

Allen noted that sensitivity and bias are also big factors when creating these kinds of tools.

“Copilot should not respond with, ‘I’m sorry you have a disability,’” she says. “That’s the type of bias we’re working to train out of the models.”

Accessibility as a core commitment

When Satya Nadella became Microsoft CEO in 2014, he redirected the core mission of the company. The new vision was simple: To empower every person and every organization on the planet to achieve more. And accessibility is a core part of that mission.

“At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Laurie Allen, accessibility technology evangelist, Microsoft

Meeting global accessibility standards is our starting point. For example, the Accessibility team’s hub-and-spoke model helps ensure that accessibility is everyone’s responsibility.

The Microsoft Corporate, External, and Legal Affairs (CELA) group oversees accessibility across the company, helping products align with internationally recognized accessibility standards, such as Web Content Accessibility Guidelines (WCAG) and EN 301 549. These standards ensure that digital content, websites, and apps produced today are designed with accessibility in mind.

Understanding how products and services align to key accessibility standards and requirements is an important step in providing inclusive and accessible experiences.

“An organization’s accessibility program succeeds when it’s a priority at every level of the organization, starting with senior leadership,” Allen says. “At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Presenting content in a multimodal way

Here in Microsoft Digital, we embrace software products that provide our employees with a multimodal approach in presenting content. This means using more than one sense at the same time, like seeing, listening, reading, and speaking. This makes our products accessible to a diverse array of users, including people who learn and work in different ways. It lets our employees customize the way that works best for them.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed that I could never follow—showed me exactly why accessibility is needed. It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Eman Shaheen, principal PM lead, Microsoft Digital

For example, someone may not have a diagnosed disability, but they might be a better auditory learner than a visual learner.

This reflects what Eman Shaheen, a principal PM lead in Microsoft Digital, learned from a team member when observing how he used assistive technologies.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed I couldn’t even follow—showed exactly why accessibility is needed,” Shaheen says. “It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Here are some examples of multimodal accessibility capabilities offered by Microsoft 365 Copilot that are designed to support diverse user requirements:

Vision

  • Works with screen readers
  • Generates alt text for images
  • Suggests accessible layouts, textual contrast, and consistent structure in documents and slides

Hearing

  • Provides real-time meeting Q&A
  • Produces meeting recaps across multiple languages
  • Summarizes lengthy or fast-moving chats to aid comprehension

Cognitive and neurodivergent (ADHD, dyslexia, autism, executive function)

  • Simplifies complex language
  • Supplies task breakdowns and next-steps guidance
  • Offers tone assistance to help with understanding communication nuances

Mobility

  • Provides voice-driven productivity tools, such as speech-to-text creation
  • Reduces fine‑motor effort by automating lists, tables, and drafts
  • Supports meeting recordings to help compile notes and action items

Speech and communication

  • Drafts and rewrites content for users needing expressive support
  • Refines tone for clarity and empathy in written communication

Learning

  • Summarizes long content to reduce reading burden
  • Organizes notes into structured content

Mental health and fatigue

  • Assists with communication when cognitive energy is low
  • Provides adaptive communication assistance to help users express themselves confidently

How we demonstrate our accessibility vision

Here at Microsoft, we developed a strategic partnership with ServiceNow over the last five years. The two companies work together to accelerate digital transformation for our enterprise and government customers.

Through this partnership, we use the ServiceNow platform for internal helpdesk and ServiceDesk process automation, IT asset management, and integrated risk management.

A photo of Mazhar.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt. That’s when they began fixing accessibility issues proactively, which changed everything.”

Sherif Mazhar, principal product manager, Microsoft Digital

As part of this process, we uncovered 1,800 accessibility bugs (including 1,200 that were rated as high severity) in the platform—in our first assessment. By contrast, our most recent review found just 24 accessibility-related issues.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt,” says Sherif Mazhar, a principal product manager in Microsoft Digital, who oversees the company’s relationship with ServiceNow. “That’s when they began fixing accessibility issues proactively, which changed everything.”

The next major step for us is ensuring our ServiceNow platform updates align to the WCAG 2.2 accessibility standard, which will require reworking older versions of our products. However, doing this work helps us maintain momentum toward a world of more inclusive enterprise software in all lines of business and for all Microsoft customers.

What’s next in accessibility

Digital accessibility work is never done.

As new software and hardware are introduced, user needs and accessibility standards change and grow. At Microsoft, we are committed to making accessibility easier for everyone.

“Right now, we’re making sure every AI agent across Microsoft is tested with assistive technologies—like screen readers and keyboard navigation—to guarantee that the outputs are accessible and compliant,” Garg says.

This “shift left” mentality at Microsoft is ultimately about putting people first. It means that no one should have to wait for a late fix to be able to do their work, or simply to belong.

By embedding accessibility standards into product planning, instead of tacking it on as an afterthought just before (or even after) product launch, we’re helping ensure that these digital experiences will include everyone from day one.

“We may compete on products, especially in AI, but accessibility is a shared mission,” Allen says. “When the industry collaborates on inclusive technology, everyone wins.”

Key takeaways

Here are some tips to keep in mind as you consider your own accessibility strategy in a world of increasingly AI-driven technology:

  • Start with leadership. Championing accessibility from the C-suite signals that this is a top organizational priority.
  • Raise awareness with training. Set up employee learning opportunities regarding accessibility in AI tools and encourage everyone to take part.
  • Design with inclusivity in mind from day one (“shift left”). Incorporate accessibility from the beginning of the software creation process to make sure it isn’t lost in the shuffle of trying to ship a product on time.
  • Think inclusively. Run usability tests with people with lived experience.
  • Treat accessibility as an ongoing practice. Digital accessibility work is never finished; document strategies and share your team’s learnings to keep improving iteratively as an organization.

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
22756
Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success http://approjects.co.za/?big=insidetrack/blog/deploying-the-employee-self-service-agent-our-blueprint-for-enterprise-scale-success/ Thu, 12 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22492 The case for AI in employee assistance The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company. Thanks […]

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.

]]>

The case for AI in employee assistance

The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company.

Thanks to the power of AI, agents, and Microsoft 365 Copilot, our employees—and workers everywhere—are discovering new ways to be more productive at their jobs every day. Recent research, including our Microsoft Work Trend Index, shows that knowledge workers are increasingly seeing big gains from using AI tools for work tasks.

As an AI-first Frontier Firm, Microsoft is at the leading edge of a transformation that’s bringing this technology into all aspects of our workplace operations. With tools like Microsoft 365 Copilot providing “intelligence on tap,” we’re forging a human-led, AI-operated work culture that enables our employees to accomplish more than ever before.

Bringing AI to employee assistance

As part of this move to embed AI across our enterprise, it was a natural step for us to apply this burgeoning technology to a common pain point for us and many workplaces today—employee assistance.

Workers in organizations large and small face many common issues in their day-to-day jobs. Whether it’s a problem with their device, a question about their benefits, or a facilities request, our typical employee was often forced to navigate a bewildering array of tools, apps, and systems in order to get help with each specific task.

This confusion is reflected in research showing that most workers are dissatisfied with existing employee-service solutions.

76% of employees find it difficult to quickly access company resources.
58% of employees struggle to locate regularly needed tools and services.

Our studies show that most employees have trouble finding the appropriate tools and resources they need to address their workplace-related questions.

Realizing that this was an ideal opportunity for AI, we set out to develop a state-of-the-art agentic solution. At Microsoft Digital, the company’s IT organization, we partnered with our product groups to develop and deploy the Employee Self-Service Agent, a “single pane of glass” that employees can turn to any time they need help. The product is now broadly available in general release.

A photo of D’Hers.

“With this employee self-service solution, we’re shaping a new era in worker support. With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Nathalie D’Hers, corporate vice president, Microsoft Employee Experience

Because Copilot is our “UI for AI,” the Employee Self-Service Agent is delivered as an agent in Microsoft 365 Copilot. If your employees have access to Copilot, you can deploy the agent at your company at no extra cost. If your employees don’t have a Copilot license, they can access it via Copilot Chat if it’s enabled by your IT administrator.

For the initial development and launch of our Employee Self-Service Agent, we decided to provide agentic help in three categories: Human resources, IT support, and campus services (real estate and facilities). Every organization will have to make its own determination for which functions to include in their implementation. Note that the agent is inherently flexible and expandable; we plan to add additional capabilities, such as finance and legal, in the future.
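At a conceptual level, scoping the agent to a few categories amounts to routing each query to the right domain before answering it. The toy sketch below illustrates that routing idea; the keyword matching here stands in for the agent's actual intent classification, and all category names and keywords are hypothetical.

```python
# Hypothetical keyword map; a production agent would use intent
# classification, not bare keyword matching.
CATEGORY_KEYWORDS = {
    "hr": {"benefits", "leave", "payroll"},
    "it_support": {"laptop", "password", "vpn"},
    "campus_services": {"badge", "parking", "conference"},
}

def route_query(query: str) -> str:
    """Route a query to the first domain whose keywords it mentions."""
    words = set(query.lower().split())
    for category, keywords in CATEGORY_KEYWORDS.items():
        if words & keywords:
            return category
    return "escalate_to_human"  # fall back when no domain matches

print(route_query("How do I reset my vpn account"))
```

The flexibility noted above maps naturally onto this shape: adding finance or legal support later is a matter of adding another entry to the routing table rather than restructuring the agent.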

We learned many lessons in the almost year-long process of developing and implementing the Employee Self-Service Agent across our organization worldwide. The goal of this guide is to pass on what we learned—including how we used it to provide value to our employees and vendors—to help you prepare for, implement, and drive adoption of your own version of the agent.  

“With this employee self-service solution, we’re shaping a new era in worker support,” says Nathalie D’Hers, corporate vice president of Microsoft Employee Experience. “With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Before you start: Developing your plan

As you embark on your Employee Self-Service Agent journey, make sure to establish a clear and structured plan. This was a critical step for us in our deployment, and we can say with confidence that it will help you avoid surprises and increase your chances of a successful outcome.

Based on our experience here at Microsoft, the below is a high-level outline of the steps you should consider as you prepare for deploying your agent.

1. Define prerequisites
Start by making sure that all foundational elements for the agent are in place.

  • Assign licenses to your employees who will interact with the agent. They will need Microsoft 365 Copilot or Copilot Chat.
  • Verify readiness by configuring your Power Platform environments, applying Data Loss Prevention (DLP) policies, and setting up isolation (limited and controlled deployment with guardrails in place) where needed.
  • Ensure connectivity with critical systems by confirming that you have appropriate APIs and connectors available and functioning for the essential workplace systems that your organization uses (e.g., Workday, SAP SuccessFactors, and ServiceNow).

2. Identify your core team and responsibilities
Successful implementation of the Employee Self-Service Agent requires collaboration across multiple roles and departments in your organization.

  • Business owners from the areas your agent will cover—such as human resources and IT support—can help you define requirements, priorities, success criteria, and telemetry needs.
  • Platform administrators, particularly for Power Platform and tenant/identity teams, can manage your technical configuration.
  • Content owners and editors are needed to identify the knowledge sources to surface in the agent, curate new knowledge sources, and maintain the data underpinning these sources on an ongoing basis.
  • Subject matter experts can provide important “golden” prompt and user scenarios that the agent should prioritize and answer accurately.
  • Compliance, privacy, and security leaders and their teams are needed to address risk considerations.
  • Support professionals can help build a structure for live agent escalation and ticketing operations (in situations where the agent is unable to provide a solution).
  • Focus groups of end users assist with validating requirements and scenarios, as well as help with testing the agent.

3. Establish a clear timeline
We found that creating a schedule for the creation, implementation, and adoption of the agent is crucial. This phased approach will help you maintain momentum and accountability over the duration of the project.

For example, here’s a rough implementation timeline that you might use to gauge your progress:

Gantt chart showing 15-week timeline with assessment, deployment, pilot launch, and rollout phases.

4. Articulate your vision

Communicate your rollout plan to your team, including timelines and phases, and adjust it based on feedback. Establish clear goals and meaningful success metrics to guide you and make sure your efforts are in alignment with your company objectives. (Note: You may want to consider key upcoming projects or events in your organization and link the agent roadmap to them. This will help you meet your project’s success criteria faster and encourage quicker agent adoption.)

5. Define your governance

This phase will allow you to define policies and standards and conduct a thorough content audit to ensure accuracy, relevance, security, and sustainability.

6. Implement your agent

This phase involves configuration and integration, followed by testing.

7. Roll out the agent while driving adoption and measurement

We advise deploying the Employee Self-Service Agent using a phased, or ringed, approach. We started with a small group of employees, then gradually rolled it out to larger and larger groups before finally releasing it to our entire organization.
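A ringed rollout can be made deterministic by hashing each user into a stable bucket, so ring membership never changes between runs and widening a ring never removes anyone who already has the feature. This is a minimal sketch of the technique; the ring percentages are illustrative, not the ones we used.

```python
import hashlib

# Illustrative ring sizes: the percentage of users enabled per phase.
RING_PERCENTS = {"ring0": 1, "ring1": 10, "ring2": 50, "ring3": 100}

def user_bucket(user_id: str) -> int:
    """Map a user to a stable bucket in [0, 100) via a hash."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, ring: str) -> bool:
    """A user is enabled once the active ring covers their bucket."""
    return user_bucket(user_id) < RING_PERCENTS[ring]

# Because buckets are stable and ring sizes only grow, enabling a
# wider ring is strictly additive for users.
print(is_enabled("alice@example.com", "ring3"))
```

Pairing a scheme like this with the targeted communications described below keeps each ring's audience well defined for both telemetry and change management.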

We encouraged adoption with internal targeted communications and promotional efforts. Careful measurement enabled us to track impact and optimize agent performance. This type of concerted change management allowed us to share the latest product developments with our employees and to keep them excited and engaged with the tool.

By investing sufficient time and effort in the planning phase of your deployment, you’ll create a strong foundation for a secure, scalable, and successful self-service agent experience.

Chapter 1: Governance means getting your data right

When a Microsoft employee enters a query into an AI chat tool like Microsoft 365 Copilot, they know that they may not receive an individualized response that is directly specific to their situation. They are aware that they might need to verify the answer they receive with further research and additional sources.

But when it comes to our company-endorsed self-service agent, the stakes are different. Our employees expect to receive accurate and personally relevant responses when they ask for help. This is particularly true for queries related to important personal details, like HR-related questions about leave policies or benefits.

A photo of Ajmera.

“People expect personally tailored and highly accurate answers, especially for HR moments that really matter. We designed the Employee Self‑Service Agent with that expectation in mind, pairing trusted data and deep personalization with strong governance controls so that privacy, security, and trust are built into every interaction.”

Prerna Ajmera, general manager of HR strategy and innovation

Although the Employee Self-Service Agent comes pretrained with basic HR and IT support data, we found that the quality of the responses that our employees receive is directly connected to the accuracy, currency, and depth of the information we provide to the tool. You’ll want to spend the necessary time and effort to make sure that your data governance process is well thought-out and thorough, so that your employees experience the best possible results.

“Employee self‑service has a higher bar than generic AI tools,” says Prerna Ajmera, general manager of HR strategy and innovation. “People expect personally tailored and highly accurate answers, especially for HR moments that really matter. We designed the Employee Self‑Service Agent with that expectation in mind, pairing trusted data and deep personalization with strong governance controls so that privacy, security, and trust are built into every interaction.”

Major considerations for governance

We learned that before you configure your agent, you need to establish guardrails that protect your data’s integrity and that build your employees’ trust. These considerations will form the backbone of your governance framework:

  • Managing requirements: Define what the agent must deliver and align your stakeholders on clear, prioritized goals and objectives.
  • Determining and managing resources: Ensure you have the right people, systems, and funding in place to support your full product lifecycle.
  • Data security: Protect your sensitive employee information with strong controls, compliant storage, and least‑privilege access.
  • User access: Establish who can use, administer, and update your agent, with appropriate permissions and guardrails.
  • Change tracking: Monitor your updates to content, configurations, and workflows so your agent always reflects your current policies.
  • Reviewing: Regularly evaluate your content’s accuracy, the agent’s performance, and your organizational fitness to help you keep your employees’ experience with the agent trustworthy.
  • Auditing: Maintain traceability for compliance, incident investigation, and quality assurance across all of your data flows.
  • Deployment control: Manage where, when, and how you roll out new versions of the agent to reduce disruption and ensure consistency.
  • Rollback: Prepare a fast, safe path to reverting your changes if something breaks.

We found that addressing these considerations early in the process creates a governance structure that is proactive rather than reactive, increasing the quality of responses and setting your organization up for success.

Architecture essentials

Understanding the architecture of our agent helped our governance teams make informed decisions about our configuration and integration. To do that, they needed to review and understand its key architectural components. You’ll need to do the same.

Here’s a list of the different architecture components that our team assessed, to help you get started on your own process:   

  • Topics: Structured intents (e.g., “view paystub”) that align to employee questions and drive consistent answers.
  • Domain packages: Pre-curated bundles for different agent segments (like HR and IT support) that provide reusable patterns, prompts, and integrations.
  • Knowledge sources: Documents, intranet pages, FAQs, and databases that ground responses in authoritative content.
  • Connectors: Secure integrations to systems of record (like Workday or SAP SuccessFactors) that enable read/write operations. (Because the Employee Self-Service Agent was built with Copilot Studio, it has access to more than 1,400 different connectors.)
  • Instructions: Governance-approved rules and prompts that shape tone, guardrails, and escalation behavior.

Assessing and preparing your content

A key early governance step is to audit all relevant content in your knowledge bases. This process should include assessing, updating, and, if necessary, restructuring this information before it is ingested by the agent.

An important caveat here is that the agent’s ability to understand which policies and procedures apply to which employee relies on your content having consistent metadata, permissions, and structure. We found that before feeding your data into the agent, you need to:

  • Inventory existing content: Your content will span many different types, such as SharePoint pages, Microsoft Teams posts, PDFs, intranet articles, and knowledge-base documents. The goal of the inventory process is to identify which content is current and complete and which is outdated, duplicative, or siloed; any issues with the content should be addressed before it is loaded into the agent.
  • Assign knowledge owners: The owners should be SMEs who can help validate, tag, and maintain the content going forward. Part of this process is training up knowledge owners to be able to prepare and maintain content in ways that make it easily consumable by both agents and people.
  • Structure content for discoverability: All your content needs to have accurate metadata, well-defined topic pages, and consistent naming so that the agent can surface the right information at the right time.

We found that completing a thorough content audit helps us ensure that the Employee Self-Service Agent isn’t just chatting—it’s delivering trusted, up-to-date answers that save your workers time and effort as they go about their day.

Be aware of tone and conversational flow

Providing vetted and well-structured data to the agent is important, but it’s not the entire battle. You’ll also need to make sure your agent is given clear guidance on conversational tone and instructions on what to do in specific scenarios.

Make sure you incorporate:

  • Global instructions: Define the agent’s voice, behavior, and escalation rules to ensure consistency and trust. 
  • Topic-level triggers: Map natural language phrases to specific workflows (such as “reset password” or “check PTO”) so the agent routes these common queries correctly.
  • Advanced knowledge rules: Prioritize which data sources to use in ambiguous scenarios, and define when the agent should ask clarifying questions.

Taking these steps gave our agent a better chance of being accurate, helpful, and aligned with our organization’s specific preferences.
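
Topic-level triggers of the kind described above amount to a mapping from phrase patterns to workflows, with a fallback that asks a clarifying question. Here’s a minimal sketch; the trigger table and workflow names are hypothetical, not the agent’s actual configuration.

```python
import re

# Illustrative trigger table: phrase patterns mapped to workflow names.
TRIGGERS = [
    (re.compile(r"\breset\b.*\bpassword\b", re.IGNORECASE), "it.password_reset"),
    (re.compile(r"\b(pto|vacation|time off)\b", re.IGNORECASE), "hr.check_pto"),
]

# When no topic matches, the agent should ask a clarifying question.
FALLBACK = "general.clarify"

def route(utterance: str) -> str:
    """Map a natural-language query to a topic workflow per the trigger rules."""
    for pattern, workflow in TRIGGERS:
        if pattern.search(utterance):
            return workflow
    return FALLBACK
```

In a real deployment, this routing is handled declaratively inside the agent platform; the sketch just makes the underlying idea concrete.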

Addressing common scenarios with “golden” content

Another vital aspect of your content audit is identifying the most frequently accessed information in each topic area.

A good example comes from the preparation of our IT support content for ingestion by the Employee Self-Service Agent. One of the focuses of this effort was on so-called “golden prompts”: the 20 or so topics that generate up to 80 percent of our employee queries (a version of the famous “80/20 rule”).

Our golden prompts are a curated set of scenarios that:

  • Represent our critical user workflows and edge cases
  • Possess clear, expected responses (golden responses)
  • Cover core functionality that must never break

We made sure that the agent was providing high-quality responses for these common scenarios—we recommend you do the same.
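
One lightweight way to verify golden-prompt quality is a regression harness that replays each prompt and checks the answer against its expected (golden) response. The sketch below assumes a callable `agent` and uses naive required-phrase matching as a stand-in for whatever response scoring your team adopts.

```python
def contains_required(answer: str, required_phrases: list[str]) -> bool:
    """Check that an answer mentions every phrase its golden response requires."""
    answer_lower = answer.lower()
    return all(phrase.lower() in answer_lower for phrase in required_phrases)

def run_golden_suite(agent, suite: list[dict]) -> list[str]:
    """Return the golden prompts whose answers miss required phrases."""
    failures = []
    for case in suite:
        answer = agent(case["prompt"])
        if not contains_required(answer, case["must_include"]):
            failures.append(case["prompt"])
    return failures
```

Running a suite like this on every content or configuration change helps ensure that the core scenarios that must never break stay covered.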

Including “zero prompt” content

Another important aspect of your content process should be to develop “zero prompts.” These are preconfigured prompts in the agent that the user can simply click on to get an answer for a common issue or request.

For example, if one of your employees wants to understand how to set up a VPN, they simply click on the zero prompt provided for that topic. The tool then gives them complete instructions on how to set one up.
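
Conceptually, zero prompts are a catalog of clickable labels that map straight to curated content, so the user types nothing at all. A minimal sketch, with hypothetical labels and document IDs:

```python
# Hypothetical zero-prompt catalog: each clickable label resolves to one
# curated knowledge-base document.
ZERO_PROMPTS = {
    "Set up a VPN": "kb/it/vpn-setup",
    "Return-to-office policy": "kb/hr/return-to-office",
}

def zero_prompt_answer(label: str, knowledge_base: dict[str, str]) -> str:
    """Resolve a clicked zero prompt to its curated content, if present."""
    doc_id = ZERO_PROMPTS.get(label)
    if doc_id is None:
        return "Prompt not configured."
    # Missing content is a governance signal: the catalog points at a
    # document that was moved or retired without an update.
    return knowledge_base.get(doc_id, "Content missing - flag for review.")
```

The value of the pattern is that every zero-prompt answer is pre-vetted, so the highest-traffic questions always get your best content.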

During our deployment of the agent, one high-demand scenario where we prepopulated the tool with content arose when Microsoft made a major announcement regarding employees returning to the office. We knew this policy change would generate a lot of questions from our employees.

In preparation for this, we asked Microsoft 365 Copilot to create a single document that pulled in all the “return to office” material found in its verified HR content database. We then made this document available to the agent. Just by taking that simple step, we saw our user satisfaction ratings in the tool jump from 85 percent to 98 percent for that issue!

In your own deployment, think about what issues and topics generate the most questions from your employees. You can then prepare specific content to address these scenarios, which will increase your chances of success with the agent.

Data security and compliance

Data security was a high priority when we developed our agent, especially because it must necessarily access sensitive HR information on a regular basis. During product development, we made sure that the agent adhered to enterprise-grade security standards, including identity federation, least-privilege access, and encrypted storage.

Because the agent is built on Copilot Studio, it supports robust data-loss prevention features. The agent also complies with regulatory frameworks like the General Data Protection Regulation (GDPR) through built-in auditing and data-retention policies.

One of the big advantages that an AI agent has over a static website or similar data source is the ability to personalize responses for each user. At the same time, we had to make sure that the agent had guardrails in place to avoid overexposing sensitive information. This included detailed disclaimers to help call out these kinds of responses and flag them for more careful handling.

Our agent complies fully with our accessibility standards as well. Like all Microsoft products and services, the tool underwent a rigorous review to ensure it was fully accessible for all users.

Responsible AI

Whenever a new AI application is launched, there may be concerns raised about potential challenges regarding bias, safety, and transparency. That’s why the Employee Self-Service Agent follows the Microsoft Responsible AI principles by default.

When you enable the sensitivity topic in your agent, it screens all responses for harassment, abuse, discrimination, unethical behavior, and other sensitive areas. We tested the agent thoroughly for objectionable responses before it was launched to a broad internal audience at Microsoft.

In addition, the agent includes an emotional intelligence (EQ) option. This feature is designed to make responses more empathetic, context-aware, and relevant for diverse user audiences. It analyzes the conversation’s context and tailors the agent’s replies to ensure that users feel understood and valued throughout their session (which could be particularly relevant for any conversations related to sensitive HR topics, such as family leave). The EQ option is customizable and can be turned off by your product admins.

Key takeaways

The following are important considerations for data governance when you deploy your Employee Self-Service Agent:

  • Employee expectations regarding accuracy and relevance are high for employee self-service tools, which makes data governance a key aspect of your deployment.
  • Consider which data repositories are best to incorporate into your agent, and make sure they are up-to-date and well-structured. This process requires a thorough content audit.
  • Pay special attention to the so-called “golden prompts” that make up a large percentage of expected queries. The agent’s answers to these questions should be top-notch.
  • Restructuring content can improve response quality. When we anticipated huge interest in a particular topic, such as workplace policy changes, we restructured our content on that subject and saw a significant jump in user satisfaction.
  • Build your agent to meet or exceed high standards for data security, privacy, and Responsible AI. These are vital concerns for any product that has access to sensitive personal information.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 2: Implementation with intention

Deploying a powerful and versatile tool like the Employee Self-Service Agent is no simple task. It requires guidance and buy-in from top leaders at the company, as well as detailed planning and execution across disparate parts of your organization. Here, we identify some of the key steps that we took here at Microsoft that can help guide you when launching your own self-service agent.

Determine category parameters

One of the first major decisions around implementing the agent is deciding which business function—we call them agent starters—to choose for your initial implementation.

We recommend starting with HR support or IT help (we started with HR). Both agent starters can be deployed into a single Employee Self-Service Agent experience, but they must be deployed one at a time. 

Note that we’ve built the Employee Self-Service Agent to connect with other first- or third-party Copilot agents, enabling a seamless handoff to those agents without your employees having to navigate to other tools or interfaces.

Understanding your deployment steps

There were four essential stages involved in the deployment of our agent, each with multiple steps. Here’s a quick rundown that you can use at your company:

  1. Preparation for deployment
    • Establish roles: Define who will manage, configure, and support the tool, assigning responsibilities to ensure accountability during deployment.
    • Set up your environment: Prepare the necessary hardware, operating system, and network configurations so the agent can run smoothly.
    • Set up third-party system integration: Ensure your infrastructure can securely connect and exchange data with external systems that the agent will need to integrate with.
  2. Installation
    • Install the agent: Deploy the core Employee Self-Service Agent software on the designated servers or endpoints.
    • Install accelerator packages: Add any desired connectors that enable the agent to communicate with commonly used systems for HR, payroll, IT support, etc.
  3. Customization
    • Configure the core agent: Adjust default settings to align with your organization’s policies and workflows.
    • Identify knowledge sources: Specify where the agent will pull information from, such as internal knowledge bases or FAQs.
    • Provide common questions and responses: Add employee FAQs to improve the agent’s ability to respond quickly and accurately.
    • Identify sensitive queries: Flag questions and responses that involve confidential or regulated information to ensure they’ll be handled securely.
  4. Publication
    • Approve the agent: Complete internal reviews and compliance checks to confirm the agent meets your organizational standards before full rollout.
    • Publish the agent: Make the configured agent available to your employees in your production environment.

Customization

The Employee Self-Service Agent operates as a custom agent within Copilot Studio, using our AI infrastructure via the Power Platform. The agent is constructed on a modular architecture that allows you to integrate it with your own enterprise data sources using APIs, prebuilt and custom connectors, and secure authentication mechanisms.

To streamline this integration process, we provide a library of prebuilt and custom connectors through both Copilot Studio and Power Platform. Preconfigured scenarios include connecting to major enterprise service providers such as Workday, SAP SuccessFactors, and ServiceNow. (View the full list of connectors offered by Copilot Studio.)

These connectors facilitate data exchange with the following systems and other agents in this ecosystem:

  • HR information systems
  • IT systems management
  • Identity management
  • Knowledge base platforms

We found that third-party integrations require setup effort and technical expertise across stakeholders in your tenant. Be sure to get buy-in and involve all relevant departments that will be impacted.

Rollout: A phased approach

As previously noted, we started our agent with HR content and then added IT support (we later expanded to include campus services help as well). We rolled the agent out to different groups of employees and geographic regions around the world over the course of months, adding new knowledge sources to the different categories at each step along the way. This gave us an opportunity to gather user data and refine performance of the tool as we went.

We executed a phased rollout of the Employee Self-Service Agent across different regions and countries at Microsoft. As we expanded the audience for the tool, we also added more categories, knowledge sources, and capabilities.

Adding campus support services required us to handle queries and tasks related to dining, transportation, facilities, and similar subjects. This was a challenging addition, because the facilities and real estate space—unlike the HR and IT support areas—doesn’t have many large service providers, for which prebuilt connectors are easier to build.

One area that did lend itself to prebuilt connectors, however, was facilities ticketing.

Because many of our campus facilities vendors use Microsoft Dynamics 365, we were able to create an out-of-the-box connector in the agent for their ticketing process. You can take advantage of these kinds of preconfigured tools in your deployment.  

Key takeaways

Here are some things to remember when implementing the Employee Self-Service Agent at your organization:

  • Decide which starter agent you will deploy first. We recommend starting with a single agent covering one area (vertical), such as HR or IT support, and then expanding from there.
  • Consider a phased rollout to allow time to refine responses and ramp up the number of topic areas and knowledge sources installed in your agent.
  • Use the prebuilt connectors to make it easier to integrate the agent with your existing systems. We developed customized connectors for major HR and IT service providers and a Microsoft Dynamics 365 connector to integrate with our many facilities vendors around the world.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 3: Driving adoption by breaking old habits

Once upon a time, when our employees needed help with a technical issue or an HR question, they literally picked up the phone and called the relevant internal phone number. That quickly evolved into an email-centered system, where employee questions were sent to a centralized inbox that would then generate a service request. Still later, chat-based help was introduced.

Using AI to handle employee questions and service requests is a natural step in this evolution, as large language models were built to parse vast data repositories and return the right information (often with the help of multi-turn queries and responses). And by encouraging self-service, an AI agent can help meet employee needs faster while freeing the organization’s staffing resources for other needs.

But getting employees to change their habits and use a tool like the Employee Self-Service Agent wasn’t going to be as easy as just flipping a switch. Here’s how we handled this important change management task at Microsoft.

Adoption across verticals

A key principle that we learned during the adoption process was that 80% of our change management activities for the agent are applicable to all our verticals (whether it be HR, IT support, campus facilities, or another category). We didn’t need to reinvent the wheel each time we added to the topics that the agent covered.

This allowed us to create a change management “playbook” that we could use each time we expanded to a new category. So, while roughly 20% of the strategies we used were specific to that vertical, the vast majority were the same, which saved time as we moved through onboarding the different categories.

Leadership is key

To get our employees to change the way they ask for help, we found it essential to get the support of our key leaders, something we refer to as “sponsorship.”

We found that good sponsorship doesn’t just come from your central product, communications, or marketing groups. It is equally vital to invest in relationships with local leadership in different regions as you roll out the agent (especially in multinational companies like ours).

Local leaders understand the various regional intricacies—including language, functionality, and the rhythm of the business—that can help inspire their segments of the workforce to adopt a new tool, and then evangelize it to others in turn. Working closely with these kinds of sponsors will help you pull off a successful adoption campaign.

If you have works councils, be sure to seek out your representatives and solicit their feedback on your agent experience early on. You can help them understand how the agent was developed and trained, then address any concerns they raise.

We’ve found that once our works councils are made aware of the careful processes we go through to protect user privacy, and to ensure compliance with our Responsible AI standards, they become enthusiastic supporters and can help promote agent adoption. (Read more about our experience with our works councils and the Microsoft 365 Copilot rollout.)

Defining your messaging

Work with your internal communications team to come up with a well-planned messaging framework for your agent rollout. Based on our experience, it’s likely you’ll need to communicate across a wide variety of teams and organizations like HR, IT, facilities, finance, and so on.

It’s important to be clear about how you’re positioning the product for your employees. This will allow you to develop both overall messaging for general use, but also content tailored to specific teams or employee roles. The more sophisticated your messaging, the more likely it is to be effective in encouraging user adoption of the agent in their regular workflow.

Listening to feedback

As Customer Zero for the company, our employees are our best testers and sources of feedback during our product development process. The Employee Self-Service Agent was no different, and we continue to gather crucial feedback and user data throughout the internal adoption process.

Because the agent is a tool centered on helping your workers resolve challenges and get quick answers to questions, you’ll want to set up your own systems for capturing their feedback and make sure the agent is meeting a high-quality bar.

We found that setting yourself up for success when it comes to listening to your employees involves two major aspects: Developing and deploying a system for gathering employee sentiment about the product, and then creating a system for analyzing that feedback and funneling the findings back to your IT team.

Some of the types of feedback and methods we used to gather it during the development process included:

  • User-testing data
  • User satisfaction ratings
  • User surveys, interviews, and other research
  • Voice of the customer (in-product feedback)
  • Pilot projects and focus groups (smaller segments of users)
  • IT support incidents
  • Usage data and telemetry
  • Community-based early adopter feedback (similar to our Copilot Champs community)
  • Social media feedback and comments

You can choose from among these options to set up your own feedback mechanisms, or come up with something customized to your implementation.

Calibrating your usage goals

Remember that the Employee Self-Service Agent is not an all-purpose AI tool like Microsoft 365 Copilot, which your employees might use a dozen times a day. Instead, they may only need assistance from HR or IT support tools and information sources a few times a week (or even less). Your usage targets should be calibrated accordingly.

At the same time, the more categories of assistance you add to the agent, the more your usage levels can grow—along with user expectations.

When we decided to add campus support (dining, transportation, and facilities-related needs and queries), one of the motivators was to provide information that users might need on a more regular basis. This addition helped us increase adoption and build daily usage habits for the agent among our employees.

Making the agent your front door for employee assistance

Your employees may have longstanding habits around the ways that they seek assistance, such as moving quickly to email a service request, or immediately engaging a live support technician. There might even be someone helpful in the office next to them that they lean on for IT support. We’re aware that breaking such habits can be a challenge.

That’s why we decided to change our own employee-assistance workflows. In the case of HR, we are planning to remove the option to email a centralized alias for help, which was the default in the past. This forcing function will instead prompt our employees to turn to the agent first for assistance, creating a “front door” for all our HR service requests.

For our IT support function, we are switching from a Virtual Agent chatbot to the Employee Self-Service Agent, which should provide users with a richer experience and a higher rate of resolution.

Of course, our main goal is for the agent to handle an employee’s issue without having to seek further assistance. But what happens when the agent cannot resolve their problem or handle their request? That’s why we’ve also implemented a “smooth handoff”—either to create a service request or connect the user to a live agent for specialized assistance.

There are three key steps in this process:

  1. The Employee Self-Service Agent can identify when the user has reached a point where they need to move to a higher level of assistance via a live agent or a service request. (Note that we also allow the employee to make that determination for themselves.)
  2. We then give them different options for how they want to connect to live support.
  3. When the employee is transferred to a live technician, the Employee Self-Service Agent is able to pass on the chat history from its session with the user. That way, the technician or staff support can quickly get up to speed on the situation, see what the employee has already asked about and tried, and start helping them immediately.

Enabling the employee to quickly and smoothly transition to a higher level of support without leaving the chat increases user satisfaction and makes them more likely to return to the agent the next time they need assistance.
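
The handoff described in step 3 amounts to packaging the session transcript with the escalation reason so the live technician starts with full context. A simplified sketch; the `Session` shape and payload fields here are illustrative, not the agent’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    user: str
    transcript: list[str] = field(default_factory=list)  # chat turns so far

def build_handoff(session: Session, reason: str) -> dict:
    """Assemble an escalation payload for a live agent or service request."""
    return {
        "user": session.user,
        "reason": reason,                        # why escalation was triggered
        "transcript": list(session.transcript),  # full chat history travels along
    }
```

Because the transcript travels with the escalation, the technician sees what the employee has already asked and tried, and the employee never has to repeat themselves.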

Strategic outreach to employees

Of course, your workers, like ours, are busy with their day-to-day job functions. They may be resistant to trying a new tool or going through special training on how to access employee assistance. Or they may just not know about it.

Because of our regionally phased rollout of the agent, email was one of the most effective tools we used to connect with specific audiences and make them aware of the tool. With specific email lists, we could make sure that only employees in that phase of the rollout were seeing the message.

A key aspect of getting our employees to adopt any new tool is reinforcement—the process of sustaining behavior change by providing ongoing incentives, recognition, and support. Some of the reinforcement strategies we used for the agent included:

  • Targeted communications: Emails and organizational messages invited employees to try the agent as they received access
  • Multi-channel campaigns: Promotion of the agent via portals, newsletters, digital signage, and more to keep it at the forefront of employee minds
  • Training: Workshops and micro-learning sessions about the agent
  • Social campaigns: Posts highlighting the tool to increase awareness and gather employee feedback (see details below)
  • Leadership support: Managers modeled usage of the agent and promoted it regularly
  • Processes: The tool was part of regular employee workflows

An example of a fun Viva Engage post that our internal communications team created to encourage daily usage of the Employee Self-Service Agent during the holiday season.

One very important communications channel that we used in our adoption efforts was Microsoft Viva Engage. We set up a private Engage community for the Employee Self-Service Agent, then populated it with each new wave of users as they were given access to the tool (eventually all were given access when the tool went companywide).

We used this channel for various kinds of messaging:

  • General product awareness
  • Updates on new or changing functionality
  • Answering questions or addressing frustrations (two-way dialogue between users and the product team)
  • Fun and helpful “tips and tricks” that users could try (these could come from the product team, leadership, or individual product “champions”)

We also inserted messages about the new agent into our regular communications with different audiences, including HR professionals, IT support personnel, and internal comms staff at the company. And we regularly messaged company leaders about it, so they could encourage their teams and direct reports to support the effort and evangelize for the tool.

Of course, as a natural language chat tool, the Employee Self-Service Agent doesn’t require formalized training. The product itself is designed to guide users and allow them to experiment, simply by stating their needs in plain language. Most employees will already be familiar with AI tools like Microsoft 365 Copilot, so effectively using an AI-powered employee-assistance agent should be a low bar to clear.

Managing expectations

Your Employee Self-Service Agent rollout will be an ongoing journey as you add topic areas, functionalities, and other product features. Your product roadmap will evolve as you learn more about what your employees need with this kind of AI solution.

One factor to consider is how to set realistic user expectations about what the agent can do while the product matures and improves. As we gradually rolled out the tool, we messaged that the agent was in “early preview,” which helped avoid employee disappointment when it couldn’t handle a specific request.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two,” Ajmera says. “That’s why ongoing communications to users was important, as new capabilities were added and speed and accuracy improved.”

We also created messaging for early users indicating that their testing was an integral part of making the tool more effective. This created a positive feedback loop while also keeping employee expectations reasonable.

How we measured success

Carefully tracking and analyzing your success metrics throughout your development and release of the product is a high priority. Without this step, you are working in the dark.

At Microsoft, we identify the key performance indicators (KPIs) for a particular product and then use them as our North Star for any internal release. But the specifics of those KPIs can vary from product to product.

Early results from our internal deployment of the Employee Self-Service Agent showed marked increases in success rates when users sought assistance from an AI tool as compared with existing support channels.

For example, measuring monthly active users (MAU) might be extremely important for an all-purpose productivity tool like Microsoft 365 Copilot. But for an employee-assistance tool, the goal is not necessarily regular use, because employees aren’t constantly facing challenges that require help (we hope). Usage statistics may also be affected by certain events or cyclical needs, such as annual employee reviews or a major technology change (like a significant Windows update).

With this in mind, we identified certain key metrics for the Employee Self-Service Agent. In this case, the top KPIs included:

  • Percentage of support tickets deflected
  • Net satisfaction score
  • Latency period
  • Reliability
  • Total time savings
  • Total cost savings
  • Identified and prioritized issues (reported back to product group)

Overall, we focused on the rate at which employees were able to resolve issues without opening a support ticket, as this would likely generate the greatest return on time and cost savings. We came up with an overall target across the different verticals of 40% ticket deflection, and we’re making solid progress toward this goal as we continue to refine and improve the agent.
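
Ticket deflection can be computed in more than one way; a simple definition, used here purely as an illustration, is the share of agent sessions that did not end in a support ticket.

```python
def deflection_rate(sessions_total: int, tickets_opened: int) -> float:
    """Share of agent sessions that did not result in a support ticket.

    This is one simple way to define deflection; organizations may instead
    compare ticket volume against a pre-agent baseline.
    """
    if sessions_total == 0:
        return 0.0
    return (sessions_total - tickets_opened) / sessions_total
```

For example, 1,000 agent sessions that still produced 650 tickets yield a 35 percent deflection rate, just short of a 40 percent target.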

Part of our measurement process is a monthly progress meeting of key project stakeholders, where all KPIs are evaluated to see if our targets are being met. If the results do not meet expectations, we identify the potential causes and discuss what adjustments need to be made to address these shortfalls.

Key takeaways

Here are some key things to remember when it comes to adoption efforts for your Employee Self-Service Agent:

  • Don’t reinvent the wheel. Most of your change management and adoption strategies for the agent will be the same across different regions and help categories.
  • Line up product sponsors. Finding leaders and others across the organization to help you promote the Employee Self-Service Agent within their own groups, functions, and regions can make a big difference in gaining employee trust and encouraging adoption.
  • Set up proper listening channels. You’ll want to gather as much feedback as possible from your employees as you roll out the agent so you can understand what is working well and what needs improvement. This kind of feedback loop can also make your employees feel heard and help them shape the tool.
  • Make the shift to agent-first help. Employee habits for seeking assistance can be resistant to change. We decided that turning off the “email to create a service ticket” workflow was a great way to nudge our workers to recognize the agent as the first option for their assistance needs.
  • Be strategic in your communications. Use tools like email, Viva Engage, and other appropriate channels to target your messages and encourage a two-way conversation with employees about the agent. Sharing fun tips and encouraging peer support are other ways to increase awareness and engagement with the product.
  • Identify your key metrics. We determined our benchmarks for success for this particular type of agent, then tracked them and made the results available to key stakeholders. This allowed us to measure the impact and effectiveness of the product.

Learn more

How we did it at Microsoft

Although some of the blog posts below are about adoption efforts related to Microsoft 365 Copilot, they can give you ideas on how we promote internal adoption of agentic AI products at Microsoft.

Further guidance for you

Begin your journey with the Employee Self-Service Agent

Agentic AI offers incredible promise to transform employee productivity, giving individuals access to powerful tools that enable them to accomplish more. We believe the Employee Self-Service Agent is another step along that path, allowing workers to get instant help with tasks that used to be cumbersome and time-consuming.

A photo of Fielder.

“We’re excited to get the Employee Self-Service Agent out and into the hands of our customers, so that they can reap the same benefits that we’re already seeing from it. As we continue to refine the product and expand the number of verticals it can cover, we expect to realize exponential efficiency gains and capture even more cost savings across our entire organization.”

Brian Fielder, vice president, Microsoft Digital

Now that you’ve read about our experience deploying the tool, it’s time to start your own journey. Successful implementation means your people will spend less time on the phone with support staff or hunting through web pages and other resources for help with routine employment tasks. Instead, they can devote more time to productive work, with fewer job-related pain points and frustrations.

You can benefit from the lessons we’ve learned and the many helpful features and capabilities that we’ve built into this product, all of which are designed to make your implementation as fast, easy, and effective as possible.

“We’re excited to get the Employee Self-Service Agent out and into the hands of our customers, so that they can reap the same benefits that we’re already seeing from it,” says Brian Fielder, vice president of Microsoft Digital. “As we continue to refine the product and expand the number of verticals it can cover, we expect to realize exponential efficiency gains and capture even more cost savings across our entire organization.”

Key takeaways

Here are some of the essential top-level learnings we gleaned from our deployment of the Employee Self-Service Agent, which you should keep in mind as you start out on your own deployment path:

  • Identify and engage the right people. You’ll need buy-in and advocacy from leaders across the organization; the involvement of key stakeholders from HR, IT, legal, and compliance; and technical guidance from admins, license administrators, environment makers, and knowledge-base subject matter experts.
  • Develop your plan. Understand the major phases of governance, implementation, and adoption of the tool, and make sure that you have adequate resources and support for each phase.
  • Verify the quality of your content. Your chances of success will be better if you undertake a thorough content assessment to address the currency, accuracy, and structure of all relevant knowledge bases. Pay particular attention to the topics and tasks that are in greatest demand by employees when they access help services.
  • Consider a phased rollout. Releasing your Employee Self-Service Agent to progressively larger groups of workers across your organization allows you to gather data and feedback and improve the performance and relevance of the agent over time. You can also expand the number of categories that your agent covers as you go, increasing the impact and appeal of the tool.
  • Communicate strategically to promote adoption. Convincing employees to break longstanding habits when seeking help is a challenge. Email is helpful for targeting specific groups of employees, but be sure to use tools like Viva Engage to create community, answer questions, provide fun tips and tricks, and announce new capabilities and options.
  • Set clear goals and measure against them. Come up with a targeted set of KPIs that reflect your organization’s needs and aspirations, then develop a plan to capture data for each of these indicators and a regular reporting cadence to keep stakeholders informed of progress toward your goals.
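The last takeaway, measuring against a targeted set of KPIs, can be sketched in a few lines. This is a minimal, hypothetical example; the KPI names, targets, and numbers are invented for illustration and are not the actual metrics we track.

```python
# Hypothetical KPI tracking sketch: compare measured values against targets
# and flag shortfalls for a regular stakeholder review.
# All KPI names and numbers here are invented for illustration.

targets = {
    "deflection_rate": 0.60,    # share of queries resolved without a human
    "csat": 4.0,                # average satisfaction score (1-5)
    "weekly_active_users": 10000,
}

def review(measured: dict) -> dict:
    """Return each KPI with its target, actual value, and met/missed status."""
    report = {}
    for kpi, target in targets.items():
        actual = measured.get(kpi, 0)
        report[kpi] = {"target": target, "actual": actual, "met": actual >= target}
    return report

# One month of (invented) measurements; shortfalls feed the review discussion.
monthly = {"deflection_rate": 0.55, "csat": 4.2, "weekly_active_users": 12500}
shortfalls = [kpi for kpi, r in review(monthly).items() if not r["met"]]
```

The point of the sketch is the shape of the loop, not the specific numbers: define targets up front, capture data on a cadence, and surface misses so stakeholders can decide on adjustments.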

Learn more

How we did it at Microsoft

Try it out

We’d like to hear from you!

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.

]]>
22492
Shaping AI management at Microsoft with Agent 365 and Copilot controls http://approjects.co.za/?big=insidetrack/blog/shaping-ai-management-at-microsoft-with-agent-365-and-copilot-controls/ Mon, 09 Mar 2026 13:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22560 AI is moving fast at Microsoft. Every month, we’re discovering new ways that our employees are using Microsoft 365 Copilot and rapidly emerging agentic tools to work smarter, automate routine tasks, and unlock new patterns of productivity. As our ecosystem of AI tools expands, so does our responsibility and opportunity. We have to guide the […]

The post Shaping AI management at Microsoft with Agent 365 and Copilot controls appeared first on Inside Track Blog.

]]>
AI is moving fast at Microsoft. Every month, we’re discovering new ways that our employees are using Microsoft 365 Copilot and rapidly emerging agentic tools to work smarter, automate routine tasks, and unlock new patterns of productivity.

As our ecosystem of AI tools expands, so does our responsibility and opportunity. We have to guide the process with the right structure, clarity, and confidence.

A photo of Fielder.

“With Agent 365, IT leaders can confidently embrace this innovation through a unified control plane that provides the capabilities that enterprises need to ensure agents are governed, observable, and secure—regardless of which tools, frameworks, or models were used to create them.”

Brian Fielder, vice president, Microsoft Digital

We approach the governance of AI as a task we’re shaping in real time while observing the different ways our people are using AI in their daily work.

That’s the advantage of being Customer Zero here in Microsoft Digital, the company’s IT organization. We’re living this transformation across Microsoft 365 every day, evolving our governance model alongside the evolution of AI and agents.

“With Agent 365, IT leaders can confidently embrace this innovation through a unified control plane that provides the capabilities that enterprises need to ensure agents are governed, observable, and secure—regardless of which tools, frameworks, or models were used to create them,” says Brian Fielder, vice president of Microsoft Digital.

Our governance approach is built around two complementary control planes: Microsoft Agent 365 for agents and Copilot controls for Microsoft 365 Copilot.

A photo of Johnson.

“We’ve seen the rapid pace of innovation firsthand. As Copilot evolves and agents expand, the control planes we use must evolve also. New AI and agent capabilities raise the bar for governance and management, so at Microsoft Digital, we’re working with our product teams to evolve the management to keep the company secure, informed, and ready for whatever comes next.”

David Johnson, principal architect, Microsoft Digital

These control planes are supported by the four fundamental concepts that we apply to every enterprise system we operate: security, governance, management, and observability.

“We’ve seen the rapid pace of innovation firsthand,” says David Johnson, principal architect in Microsoft Digital. “As Copilot evolves and agents expand, the control planes we use must evolve also. New AI and agent capabilities raise the bar for governance and management, so at Microsoft Digital, we’re working with our product teams to evolve the management to keep the company secure, informed, and ready for whatever comes next.”

This model gives us a consistent way to support new capabilities, encourage responsible experimentation, and help our employees adopt AI and agents with fewer hurdles.

Expanding our AI governance practices

As AI use evolves within our organization, we’re seeing clear patterns emerging. Copilot goes well beyond chat. It can execute tasks, create and modify content directly inside apps, connect systems, and coordinate multi‑step work through agents. The AI ecosystem is becoming more effective at boosting productivity with model choices, agent-to-agent orchestration, and agent mode within applications that leverage natural language to complete tasks.

These patterns are exciting. They move fast, and they expand how we think about governance.

The shift became clear as teams across Microsoft began experimenting with new AI capabilities in the last few years. Accelerating Copilot usage showed us how quickly people adopt tools to help them work better and faster. Rapid agent growth showed us how much value workers get when AI takes on more complex, multi‑step tasks. These expansions pushed us to evolve our security, governance, and management approaches alongside the technology.

That’s what led us to define two complementary control planes for Copilot and agents—not because one replaces the other, but because they serve distinct roles in the ecosystem. Copilot goes beyond chat, surfacing intelligence directly inside apps and workflows to help people work smarter in the flow of their work. Agents take on broader responsibilities across services, teams, and data boundaries.

By recognizing the different types of work that Copilot and agents do, we’re better equipped to manage and govern them. We can apply consistent principles, tailor the controls to each type of tool, and give employees a clearer understanding of how each AI capability behaves. It’s an approach that grows with technology, instead of forcing everything into a single frame.

Building governance on foundational pillars

As Copilot and agents expand across Microsoft 365 and the rest of our product offerings, we’ve anchored our approach on the fundamentals of security, governance, management, and observability. These principles have shaped our enterprise systems for years. What’s changing is how we apply them to a fast‑moving AI ecosystem.

Security and governance

Security and governance are the baseline for us at Microsoft. Every new capability—whether it’s Copilot helping you draft, find, or create content, or an agent running an automated workflow—must adhere to security and governance principles.

A photo of Powers.

“The Microsoft 365 admin center is becoming the place where controls come together. Policies, observability, and configuration are in a single experience, so admins don’t have to hunt across multiple portals. That consolidation makes it easier for us to understand how AI is behaving in our tenant and what controls we have available to guide it.”

Mike Powers, senior systems engineer and AI admin, Microsoft Digital

Products like Microsoft Purview and Defender allow us to better understand what data our AI tools are accessing, for how long, and where additional guardrails might be needed as features and usage evolve.

Management

Management completes the foundation, and measurement is how we track our progress.

As AI tools take on more responsibility, we needed a unified way to manage access, lifecycle, and configuration. Agent 365 is evolving the Microsoft 365 admin center into the central point for agent management and observability. It brings together agent information and controls that were previously scattered across different admin experiences and puts them in one coherent place.

“The Microsoft 365 admin center is becoming the place where controls come together,” says Mike Powers, a senior systems engineer and AI admin in Microsoft Digital. “Policies, observability, and configuration are in a single experience, so admins don’t have to hunt across multiple portals. That consolidation makes it easier for us to understand how AI is behaving in our tenant and what controls we have available to guide it.”

Measurement is how we track adoption, quality, and business value, such as time saved and reduced operational costs. It’s how we identify what’s working, where to invest next, and how we can guide product teams with real‑world insights. We look carefully at active agents, usage patterns, assisted hours, sentiment, and the outcomes our people achieve with AI. Different audiences share the same goal: using telemetry to make AI better.
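Signals like these can be rolled up from raw telemetry into a simple summary. The sketch below is hypothetical: the event fields and agent names are invented for illustration, and this is not the actual pipeline behind Agent 365 or the admin center reports.

```python
# Hypothetical roll-up of per-agent telemetry events into summary signals:
# active agents, total runs, assisted hours, and average sentiment.
# Event field names and values are invented for illustration.
from collections import defaultdict

def summarize(events: list[dict]) -> dict:
    per_agent = defaultdict(lambda: {"runs": 0, "hours": 0.0, "sentiment": []})
    for e in events:
        a = per_agent[e["agent_id"]]
        a["runs"] += 1
        a["hours"] += e.get("assisted_hours", 0.0)
        a["sentiment"].append(e.get("sentiment", 0))

    scores = [s for a in per_agent.values() for s in a["sentiment"]]
    return {
        "active_agents": len(per_agent),
        "total_runs": sum(a["runs"] for a in per_agent.values()),
        "assisted_hours": sum(a["hours"] for a in per_agent.values()),
        "avg_sentiment": sum(scores) / max(1, len(scores)),
    }

sample = [
    {"agent_id": "helpdesk", "assisted_hours": 0.5, "sentiment": 4},
    {"agent_id": "helpdesk", "assisted_hours": 0.25, "sentiment": 5},
    {"agent_id": "onboarding", "assisted_hours": 1.0, "sentiment": 3},
]
summary = summarize(sample)
```

However the aggregation is implemented, the design choice is the same one described above: different audiences read different slices of the same telemetry, so the roll-up should be computed once and shared.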

Together, these principles allow us to evolve our governance model without slowing innovation. They give us a steady foundation in a rapidly expanding environment—one where Copilot and agents will continue to grow, intersect, and unlock new ways of working.

Observability with Microsoft Agent 365

The widespread use of agents is an accelerating trend here at Microsoft. We use them to automate multi‑step tasks, build applications in plain language, connect systems, and streamline work that previously depended on manual coordination.

As agents grow in number and autonomy, we need a management approach that matches them. That’s what Microsoft Agent 365 gives us—a control plane designed for AI and agentic workloads that operate across platforms and traditional admin boundaries.

Agent 365 provides a registry that lets us discover agents and understand how they behave across Microsoft 365. It shows us who built them, who can use them, and what data they can access. From a single admin console, we can observe and manage agents created across different platforms. Day to day, Agent 365 gives AI admins agent observability we didn’t have before, and a way to connect insight to action.

“Agents represent a significant and growing workload that tenant administrators manage as part of day‑to‑day operations,” Powers says. “Agent 365 helps bring clarity to a diverse and rapidly scaling agent population by providing a centralized place to observe and manage how agents operate. This centralized approach is bringing together admin teams like never before so we can apply broad expertise to agent management.”

That clarity matters.

Agents behave differently than Copilot experiences. They can run continuously, trigger processes automatically, and touch systems across organizational boundaries. By treating them as advanced workloads, we can apply governance that supports experimentation without losing control over the ecosystem.

Agent 365 gives teams the confidence to build agents, knowing there’s a clear, consistent framework behind them. It helps ensure agents scale responsibly, are discoverable, and align to the enterprise patterns that keep Microsoft secure and productive.

Keeping track of Copilot controls

We rely on Copilot controls to give us a unified way to govern how different Copilot experiences show up for employees.

Copilot controls aren’t a single product. They’re a fabric of controls, insights, and guardrails that help us guide Copilot usage as it grows. They bring together settings, reports, and policies that once lived across separate admin surfaces and connect them into one coherent system.

A photo of Ceurvorst.

“Copilot controls bring everything into one place, so admins don’t have to jump across different reports. It gives them a holistic view of Copilot health. That includes licenses, sentiment, usage, and recommendations. It’s everything they need to understand how Copilot is working in our tenant.”

Amy Ceurvorst, director of business programs, Microsoft Digital

At its core, Copilot controls help us manage three things:

  • Who has access
  • How the experience is configured
  • How we measure adoption and value

They’re how we track whether licenses are assigned as expected, whether teams are using Copilot regularly or occasionally, and where configuration gaps may exist. They also recommend changes that can make Copilot more effective and secure.

As Copilot evolves, our Copilot controls will evolve with it. New features, security patterns, and use cases all plug into the same foundation. That gives admins a rhythm they can rely on, even as the technology continues to move rapidly.

It also gives business leaders clearer visibility into how Microsoft 365 Copilot helps people work—how often it’s used, what tasks it supports, and where impact shows up.

“Copilot controls bring everything into one place, so admins don’t have to jump across different reports,” says Amy Ceurvorst, a director of business programs in Microsoft Digital. “It gives them a holistic view of Copilot health. That includes licenses, sentiment, usage, and recommendations. It’s everything they need to understand how Copilot is working in our tenant.”

That clarity is critical. It helps us guide Copilot responsibly without slowing its momentum. It gives our admins confidence in how the experience behaves. It gives our engineering teams the feedback they need to keep improving the platform. And it gives our employees a secure, well‑governed environment where they can adopt Copilot at their own pace.

Applying Agent 365 and Copilot controls as Customer Zero

We use Agent 365 and Copilot controls every day. They help us understand what AI is doing inside Microsoft, how these tools are evolving, and where we need to focus our efforts next.

These systems give us visibility we didn’t have a year ago, as well as a way to move faster without losing alignment across security, IT, and business teams.

A photo of Roberts.

“Measurement tells us what’s really happening. It shows us where people are finding value and where they need help. We can see the friction points, the successful patterns, and the opportunities that aren’t obvious from the surface. Having that level of insight lets us give the product team clear, actionable feedback.”

Tanya Roberts, senior business program manager, Microsoft Digital

Understanding how agents perform in the real world is essential. With Agent 365, we look at what’s being created, what’s actively being used, and which workflows people rely on most. We review how agents are scoped and published, and we check whether they’re operating as expected. These signals help us see emerging patterns—what’s gaining traction, what’s causing confusion, and where we need clearer controls.

The same applies to Copilot.

Copilot controls give us a consolidated view of how Copilot appears across the tenant—licenses, usage, sentiment, and recommended configuration changes. We use that data to advise product groups, flag issues early, and help business teams adopt Copilot in ways that make sense for their work. Internally, these insights reduce friction. Externally, they help shape the product.

Cross‑team collaboration is essential. Security teams watch for data exposure risks. IT teams manage configuration and rollout. Business units surface scenarios they want to enable. We coordinate across all these groups so Copilot and agents can scale smoothly.

Measurement ties it all together.

“Measurement tells us what’s really happening,” says Tanya Roberts, a senior business program manager in Microsoft Digital. “It shows us where people are finding value and where they need help. We can see the friction points, the successful patterns, and the opportunities that aren’t obvious from the surface. Having that level of insight lets us give the product team clear, actionable feedback. We can connect the dots between what people are trying to do and what the technology needs to support next.”

This is how we make AI real and practical. We learn from what happens in production, evolve the controls, and feed those lessons back into the product. It’s an ongoing cycle that grows stronger as adoption increases.

Looking forward

The AI landscape isn’t slowing down. Copilot will keep getting smarter and more broadly used across other apps and services. Agents will take on more complex work. And the boundaries between them will continue to blur as new capabilities emerge across Microsoft 365. That’s why our governance model has to evolve alongside the technology.

We’re designing for a future where AI spans more systems, touches more data, and supports more business processes. That means deeper integration between Agent 365 and our Copilot controls; more connected signals across security, management, and measurement; and governance patterns that hold up no matter how AI capabilities shift.

We expect the control planes we use will continue expanding in ways that give admins even more clarity. We’re looking forward to seeing richer telemetry across Copilot and agents. We plan to develop simpler ways to scope, publish, and update AI workloads. And we anticipate more advanced governance features, which will help organizations understand not just what AI is doing, but why it’s doing it.

Our work with Microsoft product teams as Customer Zero will continue to shape this evolution. As part of this process, we can provide real‑world insights about how AI behaves at enterprise scale. That feedback is already influencing how controls show up in the Microsoft 365 admin center and how Agent 365 is expanding to support new workloads. These feedback loops will only get stronger over time.

We’re building our AI management approach into a living system that adapts to new capabilities, new risks, and new opportunities. A system that supports innovation instead of slowing it down. And one that keeps Microsoft—and our customers—confident as the AI stack keeps changing.

Key takeaways

If you’re establishing governance for Copilot and AI agents in your organization, consider these actions to drive responsible, scalable adoption:

  • Start with governance fundamentals. Use security, governance, management, and observability as your pillars before layering in other tools or processes. Many of the same fundamentals that unblock Copilot are what make a tenant comfortable with knowledge-only agents.
  • Understand the unique and intersecting governance paths for Copilot and agents. They share some fundamentals, but Copilot and agents have distinct AI controls, with different responsibilities, risks, and oversight needs.
  • Use measurement to guide decisions. Track usage, value, sentiment, and friction to understand how AI is performing and where you need to refine the experience.
  • Make governance a shared responsibility. Bring together security, IT, business leaders, and product teams to ensure clarity, alignment, and end‑to‑end control.
  • Design governance that evolves. Adopt controls that can adapt as Copilot grows, agents mature, and new AI capabilities enter the stack.
  • Prioritize clarity for builders and admins. Keep patterns simple, make guidance visible, and ensure that controls are easy to understand so your teams can adopt AI confidently.
  • Invest in the AI admin role. Create space for a dedicated AI admin role and skill up AI admins with deep, cross‑platform expertise, including SharePoint, Power Platform, Azure AI Foundry, Entra identity, and Exchange. Yes, agents will soon have their own mailboxes. In the evolving world of agents, effective administration depends on knowing how the agent lifecycle is tied to the platforms where agents are created and operate.

The post Shaping AI management at Microsoft with Agent 365 and Copilot controls appeared first on Inside Track Blog.

]]>
22560
Powering the new age of AI-led engineering in IT at Microsoft http://approjects.co.za/?big=insidetrack/blog/powering-the-new-age-of-ai-led-engineering-in-it-at-microsoft/ Thu, 05 Mar 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22539 When generative AI burst into the mainstream, it landed in our IT engineering organization like a shockwave. There was excitement, curiosity, skepticism, and no shortage of questions about what this technology meant for the future of IT. At Microsoft Digital—the company’s IT organization—we didn’t start with a grand transformation plan. Instead, we started with a […]

The post Powering the new age of AI-led engineering in IT at Microsoft appeared first on Inside Track Blog.

]]>
When generative AI burst into the mainstream, it landed in our IT engineering organization like a shockwave.

There was excitement, curiosity, skepticism, and no shortage of questions about what this technology meant for the future of IT.

At Microsoft Digital—the company’s IT organization—we didn’t start with a grand transformation plan. Instead, we started with a realization: AI wasn’t just another tool to roll out. It was a fundamental shift in how engineering work could happen.

For years, our IT teams have been focused on scale, reliability, and operational excellence. Those priorities didn’t change. What changed were the possibilities.

Suddenly, engineers could draft code in seconds, summarize complex systems instantly, or automate work that had once consumed hours or days. It was an opportunity to take the skills and capabilities of our people and amplify them with AI.

That realization forced us to step back and ask harder questions.

How do you help thousands of engineers understand what AI can actually do to impact their day-to-day work? How do you move from experimentation to trust? And how do you adopt AI in a way that strengthens engineering fundamentals instead of eroding them?

The answer came in the form of a phased journey grounded in people, culture, and continuous learning.

Phase 1: Awareness and access

It might sound surprising when speaking about engineering processes, but our first challenge wasn’t technology; it was understanding.

When generative AI entered the conversation, most engineers saw the headlines and dabbled in various tools, but few fully understood what it meant for their work. Some were excited, others were wary. Many simply didn’t know where to start. That gap between awareness and practical value was the first barrier we had to address.

We realized early that top-down mandates wouldn’t work. Telling engineers to “use AI” without context or relevance would only deepen skepticism. Instead, we focused on something both simpler and more difficult: Exposure.

We started by making AI visible and accessible in the tools engineers already used. GitHub Copilot. Microsoft 365 Copilot. Early copilots embedded directly into engineering workflows. The goal wasn’t immediate productivity gains. It was familiarity. Letting engineers see, firsthand, what AI could and couldn’t do.

A photo of Singhal.

“We encouraged tool usage and adoption so people would at least play around with AI. And once they did, they started seeing the value. That’s when the mindset shifted from ‘AI might replace me’ to ‘AI can be my companion.’”

Mukul Singhal, partner group engineering manager, Microsoft Digital

Just as important, we talked openly about limitations.

AI wasn’t perfect. It hallucinated. It made confident mistakes. And that honesty mattered. By framing AI as an assistant, we reinforced the role of engineering judgment. Engineers didn’t need to fear losing control. They needed to understand how to stay in control.

We also made experimentation safe.

No quotas. No forced adoption metrics. Engineers were encouraged to try AI on low‑risk tasks: summarizing documentation, generating test cases, or exploring unfamiliar codebases. Small wins built confidence, confidence built curiosity, and curiosity drove organic adoption.

As that experimentation took hold, the mindset began to shift.

“We encouraged tool usage and adoption so people would at least play around with AI,” says Mukul Singhal, a partner group engineering manager in Microsoft Digital. “And once they did, they started seeing the value. That’s when the mindset shifted from ‘AI might replace me’ to ‘AI can be my companion.’”

Over time, conversations changed from ‘Should we use AI?’ to ‘Where does AI help most?’

Engineers began sharing prompts, tips, and lessons learned with one another. What started as individual exploration turned into community learning. Awareness gave way to momentum.

Phase one was about providing access to explore, to question, and to learn. And that foundation made everything that followed possible.

Phase 2: Culture shift

Access created awareness and awareness created curiosity.

As more engineers began experimenting with AI, we noticed a pattern. Some teams were moving faster, learning faster, and reducing friction in their day‑to‑day work. Others stalled after initial trials. The difference wasn’t technical skill or capability; it was mindset.

A photo of Mamilla.

“People started shifting from the mindset of ‘Will AI work?’ to ‘AI is working for me.’ I think that was a very transformational shift, to where I believe a lot of engineers in the organization started believing in AI.”

Veera Mamilla, principal group engineering manager, Microsoft Digital

To move forward, we had to shift how AI was perceived from something optional or experimental to something that was simply part of how modern engineering gets done.

That meant normalizing AI as a trusted partner in the engineering process.

Leaders played a critical role in that shift. Rather than positioning AI as a productivity shortcut, they framed it as a way to strengthen engineering fundamentals: clearer design discussions, better documentation, faster feedback loops, and more time for deep problem‑solving. The message was intentional and consistent. Using AI wasn’t about cutting corners; it was about reimagining how work gets done.

We also had to address a fear that surfaced early: that AI adoption was a signal of replacement rather than empowerment.

“People started shifting from the mindset of ‘Will AI work?’ to ‘AI is working for me,’” says Veera Mamilla, a principal group engineering manager in Microsoft Digital. “I think that was a very transformational shift, to where I believe a lot of engineers in the organization started believing in AI.”

That framing mattered.

As engineers incorporated AI into their workflows, success stopped being measured by output alone. The focus shifted to outcomes. Did AI help you understand a system faster? Did it surface risks earlier? Did it free up time to focus on higher‑value work?

Over time, AI stopped feeling like a novelty. It became part of the engineering fabric. We reinforced it through leadership modeling, peer learning, and shared success stories. Teams no longer asked whether AI belonged in their workflows. They asked how to use it responsibly and effectively.

Phase 3: Upskilling and role evolution

Once AI moved from curiosity to expectation, the challenge of skill building became unavoidable.

From the start, we made a deliberate choice: This would be an upskilling and reskilling journey, not a wholesale replacement of roles. The goal wasn’t a new workforce. It was an investment in the one we had.

That decision shaped everything that followed.

Early upskilling efforts focused on practical entry points. Prompt engineering. Tool literacy. Understanding how copilots and early agents behaved in real engineering workflows. We treated these as something every engineer needed to experiment with, regardless of discipline.

But it quickly became clear that skills alone weren’t the full story. Roles themselves were starting to evolve.

A photo of Singh.

“Your title might still be software engineer or principal engineer. But if you’re acting like an AI engineer, what does that actually mean? That question helped us start defining how these roles were evolving.”

Ragini Singh, partner group engineering manager, Microsoft Digital

Across software development, service engineering, and cloud network engineering, the work was shifting from manual execution toward orchestration and oversight. Engineers were no longer expected to do every task end‑to‑end by hand. Instead, they were learning how to guide AI, review its output, and decide where automation made sense and where it didn’t.

As part of this shift, we began researching how the industry itself was redefining engineering roles. Leaders examined emerging job descriptions from across the market and compared them with Microsoft’s own role frameworks. At the time, there was no formal “AI engineer” role in the internal job library. Rather than creating a new title, the focus stayed on evolving expectations within existing roles.

The idea of an “AI‑native engineer” emerged not as a job description, but as a mindset.

An AI‑native engineer still understands systems, architecture, and risk. What’s different is how that expertise gets applied. Routine tasks are delegated to AI. Judgment, design, and accountability stay with the human. Engineers move from doing all the work themselves to supervising work done in partnership with AI.

“Your title might still be software engineer or principal engineer,” says Ragini Singh, a partner group engineering manager in Microsoft Digital. “But if you’re acting like an AI engineer, what does that actually mean? That question helped us start defining how these roles were evolving.”

This evolution looked different across disciplines. Software engineers focused on AI‑assisted coding, test generation, and spec‑driven development. Service engineers leaned into AI for incident response, knowledge capture, and operational decision support. Cloud network engineers began moving from manual intervention toward intelligent orchestration and agent‑assisted troubleshooting. The common thread wasn’t identical tooling; it was a shared shift toward higher‑order work and reduced toil.

Phase 4: Embedding AI across the engineering lifecycle

By this phase, we knew individual productivity gains were simply the starting point for broader benefits.

Early on, most AI usage showed up in familiar places: Code suggestions, documentation summaries, quick answers. Useful, but fragmented. The bigger opportunity emerged when we stepped back and asked a harder question: What would it look like if AI were embedded across the entire engineering lifecycle, not just used at isolated moments?

We stopped thinking in terms of tools and started thinking in terms of flow. Design. Build. Test. Deploy. Operate. Improve. AI needed to show up across all of it, in ways that reinforced how engineers already worked.

A photo of Sadasivuni.

“If AI is only showing up at one step, you don’t get the full value. The real impact comes when it’s integrated across the lifecycle, where engineers can design, build, operate, and learn faster as a system.”

Sudhakar Sadasivuni, principal group engineering manager, Microsoft Digital

In software engineering, that meant pulling AI earlier into the process. We began using it to help draft requirements, reason through design options, and review code with broader system context to accelerate how quickly we could get to informed decisions. Coding assistance mattered, but it was no longer the center of gravity.

Testing and quality followed a similar pattern. AI supported test generation, defect analysis, and code review, reducing repetitive effort and helping issues surface sooner. That gave engineers more time to focus on quality and architecture instead of cleanup.

In service engineering, we embedded AI into incident management and operational workflows. Engineers used it to summarize incidents, surface relevant knowledge, and analyze signals across systems. In cloud network engineering, AI helped shift work away from manual intervention toward orchestration and intelligent troubleshooting. Across disciplines, the principle stayed the same: AI should reduce friction, not introduce it.

As we scaled this approach, one thing became clear. Embedding AI wasn’t just a technical exercise. It was a systems change.

“If AI is only showing up at one step, you don’t get the full value,” says Sudhakar Sadasivuni, a principal group engineering manager in Microsoft Digital. “The real impact comes when it’s integrated across the lifecycle, where engineers can design, build, operate, and learn faster as a system.”

As AI became part of core workflows, engineers remained accountable for outcomes. AI output was reviewed, tested, and validated like any other engineering input. Embedding AI didn’t lower the bar for rigor. It raised expectations around judgment, oversight, and data quality. We became more deliberate about responsibility and governance.

Over time, these integrations created compound benefits.

Faster design cycles reduced downstream rework. Better testing lowered operational noise. Improved operational insight shortened recovery times. AI stopped being something we used occasionally and became something the engineering system itself was built around.

Phase 5: Eliminating toil and accelerating outcomes

At some point, every AI story hits the same test. Does it actually make engineers’ days better? For us, that proof showed up fastest in elimination of toil.

Across Microsoft Digital, engineers have always spent time on work that was necessary but draining: manual troubleshooting, repetitive diagnostics, log analysis, and routine operational tasks that kept systems running but didn’t move the organization forward.

AI gave us a chance to change that.

A photo of Garrison.

“Toil reduction is the biggest thing. That’s where engineers’ eyes light up. If we can eliminate toil, engineers will flock to use AI. I really believe it.”

Beth Garrison, principal cloud network engineer, Microsoft Digital

In cloud network engineering, for example, troubleshooting used to require manually reconstructing what happened, such as logging into devices, chasing configurations, and piecing together context after the fact. As we began introducing agents and machine learning into these workflows, that work shifted. Instead of spending time assembling the picture, engineers could generate the views they needed faster and focus on resolving issues.

The same shift showed up in how we used operational data.

Rather than reacting to incidents after impact, we started using machine learning to analyze logs, identify patterns, and surface anomalies earlier. That moved teams from reactive response toward proactive monitoring and prevention.
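The pattern-spotting step described above can be illustrated with a small sketch. This is our own simplified example, not Microsoft Digital’s production pipeline: it flags any time bucket whose error count jumps well outside its trailing baseline, using a basic z-score heuristic.

```python
from statistics import mean, stdev

def flag_anomalies(error_counts, window=6, threshold=3.0):
    """Flag time buckets whose error count deviates sharply from
    the trailing window's baseline (simple z-score heuristic)."""
    anomalies = []
    for i in range(window, len(error_counts)):
        baseline = error_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; no meaningful deviation score
        z = (error_counts[i] - mu) / sigma
        if z > threshold:
            anomalies.append((i, round(z, 1)))
    return anomalies

# Hourly error counts: steady background noise, then a sudden spike.
counts = [4, 5, 3, 4, 6, 5, 4, 5, 40]
print(flag_anomalies(counts))
```

Production systems use far richer signals and models, but the principle is the same: surface the deviation before it becomes an incident, instead of reconstructing it afterward.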

One thing became clear very quickly: Toil reduction wasn’t just a benefit; it was the catalyst for adoption.

“Toil reduction is the biggest thing. That’s where engineers’ eyes light up,” says Beth Garrison, a principal cloud network engineer at Microsoft Digital. “If we can eliminate toil, engineers will flock to use AI. I really believe it.”

Service engineering followed a similar arc.

Across governance, operations, productivity, and cost management, we began applying agents and automation to simplify complex work and reduce manual review cycles. Governance and compliance workflows became faster and more consistent. Operational processes benefited from guided remediation and earlier insight. Knowledge capture improved as documentation and remediation guidance could be generated and updated automatically.

When we removed repetitive work such as manual triage, rote diagnostics, and endless documentation cleanup, we transformed how engineers spent their time. More focus on design. More proactive problem‑solving. More energy directed toward improving systems instead of just maintaining them.

Toil reduction made the value of AI tangible. That was the moment AI stopped being interesting and became indispensable, and our engineering teams started asking where else they could apply it.

Measuring what matters

By the time AI was embedded across our engineering lifecycle, a new question came into focus: “How do we know it’s working?”

In the early days, we paid close attention to usage: Which tools engineers were trying, where adoption was growing, and where it stalled. Those signals mattered, and adoption was the leading indicator that people were getting comfortable and starting to integrate AI into real work.

“Adoption was always the starting point. But we were clear from the beginning that usage isn’t the destination. The real goal is impact; more time for engineers to focus on the work that truly matters.”

Ullas Kumble, principal group software engineering manager, Microsoft Digital

But using AI doesn’t automatically mean better outcomes. So, we shifted the conversation and started asking, “What’s different now that our engineers are using AI?”

That change reframed how we thought about measurement. We began looking beyond tool activity to understand impact across the engineering system. Faster design cycles. Earlier defect detection. Reduced time spent on repetitive operational work. Shorter incident resolution. Clearer documentation. Fewer handoffs. Less rework.

These weren’t abstract metrics. They showed up in the flow of work.

We were intentional about not forcing a single definition of value across every role. Software engineers, service engineers, and cloud network engineers experience impact differently. What mattered was that each team could point to tangible improvements in how work moved through the system.

That perspective shaped how leadership talked about success.

“Adoption was always the starting point,” says Ullas Kumble, a principal group software engineering manager at Microsoft Digital. “But we were clear from the beginning that usage isn’t the destination. The real goal is impact; more time for engineers to focus on the work that truly matters.”

Over time, this approach changed the quality of our conversations. Instead of debating whether AI was worth the investment, teams talked about where it was removing friction and where it still wasn’t delivering enough value. Measurement became a tool for learning and prioritization.

Moving forward

Looking ahead, one lesson stands out: This journey isn’t complete.

AI tools will continue to evolve. Agents will become more capable. Roles will keep shifting. What it means to be an engineer will continue to change. And that means our approach must stay grounded in the same principles that guided us from the start: invest in people, reinforce fundamentals, embed AI into real workflows, and stay honest about what’s working and what isn’t.

We didn’t set out to build an AI‑driven engineering organization overnight; we built it phase by phase.

By meeting engineers where they were.
By reshaping culture before redefining roles.
By embedding AI across the lifecycle, not bolting it on.
By reducing toil and measuring impact where it mattered most.

The result is better engineering: powered by AI, guided by human judgment, and built to keep evolving.

Key takeaways

Here’s a set of approaches you can take to establish AI-led engineering for your organization:

  • Start with access and understanding. Give engineers safe, easy access to AI in the tools they already use so curiosity and confidence can develop organically before you push for outcomes.
  • Frame AI as a partner, not a replacement. Position AI as an assistant that strengthens engineering judgment and fundamentals rather than a shortcut or a threat to roles.
  • Normalize experimentation without pressure. Encourage low‑risk experimentation and peer sharing instead of mandates, allowing adoption to grow through visible, practical wins.
  • Invest in upskilling. Focus on evolving skills and expectations within existing roles so engineers learn how to guide, review, and stay accountable for AI‑assisted work.
  • Embed AI across the full engineering lifecycle. Look beyond isolated productivity gains and integrate AI into design, build, test, operate, and improve workflows to unlock system‑level impact.
  • Measure impact where engineers feel it. Move past usage metrics and track outcomes like reduced toil, faster feedback, and improved flow so teams can see where AI is truly making work better.

Try it out

Try GitHub Copilot.

The post Powering the new age of AI-led engineering in IT at Microsoft appeared first on Inside Track Blog.

]]>
22539
The Frontier Firm: How knowledge workers are forging their own AI tools at Microsoft http://approjects.co.za/?big=insidetrack/blog/the-frontier-firm-how-knowledge-workers-are-forging-their-own-ai-tools-at-microsoft/ Thu, 05 Mar 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22549 Knowledge workers have all been there. Maybe you’re a product manager with a backlog that you can’t ever get to. Perhaps you’re a designer who can never seem to get engineering resources assigned to you. Or maybe you’re a program manager who routinely gets stuck copying data between systems by hand. Engage with our experts! […]

The post The Frontier Firm: How knowledge workers are forging their own AI tools at Microsoft appeared first on Inside Track Blog.

]]>
Knowledge workers have all been there.

Maybe you’re a product manager with a backlog that you can’t ever get to. Perhaps you’re a designer who can never seem to get engineering resources assigned to you. Or maybe you’re a program manager who routinely gets stuck copying data between systems by hand.

These are common challenges knowledge workers face everywhere, including here at Microsoft. A year ago, AI enthusiasts knew agents with tools could fix these problems—they just didn’t know where to start.

Some of our employees in Microsoft Digital, the company’s IT organization and its Customer Zero, took a grassroots approach to solving this problem. They built something called the Frontier Forge, our pro‑code “harness” that enables our less-technical employees to get work done with agents. They use it to quickly build agentic instructions and instantly share their solutions with peers, which accelerates our productivity across the company.

The Frontier Forge represents a cultural shift in how our product managers, designers, program managers and other “I’m not an engineer but I want to build stuff” employees now apply AI tools directly to their work.

What first began as a hackathon experiment has evolved into a thriving Microsoft-internal community with nearly 100 engaged contributors, an active Teams channel, and a GitHub repository filled with templates, learning modules, and ready-to-use AI agents. The impact is measurable: Forecasting, backlog grooming, and communication tasks that collectively took weeks now take hours or minutes.

A photo of Reifers.

“I saw myself and others spending too much of our time on data wrangling and admin tasks when we wanted to be strategizing. Nobody was building what felt truly agentic. So, we did it ourselves.”

Brett Reifers, senior product manager, Microsoft Digital

Employees who never saw themselves as technical are now building sophisticated data visualizations, automating workflows, creating prototypes, and generating learning modules. These were capabilities previously reserved for specialized engineering teams.

The “Forge” is where it’s all happening now.

From a hackathon to a movement

In early 2025, Brett Reifers, a senior product manager in Microsoft Digital, spotted a problem he couldn’t ignore. His peers, smart and driven product managers, kept asking the same question: “How do I use agents for my actual work?”

Beginner tutorials about prompt engineering felt trivial. Advanced agents with tools assumed engineering expertise. The middle ground, where AI meets real jobs, didn’t exist.

“I saw myself and others spending too much of our time on data wrangling and admin tasks when we wanted to be strategizing,” Reifers says. “Nobody was building what felt truly agentic. So, we did it ourselves.”

So, Reifers partnered with colleague Humberto Arias, a senior product manager in Microsoft Digital whose work explores the intersection of AI and productivity. Arias had been independently researching agentic solutions that could click through interfaces, open applications, and complete tasks autonomously.

The insight that unlocked everything came from a deceptively simple observation:

“Everything on the internet is a form—every site, mobile app, every click,” Reifers says. “If agents could fill out my forms in Azure DevOps, they could handle any web-based task.”

They pitched the concept of Copilot fulfilling form-based processes as an entry for Microsoft’s annual hackathon to Sean MacDonald, partner director of product management in Microsoft Employee Experience. MacDonald immediately recognized its potential.

“My reaction was simply, ‘This sounds amazing,’” MacDonald says. “This solution was exactly what we needed.”

The event proved agents could automate PM workflows: managing Azure DevOps items, generating summaries, and querying data systems. After the hackathon validated the concept, Arias suggested pushing the project to GitHub for wider exposure. Reifers then used GitHub Copilot itself, recursively using the very tools they were building, to open source the first Frontier Forge repository in 15 minutes.

A pro-code environment with natural language accessibility

The Forge combines GitHub Copilot, Visual Studio Code (VS Code), and MCPs into a framework that makes professional development tools easily accessible to non-engineers.

A photo of MacDonald.

“The Frontier Forge is a place where you can learn regardless of your skill level. You can adopt what’s out there, even if you don’t know where to start.”

Sean MacDonald, partner director of product management, Microsoft Employee Experience

The core idea: Give employees a workspace seeded with community-created templates, learning modules, and custom agents tailored to Microsoft Digital contexts. Then let them build from there.

For MacDonald, the Forge has proven to be an accessible entry point for almost anyone, regardless of experience.

“The Frontier Forge is a place where you can learn regardless of your skill level,” MacDonald says. “You can adopt what’s out there, even if you don’t know where to start.”

Screenshot showing GitHub Copilot connecting with VS Code.
GitHub Copilot connects chat to VS Code’s built-in and MCP tool capabilities. The custom agents and skills in the workspace can all benefit from contextual access to the right tools for the right job.

An architecture for context-first AI

The technical architecture of The Frontier Forge leverages three layers simultaneously:

  • VS Code provides the enterprise managed workspace where everything happens.
  • GitHub Copilot offers chat functionality and AI assistance, with access to multiple models including Claude, GPT, and Gemini.
  • Model Context Protocol (MCP) servers act as standardized connectors that let agents access tools, data, and services locally, expanding what Copilot can decide and do with user approval.
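The real Model Context Protocol is JSON-RPC based and implemented through SDKs; the stripped-down Python sketch below (all names are ours, purely illustrative) only shows the shape of the pattern: a server advertises named tools, and an agent invokes them with the user’s approval.

```python
import json

class ToolServer:
    """Toy stand-in for an MCP server: it advertises named tools
    and executes a call on request. (Illustrative only; real MCP
    servers speak JSON-RPC via an SDK.)"""
    def __init__(self):
        self.tools = {}

    def tool(self, name, description):
        def register(fn):
            self.tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        return [{"name": n, "description": t["description"]}
                for n, t in self.tools.items()]

    def call(self, name, arguments, approved):
        if not approved:  # the user stays in the loop
            return {"error": "call not approved by user"}
        result = self.tools[name]["fn"](**arguments)
        return {"content": json.dumps(result)}

server = ToolServer()

@server.tool("list_work_items", "Query backlog items by state")
def list_work_items(state):
    # Hypothetical data source; a real connector would query Azure DevOps.
    items = [{"id": 101, "state": "Active"}, {"id": 102, "state": "Closed"}]
    return [i for i in items if i["state"] == state]

print(server.list_tools())
print(server.call("list_work_items", {"state": "Active"}, approved=True))
```

Because the tool list is machine-readable, the agent can discover what it’s allowed to do at runtime, which is what makes new connectors immediately usable without changing the agent itself.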

A photo of Arias.

“With GitHub Copilot and MCPs, there are literally no boundaries. It’s hard to explain just how transformational this can be for a product manager. Whatever you ask is transformed into code with a purpose, allowing you to do something you couldn’t before.”

Humberto Arias, senior product manager, Microsoft Digital

The MCPs connect to services like Azure DevOps (for roadmap planning and backlog management), Microsoft Documentation, Figma (for design work), and dozens of other platforms that are essential to product manager workflows. New MCPs appear daily, expanding capabilities organically as the community builds them.

Employees can even ask GitHub Copilot to build custom MCPs for services lacking official integrations. When Arias needed a PowerPoint creator that didn’t exist, he asked GitHub Copilot to create one.

“With GitHub Copilot and MCPs, there are literally no boundaries,” Arias says. “It’s hard to explain just how transformational this can be for a product manager. Whatever you ask is transformed into code with a purpose, allowing you to do something you couldn’t before.”

The shift from prompt engineering toward context engineering is another reason the Forge works. Its workspace settings, agent instructions, skills, and hooks provide a harness with guardrails that helps colleagues adopt and use it.

The Forge provides a curated starting point: Microsoft Digital-specific templates, governance frameworks, security guidelines grounded in Microsoft’s Responsible AI framework, and working examples employees can immediately use and modify.

Transformational impact

The productivity gains generated by The Frontier Forge are very real. Our employees report saving weeks or even months on certain projects, especially those that previously required extensive manual work or specialized technical skills.

Case in point: Laura Oxford, a senior content program manager in Microsoft Digital, had four years’ worth of Excel files and communication metrics reports. She had always intended to use the data to create marketing forecasts, but she could never find the necessary time or resources to perform the analysis.

A photo of Oxford.

“The key to creating the agent was going deep into the context. It was an iterative conversation, going back and forth to fine-tune the agent until I was consistently getting the output I wanted. But it truly was just a conversation—no tech skills needed.”

Laura Oxford, senior content program manager, Microsoft Digital

Through iterative, conversation-based prompting, Oxford’s agent analyzed patterns, created projections, and produced visualizations. Oxford now has a robust historical analysis that enables prediction of future campaign performance.

“The key to creating the agent was going deep into the context,” Oxford says. “It was an iterative conversation, going back and forth to fine-tune the agent until I was consistently getting the output I wanted. But it truly was just a conversation—no tech skills needed.”

Drafting clear, executive-ready communications for complex initiatives was what brought Mark Stratford, a senior product manager with the email and calendaring service team in Microsoft Digital, to the Forge.

Before the Forge, communicating status updates to leadership meant he had to manually synthesize data from CSVs, track several approval chains at once—often in messy emails—and iterate on visualizations for what seemed like days and days.

Put more succinctly, these tasks are time-consuming chores that are perfect for AI.

“The Forge’s architecture changes how you think about the problem,” Stratford says. “Instead of iterating on prompts, you declare intent and desired outcome. The Forge’s architecture handles the rest.”

Using this pattern, Stratford created:

  • Over a dozen interactive dashboards for portfolio roadmaps, migration tracking, and service health monitoring.
  • Approval matrix visualizations mapping multi-stakeholder sign-off dependencies.
  • Data analysis pipelines transforming raw telemetry into executive-ready narratives.

A photo of Stratford.

“I didn’t need to fight ambiguity or handhold the model. The architecture gave the agent a stable, skills-driven foundation from the start, which dramatically accelerated development time and improved clarity.”

Mark Stratford, senior product manager, Microsoft Digital

The Forge’s clean separation between intent, constraints, tools, and data inputs eliminated the prompt-tuning loop. Stratford mapped his objectives into the agent framework once, relying on built-in structure and guardrails.

His analysis and drafting time dropped from days to minutes. Outputs like roadmaps and data visualizations went directly into decision workflows with no manual cleanup required.

“I didn’t need to fight ambiguity or handhold the model,” Stratford says. “The architecture gave the agent a stable, skills-driven foundation from the start, which dramatically accelerated development time and improved clarity.”
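Declaring intent rather than iterating on prompts can be sketched as a structured task spec. This is our own hypothetical illustration of the idea, not the Forge’s actual schema: intent, constraints, tools, and data inputs are separated, then rendered into instructions for the agent.

```python
from dataclasses import dataclass, field

@dataclass
class AgentTask:
    """Hypothetical task spec in the spirit of the Forge's separation
    of intent, constraints, tools, and data inputs (our sketch)."""
    intent: str
    constraints: list = field(default_factory=list)
    tools: list = field(default_factory=list)
    inputs: list = field(default_factory=list)

    def to_instructions(self):
        # Render the spec as a stable instruction block for the agent.
        lines = [f"Goal: {self.intent}"]
        lines += [f"Constraint: {c}" for c in self.constraints]
        lines += [f"Available tool: {t}" for t in self.tools]
        lines += [f"Input: {i}" for i in self.inputs]
        return "\n".join(lines)

task = AgentTask(
    intent="Summarize migration status for leadership",
    constraints=["One page", "No raw telemetry in the body"],
    tools=["azure_devops_query", "chart_renderer"],
    inputs=["migration_status.csv"],
)
print(task.to_instructions())
```

The payoff is repeatability: changing the outcome means editing one declared field, not re-tuning a free-form prompt until the output happens to look right.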

Building community and sharing knowledge

A simple, continuously improving repository has grown into something larger: a community of nearly 100 enthusiasts. Contributors are building templates, learning modules, and specialized MCPs tailored to their job functions. Teams are sharing wins and unlocked achievements.

“At its core, The Frontier Forge is an open-source, community‑driven experience. It’s a safer environment that will help people learn and apply Microsoft’s AI at work.”

Brett Reifers, senior product manager, Microsoft Digital

The Forge succeeds because of its emphasis on community and knowledge sharing. Its GitHub repository serves as a collaborative workspace where employees contribute agents, templates, and learning resources.

This sharing culture creates a compounding cycle. One employee’s outcome becomes another’s starting point. Contributors share useful agents immediately, without lengthy approvals. This grassroots approach lets innovation spread at the pace of curiosity.

“At its core, The Frontier Forge is an open-source, community‑driven experience,” Reifers says. “The Forge is a safer environment that will help people learn and apply Microsoft’s AI at work.”

Building a safe-to-fail path

For IT leaders looking to replicate something like the Forge, MacDonald’s guidance starts with reframing the challenge.

“Find the people who are super curious and who want to learn. They will be the ones who drive innovation with AI agents and other newly developed tools.”

Sean MacDonald, partner director of product management, Microsoft Employee Experience

The barrier to agent adoption for non-engineering roles isn’t access to tools. It’s about giving people the confidence to build agents and then put them to work. Providing a safe, hands-on environment where people can learn at their own pace, regardless of skill level, has been an essential key to success.

Another key has been to empower the people in your organization who are eager to innovate and try new things. The Forge began with two curious product managers who decided to experiment and then shared their idea with peers.

“Find the people who are super curious and who want to learn,” MacDonald says. “They will be the ones who drive innovation with AI agents and other newly developed tools.”

For IT leaders currently trying to prepare their organizations for an AI-driven future, the story shows that the answer isn’t to wait around for perfect tools or comprehensive employee training.

“The leaders that create safe spaces for non-engineers to build with AI now will compound that advantage for years,” Reifers says. “The ones that wait will spend 2027 trying to catch up.”

Our knowledge workers don’t need to wait for help any longer; now they can forge their own path with an agent or other AI tool they build themselves.

Key takeaways

Here are some insights your leaders can use to build grassroots-led, AI-forward communities in your organization:

  • Start with volunteers, not mandates. The Forge grew to 100 contributors with zero top-down requirements. Organic growth from curious employees creates sustainable adoption.
  • Highlight your quick wins. Reifers’ and Arias’ live demos of MCPs, Oxford’s 90-minute forecast, and Stratford’s 20-minute drafts became the recruiting pitch for the next wave of adopters. Show your people results like these, then hand them the tools.
  • Lower barriers without lowering standards. Accessibility and quality aren’t mutually exclusive. Governance and security are non-negotiable. Configure it all into the harness.
  • Prioritize knowledge sharing and attribution. When one person solves a problem and shares it, dozens benefit immediately. Reward provenance.
  • Ship fast, improve later. The Forge repo was built in 15 minutes. Four months later, it contained 50+ templates and agents. As much as 80% of what is produced in the Forge is rewritten every other week as tools evolve. Ship MVPs and evolve based on real usage.
  • Reframe outcomes > tools. Shifting from “developer tool” to “Copilot workspace” helps knowledge workers see they belong.

The post The Frontier Firm: How knowledge workers are forging their own AI tools at Microsoft appeared first on Inside Track Blog.

]]>
22549
Read our seven tips for shifting to a ‘cloud native’ device management strategy http://approjects.co.za/?big=insidetrack/blog/read-our-seven-tips-for-shifting-to-a-cloud-native-device-management-strategy/ Thu, 19 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22433 At Microsoft, we manage a large, diverse device estate, with more than 1 million devices in use by employees and teams across our global corporate network. For years, we stitched together insights across multiple tools, wrote custom queries, and maintained fragile reports just to answer basic questions. This approach slowed investigations and delayed patch targeting. […]

The post Read our seven tips for shifting to a ‘cloud native’ device management strategy appeared first on Inside Track Blog.

]]>
At Microsoft, we manage a large, diverse device estate, with more than 1 million devices in use by employees and teams across our global corporate network.

For years, we stitched together insights across multiple tools, wrote custom queries, and maintained fragile reports just to answer basic questions. This approach slowed investigations and delayed patch targeting.

We needed a faster, stronger, cloud-native path.

“We’re investing in AI-powered predictive maintenance and intelligent troubleshooting to reduce friction in device management.”

Daniel Manalo, principal service engineer, Microsoft Digital

The advent of generative AI changed the way we manage our devices. Not only were we able to ask better questions and get targeted help right from the start, we also got faster and more relevant answers from across our entire device management estate.

It’s simpler. It’s faster. It scales with our environment. And we’re doing it natively in the cloud.

“We’re investing in AI-powered predictive maintenance and intelligent troubleshooting to reduce friction in device management,” says Daniel Manalo, a principal service engineer in Microsoft Digital, the company’s IT organization.

AI and machine learning help us find errors faster and, in many cases, fix them autonomously. This reduces our downtime, prolongs the lifespans of our devices, and ensures our employees have a consistent and productive experience with their devices.

Today, we’re applying this approach to everyday operations: Speeding investigations, simplifying updates, and tightening the loop from detection to remediation. The overarching goal remains consistent—reduce workloads, improve clarity, and move our discoveries to earlier in the risk window.

The role of Customer Zero in evolving modern device management

We serve as the company’s Customer Zero for our products here in Microsoft Digital. We run early capabilities in our own tenant, pressure‑test them at Microsoft scale, and feed what we learn straight back to engineering. The goal is simple: Turn good ideas into reliable features that any enterprise can use.

A photo of Selvaraj.

“We use our collective learnings from our internal deployments to improve our products, which makes them better for our employees and for our customers.”

Senthil Selvaraj, principal group product manager, Microsoft Digital

Our Microsoft Digital teams work side-by-side with the Intune product group to modernize our device management approach. The Intune group builds and operates the platform, while we bring real‑world scenarios, signals, and guardrails. Together, we help develop, test, and deploy a better cloud-native product for our customers.

“We use our collective learnings from our internal deployments to improve our products, which makes them better for our employees and for our customers,” says Senthil Selvaraj, a principal group product manager in Microsoft Digital.

For the same reasons, we work hard to make sure that we deploy our tools and services in the same way our customers do.

“That enables everyone at the company to have good visibility into the experiences our customers will have when our products get to them,” Selvaraj says. “This makes us more accountable to our customers and helps us move quickly when improvements are needed.”

Customer Zero for device management spans more than Intune.

We partner across teams responsible for Microsoft Purview, Microsoft 365 Copilot, Microsoft Defender, Windows (Autopatch and Hotpatch), GitHub, and Microsoft Azure to produce comprehensive device management capabilities. These are the surfaces where we test, learn, and refine the end‑to‑end device management experience.

The loop is tight. We identify a need, prototype a solution with the product groups, roll it out to targeted rings, measure impact, and iterate. Those learnings inform what ships in Intune—from data-driven insights to built‑in prompts that surface device health data as a conversation, rather than a simple query.

“Using natural language reduces the time it takes us to figure out what’s going on. We are able to ask Security Copilot questions naturally, which allows us to hear the signals that need our immediate action faster.”

Mohit Malhotra, product manager, Microsoft Digital

The result is a safer, faster path to value with AI-driven device management, including clear ownership, faster remediation, and features that arrive tested against operational reality.

We’ve learned a lot as Customer Zero, and we’re passing those lessons on to you.

Modern device management: Seven tips

Here are seven important tips that we’ve compiled to help with your device management efforts.

Tip 1: Ask natural-language questions with Microsoft Security Copilot

We use the generative AI capabilities in Microsoft Security Copilot to query device and vulnerability data in plain language and get a unified answer that we can act on.

This lets us replace bespoke reports with targeted questions.

“Using natural language reduces the time it takes us to figure out what’s going on,” says Mohit Malhotra, a product manager in Microsoft Digital. “We are able to ask Security Copilot questions naturally, which allows us to hear the signals that need our immediate action faster.”

Security Copilot lets us ask about device posture, app versions, cybersecurity vulnerabilities (known as Common Vulnerabilities and Exposures, or CVEs), and exposure across Microsoft Defender and Intune, without stitching the data together by hand. We get the context we need and move faster from finding to fixing.

How we use it

  • Scope impact: “List Windows devices running <app/version> that are vulnerable, with owners and deployment rings.”
  • Prioritize work: “Group affected devices by business unit and model; show counts and severity.”
  • Verify reach: “Confirm which devices received <policy/package> in the last 48 hours; flag failures.”

Prompts we rely on

  • “Show devices affected by <CVE/app version> and summarize recommended remediation steps.”
  • “Break down exposure by ring and list top 5 models with highest risk.”
  • “Identify outliers that failed the last policy sync and provide reasons.”

Why it helps

  • Less toil: No custom pipelines to maintain.
  • Faster triage: Discovery and scoping happen in one interaction.
  • Clear next steps: Results align to our Intune targeting and scheduling paths.

Best practices

  • Start specific: Name the product, version, and time window, then broaden as needed.
  • Keep follow‑ups short: Quick pivots like “group by region” or “add owner emails” maintain momentum.
  • Act on the output: Use the device lists to target updates or policies in Intune, then validate results with a final check.

Note

  • We align usage with least‑privilege access and established approval paths so insights come from authoritative sources and actions land through the right channel.

Tip 2: Find knowledge fast with Microsoft 365 Copilot

We use Microsoft 365 Copilot to pull device context from email, chats, and documents, allowing us to troubleshoot issues faster and easier using generative AI.

Incidents start with questions, not dashboards: Who owns this package? When did we change that policy? Where did we discuss the driver rollback?

The answers to those questions live in mail threads, Teams chats, and planning docs. Before Copilot, we were forced to sift through these materials manually, which cost us time. Now we ask one question and get a summary with sources, people, and links. That keeps the investigation moving and reduces handoffs.

A photo of Griswold.

“Copilot helps scan noisy logs and points us to likely causes. Our old process of opening logs, interpreting opaque error strings, and validating a hunch took too long. Getting faster answers matters when incidents stack up.”

Michael Griswold, principal service engineering manager, Microsoft Intune

This also helps us during the coordination phase. We can surface the approver for a change, the engineer who ran the last mitigation, and the runbook section that explains the rollback steps. We make better decisions because we see the history and the intent, not just the current state. Then we line up the action in Intune with the right stakeholders already looped in.

How we use it

  • Asking for recent context on a device model, configuration, or app to see decisions and outcomes in one place.
  • Retrieving owners, approvers, and on‑call contacts named in Outlook and Teams messages related to the issue.
  • Pulling change notes and runbook updates tied to a policy or package before we request an update in Intune.

Prompts we rely on

  • “Summarize recent emails and Teams messages about <device model/app version> and list owners mentioned.”
  • “Find the change note or runbook update for <policy/package> from the last 14 days.”
  • “Show known issues linked to <KB/app> and who resolved the last occurrence.”

Why it helps

  • Less hunting: We replace ad hoc inbox and wiki searches with a single query.
  • Faster coordination: We identify the right stakeholders and prior decisions immediately.
  • Better decisions: We confirm history and context before proposing changes in Intune.

Best practices

  • Keep prompts scoped. Include product, version, and a timeframe to focus your results.
  • Respect boundaries. Align usage with least‑privilege access and existing approval and auditing paths.
  • Capture outcomes. Link summaries, owners, and key docs back to the incident record so future searches return richer context.

Note

  • Copilot gets better as more decisions and runbooks live in Microsoft 365, since that’s where the signals come from.

Tip 3: Accelerate log triage with GitHub Copilot, Visual Studio Code, and Log Analytics

We use GitHub Copilot in Visual Studio Code with Azure Monitor Log Analytics to explain errors, draft KQL, and shorten device log investigations.

“Copilot helps scan noisy logs and points us to likely causes,” says Michael Griswold, a principal service engineering manager with the Microsoft Intune product group. “Our old process of opening logs, interpreting opaque error strings, and validating a hunch took too long. Getting faster answers matters when incidents stack up.”

Now we keep the entire loop in one workspace. AI in GitHub Copilot interprets the event, proposes likely causes, and generates KQL to confirm or rule out scenarios. We move from symptom to validated pattern without bouncing across tools.

How we use it

  • Connect VS Code to your Log Analytics workspace and load the tables you need (e.g., inventory and update events).
  • Paste a minimal log sample with timestamps and device identifiers, so Copilot has context.
  • Ask Copilot to summarize the error, suggest probable causes, and produce KQL to test each path.
  • Run the query, review clusters and outliers, and request an alternate query or grouping if noise is high.

Prompts we rely on

  • “Explain this error in a device‑management context and list three validation checks.”
  • “Write KQL to find matching failures in the last 24 hours and group by model and policy.”
  • “Join device inventory with update events for a given device and surface anomalies.”
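Because the same handful of questions recur across incidents, it helps to keep the generated KQL parameterized rather than pasted fresh each time. Here's a minimal sketch of that idea; the table and column names (DeviceUpdateEvents, Model, PolicyName, ResultCode) are illustrative assumptions, not our actual schema:

```python
# Hypothetical KQL template for grouping device update failures.
# Table and column names are illustrative assumptions, not a documented schema.
KQL_TEMPLATE = """
{table}
| where TimeGenerated > ago({window})
| where ResultCode != 0
| summarize Failures = count() by {group_by}
| order by Failures desc
"""

def build_failure_query(table: str, window: str = "24h",
                        group_by: str = "Model, PolicyName") -> str:
    """Fill in the template so the same query can be reused per incident."""
    return KQL_TEMPLATE.format(table=table, window=window,
                               group_by=group_by).strip()

# Reuse the same pattern with a wider time window during a stacked incident.
query = build_failure_query("DeviceUpdateEvents", window="48h")
```

In practice the resulting string would be run against the workspace, for example through the Azure Monitor Query client library, and the grouped failure counts map directly onto Intune targeting.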

Why it helps

  • Faster pattern recognition: Proposed queries get us to evidence quickly.
  • Less context switching: Analysis and validation happen inside VS Code.
  • Cleaner handoff: Results map to our Intune actions for targeted remediation.

Best practices

  • Keep inputs tight: Provide a small, representative log snippet, the affected device attributes, and a precise time window.
  • Iterate on queries: Ask for different filters, joins, or time ranges when results are noisy.
  • Close the loop: Use the device list to drive policy or update changes in Intune and confirm fixes with a final query.

Note

  • This workflow is broadly repeatable with GitHub Copilot, Visual Studio Code, and Azure Monitor Log Analytics.

Tip 4: Keep firmware and drivers current with Intune update management

We use Intune firmware and driver update management to identify, approve, and deploy our OEM updates at scale.

“Staying current on firmware and drivers keeps devices stable and secure. With Intune, we stage updates, watch the rollout, and adjust before issues spread.”

Taqui Mohammad, senior service engineer, Microsoft Digital

Firmware and driver releases don’t land on a predictable schedule. Different vendors ship on different timelines, and a single environment can span hundreds of models.

Tracking this manually slows responses and leaves risk on the table. Intune centralizes the view so we can see what’s applicable, choose the right targets, and roll out updates with the same discipline we use for OS patches.

“Staying current on firmware and drivers keeps devices stable and secure,” says Taqui Mohammad, a senior service engineer in Microsoft Digital. “With Intune, we stage updates, watch the rollout, and adjust before issues spread.”

How we use it

  • Review applicability: Open the firmware and driver updates view to see available updates grouped by make and model.
  • Select a pilot: Target a small ring first (model, business unit, or region) and set short deadlines.
  • Plan time windows and restarts: Align deployments with maintenance windows and communicate expected reboots.
  • Monitor, then expand: Track success and failure signals, remediate issues, and scale to broader rings.

Configuration tips

  • Standardize categories: Separate firmware from drivers in policies so reporting and rollbacks are clean.
  • Use device tags consistently: Model, region, and business unit tags make scoping and expansion straightforward.
  • Define rollback steps: Document how to revert a driver or hold firmware for a specific model when needed.

Success checks

  • Compliance trend: Increased percentage of devices on the latest approved firmware and driver versions after each wave.
  • Incident correlation: Fewer support tickets related to device stability and peripherals on updated models.
  • Deployment reliability: Decreased failure rates as pilots catch issues before broad rollout.
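The compliance-trend check above reduces to a simple per-wave calculation. A hedged sketch, where the device records and field names are illustrative rather than Intune's actual reporting schema:

```python
# Illustrative compliance-trend check: share of devices running a
# firmware version from the approved set. Field names are assumptions.
def compliance_rate(devices, approved_versions):
    """Return the fraction of devices whose firmware is approved."""
    if not devices:
        return 0.0
    current = sum(1 for d in devices if d["firmware"] in approved_versions)
    return current / len(devices)

# Comparing this rate wave over wave shows whether each rollout moved the fleet.
wave1 = [{"firmware": "1.2"}, {"firmware": "1.1"}, {"firmware": "1.2"}]
rate = compliance_rate(wave1, approved_versions={"1.2"})  # 2 of 3 devices
```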

Best practices

  • Pair with risk signals: Prioritize models tied to active vulnerabilities or incident clusters before broad rollout.
  • Keep rings small and fast: Validate quickly, then scale; long pilots hide issues and delay benefits.
  • Document exceptions: If a model needs a temporary hold due to app or peripheral compatibility, record the reason and set a review date.
  • Verify outcomes: Confirm update levels on target devices and scan for regressions in support queues.

Notes

  • Expect uneven arrival patterns across vendors and models; a weekly review cadence helps catch new updates without creating noise.
  • Treat firmware and drivers as first‑class updates; include them in regular compliance reports and reviews so they get consistent attention.

A photo of Rodriguez.

“Autopatch Update Readiness catches and resolves common blockers before deployment begins. What used to require manual checks and troubleshooting is now handled upfront, giving us smoother updates and a far more reliable experience for our employees.”

Dave Rodriguez, principal product manager, Microsoft Digital

Tip 5: Speed updates with Windows Autopatch, Hotpatch, and Auto Remediation Update Readiness

We use Windows Autopatch and Hotpatch to reduce disruptions and keep our devices current, and we pair them with automated readiness and remediation so our changes land safely and quickly.

Autopatch handles orchestration for quality updates and feature releases. We define rings that reflect business risk and user impact, then let the service pace deployments as health signals arrive.

“Autopatch Update Readiness catches and resolves common blockers before deployment begins,” says Dave Rodriguez, a principal product manager in Microsoft Digital. “What used to require manual checks and troubleshooting is now handled upfront, giving us smoother updates and a far more reliable experience for our employees.”

Where Hotpatch is available, we apply security updates without a reboot, which cuts downtime and helps us move faster on critical fixes. An automated readiness layer checks prerequisites, fixes common blockers, and confirms that devices are ready before rollout.

How we use it

  • Enroll eligible devices in Autopatch and map them to the right scope so ownership, reporting, and break‑glass procedures are clear.
  • Build rings that reflect business priority and user profiles (e.g., VIP laptops, frontline kiosks, engineering workstations, and lab devices).
  • Enable Hotpatch on supported SKUs and confirm policy alignment so security updates apply without restarts where possible.
  • Run readiness checks that verify update agent health, policy state, storage and battery requirements, VPN reachability, and available maintenance windows.
  • Auto‑remediate common blockers such as stale update caches, missing prerequisites, paused services, or conflicting policies before a device enters the next ring.
  • Start with small cohorts, monitor early signals like install rate and post‑update stability, validate rollback paths, then expand the scope deliberately.

Operational checks

  • Ring coverage ensures eligible devices are actually assigned to a ring and not stranded outside the managed flow.
  • App and driver smoke tests validate business‑critical apps, kernel drivers, and peripherals on pilot cohorts before broad rollout.
  • Safeguard holds and known‑issue tracking watch for vendor or service flags that can pause or throttle a ring until a fix is available.
  • Rollback readiness confirms who owns the decision, what steps they follow, and how telemetry proves the rollback succeeded on affected devices.

Why it helps

  • Continuous movement shortens exposure windows because healthy rings advance without waiting for a fixed date.
  • Fewer interruptions improve user experience, as Hotpatch removes the need for restarts on supported devices.
  • Higher success rates come from automated readiness and remediation, removing predictable failures before deployment.

Best practices

  • Use consistent device tags so rings map cleanly to models, regions, and business units, which keeps targeting and reporting trustworthy.
  • Keep pilots small and fast to find issues quickly, then scale once success criteria are met and rollback is validated.
  • Communicate maintenance expectations in plain language so users know timing, restart behavior, and how to report problems.
  • Pace by risk rather than calendar, advancing rings when health metrics and support signal quality are within thresholds.
  • Review deployment dashboards daily during rollout, adjust ring size or cadence when error rates rise, and capture lessons learned for the next wave.
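Pacing by risk rather than calendar amounts to a gate on health metrics before a ring advances. A minimal sketch of that gate; the 98% install-success and 0.5% rollback thresholds are made-up illustrations, not Microsoft's actual gating values:

```python
# Advance a deployment ring only when health signals clear thresholds
# and no safeguard hold is active. Threshold values are illustrative.
def should_advance_ring(install_success_rate: float,
                        rollback_rate: float,
                        open_safeguard_holds: int) -> bool:
    if open_safeguard_holds > 0:       # a vendor/service flag pauses the ring
        return False
    return install_success_rate >= 0.98 and rollback_rate <= 0.005

# A healthy ring advances; a held or failing ring waits.
ok = should_advance_ring(0.991, 0.002, open_safeguard_holds=0)
```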

Note

  • Hotpatch availability depends on your Windows edition and configuration, so confirm support and prerequisites as part of your scoping work.

Tip 6: Keep third‑party apps current with Intune Enterprise App Management

We use Intune Enterprise App Management to keep third‑party apps current without constant packaging work.

A photo of Arias.

“Third-party apps fall out of date fast, so we’re standardizing how they’re updated. We do that with Enterprise App Management, which gives us reliable packages and keeps us moving at a steady cadence.”

Humberto Arias, senior product manager, Microsoft Digital

Third‑party software drives real risk: versions drift, installers change silently, and manual packaging pipelines break at the worst time.

With Enterprise App Management, we select from a managed catalog, set assignment and update rules, and let the service handle new versions as they ship. We spend our time on exceptions, not routine updates.

“Third-party apps fall out of date fast, so we’re standardizing how they’re updated,” says Humberto Arias, a senior product manager in Microsoft Digital. “We do that with Enterprise App Management, which gives us reliable packages and keeps us moving at a steady cadence.”

This approach also improves the user experience. Updates arrive in predictable windows and dependencies are handled in a timely manner. We avoid surprise prompts and failed installs that generate tickets. When we do need to pause or pin a version, we scope it cleanly and document the reason.

How we use it

  • Build a standard catalog that covers the common apps our users need and assign clear ownership for each title.
  • Configure update behavior to auto‑update so new versions flow as they ship.
  • Use rollout rings so pilots validate the installation success rate and app behavior before expanding to broad audiences.
  • Scope assignments with device tags such as model, region, or business unit to simplify targeting and reporting.
  • Monitor install and update status, investigate failures, and retry with adjusted timing or requirements when needed.
  • Capture exceptions for apps that need holds or custom steps and set review dates to revisit the decision.

Scenarios we run

  • Rapid response when a high‑risk CVE drops by prioritizing affected apps and moving them to the front of the update queue.
  • Version cleanup by removing outdated or duplicate installers so devices converge on a single approved release.
  • Conditional deployment for specialized teams by offering an app as available instead of required while still tracking adoption.

Why it helps

  • Less packaging toil because the catalog supplies current installers and metadata.
  • Faster patching for common apps because updates flow as they publish.
  • Better compliance reporting because versions and assignments are consistent across rings and groups.

Best practices

  • Keep an authoritative list of approved apps with owners, support notes, and rollback steps.
  • Coordinate maintenance windows for high‑impact apps so users can save work before enforced updates.
  • Require pilots for any app with add‑ins or drivers and validate workflows with real users before scaling.
  • Use uninstall assignments to remove unapproved or vulnerable software and block reinstallation where needed.
  • Document app‑level exceptions, including the rationale and a date to re‑evaluate.
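Documenting exceptions with a re-evaluation date is easy to enforce mechanically once the record has a consistent shape. A hedged sketch of such a record (the fields here are illustrative, not a Microsoft schema):

```python
from dataclasses import dataclass
from datetime import date

# Illustrative exception record: every version hold carries an owner,
# a rationale, and a date at which it must be re-evaluated.
@dataclass
class AppException:
    app: str
    pinned_version: str
    owner: str
    rationale: str
    review_date: date

def due_for_review(exceptions, today):
    """Return exceptions whose review date has arrived or passed."""
    return [e for e in exceptions if e.review_date <= today]

# Hypothetical hold on a fictional app pending add-in compatibility work.
holds = [AppException("ExampleApp", "7.4", "owner-alias",
                      "add-in compatibility", date(2026, 1, 15))]
overdue = due_for_review(holds, today=date(2026, 2, 1))
```

A periodic job over records like these keeps temporary holds from quietly becoming permanent policy.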

Notes

  • Some apps need pre-install checks or post-install steps, so include scripts or detection rules where required.
  • Track license terms and usage for commercial titles so updates do not outpace entitlements.

Tip 7: Close the loop with Defender Vulnerability Management and Intune security tasks

We use Microsoft Defender Vulnerability Management with Intune to turn exposure insights into targeted actions that close risk fast.

“The Intune Vulnerability Agent gives us a clear list of issues by device and owner. It shortens our path from finding a problem to fixing it.”

Harshitha Digumarthi, senior product manager, Microsoft Digital

Incidents don’t end when we spot a CVE. They end when devices are fixed and verified.

Vulnerability Management gives us an AI-powered live inventory of devices, software, and configurations, then connects that inventory to known threats. It shows which versions run where, highlights misconfigurations, and explains why a device is at risk. We see the problem and the cause, not just a risk score.

“The Intune Vulnerability Agent gives us a clear list of issues by device and owner,” says Harshitha Digumarthi, a senior product manager at Microsoft Digital. “It shortens our path from finding a problem to fixing it.”

It also ranks what to fix first. Factors like severity level, exploit availability, active attacks, and business context all feed into the priority list, so effort goes where it’s needed most. The service recommends specific actions such as updating, uninstalling, reconfiguring, or applying a policy as appropriate.
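Conceptually, that ranking combines the factors into a single priority. A toy sketch of the idea; the weights and fields are assumptions for illustration, and the service's actual model is considerably richer:

```python
# Toy prioritization: weight severity, exploit status, and business
# context into one score. The weights are illustrative only.
def priority_score(cvss: float, exploit_available: bool,
                   active_attacks: bool, business_critical: bool) -> float:
    score = cvss                      # base severity, 0-10
    if exploit_available:
        score += 2.0
    if active_attacks:
        score += 3.0
    if business_critical:
        score *= 1.5                  # business context amplifies urgency
    return score

# Hypothetical findings: an actively exploited CVE on a critical asset
# outranks a higher-volume but unexploited one.
findings = [
    {"cve": "CVE-A", "score": priority_score(9.8, True, True, True)},
    {"cve": "CVE-B", "score": priority_score(7.5, False, False, False)},
]
ranked = sorted(findings, key=lambda f: f["score"], reverse=True)
```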

From there, it pushes the work into our change tools. Tasks flow to Intune, Autopatch, and Enterprise App Management so the remediation is traceable. Exceptions are tracked, including data on owners, compensating controls, and review dates. Closure is verified by watching exposure decrease and confirming the fix landed with the intended devices.

How we use it

  • Review exposure by CVE, software, and device group to see where risk concentrates.
  • Prioritize based on business impact, internet exposure, and privilege level so high‑value targets move first.
  • Select the fix that fits the issue, including app updates through Enterprise App Management, OS and quality updates through Autopatch or Hotpatch (where supported), firmware and drivers through Intune update management, or policy changes for configuration weaknesses.
  • Target the right scope using tags for model, region, and business unit so remediation lands where it’s needed.
  • Set deadlines and user experience settings that balance urgency with productivity.
  • Validate closure by rechecking exposure, confirming install success, and watching support signals for regressions.

What we monitor

  • Exposure trends over time, to prove that remediation is reducing risk.
  • Top vulnerable apps and models, so effort tracks where it matters most.
  • Noncompliant devices and owners, so follow‑ups are direct and accountable.
  • Exceptions that need compensating controls, documented rationale, and a review date.

Why it helps

  • Fewer handoffs because the same team that sees risk can initiate remediation.
  • Measurable outcomes because exposure and deployment data live in connected systems.
  • Consistent execution because rings, tags, and approvals follow the same patterns as other updates.

Best practices

  • Keep device tags authoritative so targeting and reporting stay reliable.
  • Use pilots even for urgent fixes to catch compatibility issues before broad rollout.
  • Link vulnerability records to Intune assignments so audit and learning loops are clear.
  • Communicate clearly with affected users about timing, restarts, and how to report problems.
  • Document exceptions with owners and expiration dates so temporary holds don’t become permanent.

Notes

  • Not every fix is an update, and some issues require a configuration change or feature disablement with clear rollback steps.
  • Least‑privilege access and standard approvals keep remediation fast without expanding risk.

Key takeaways

Our approach to managing devices and updates has changed. We shifted device and update management from manual hunting and ad hoc remediation to a connected loop that starts with a question and ends with verified resolution—reducing investigation time and speeding recovery.

A few lessons stand out:

  • Make natural language work by grounding it in trust. Natural language becomes a force multiplier when insights are drawn from authoritative data and access is tightly scoped.
  • Keep pilots small, fast, and intentional. Focused pilots surface issues early without slowing momentum or introducing unnecessary risk.
  • Standardize signals to build confidence. Consistent tagging and clear ownership make reports, deployment rings, and rollbacks easier to interpret and trust.
  • Control exceptions with discipline. Every exception requires a written rationale and a review date, ensuring temporary holds don’t become permanent policy.
  • Close the loop—every time. Verification matters as much as detection. We confirm outcomes and capture learnings to continuously improve the next cycle.

What we’re improving next:

  • Strengthen question‑to‑action flows. We’re deepening prompts and playbooks that connect Security Copilot and Intune so operators can move from investigation to scoped change in a single flow.
  • Expand Hotpatch adoption and measurement. As support broadens, we’re increasing usage and measuring the impact on downtime, reliability, and user experience.
  • Grow app coverage with clearer stability rules. We’re expanding Enterprise App Management while enforcing stronger version‑pinning guidance where predictability is critical.
  • Automate deployment decisions. Additional automation around ring placement, readiness checks, and rollback triggers will allow deployments to adapt to live health signals.
  • Accelerate investigations with reusable telemetry. We’re developing richer telemetry patterns and reusable KQL in Visual Studio Code to reduce noise and speed repeat investigations.

It’s a continuing evolution of our awareness and capabilities in device management, and we’ll keep improving on it, one loop at a time.

The post Read our seven tips for shifting to a ‘cloud native’ device management strategy appeared first on Inside Track Blog.

Protecting AI conversations at Microsoft with Model Context Protocol security and governance http://approjects.co.za/?big=insidetrack/blog/protecting-ai-conversations-at-microsoft-with-model-context-protocol-security-and-governance/ Thu, 12 Feb 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22324

When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself.

Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents.

That ease of communication, however, comes with a responsibility: Protect the conversation.

Questions came up: Who’s allowed to speak? What can they say? And what should never leave the room?

Microsoft Digital, the company’s IT organization, and the Chief Information Security Officer (CISO) team, our internal security organization, are leaning on those questions to help us shape our strategy and tooling around MCP internally at Microsoft.

A photo of Kumar.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability. Even one misconfigured server can give the AI the keys to your data.”

Swetha Kumar, security assurance engineer, Microsoft CISO

Our approach is intentionally straightforward.

Start secure by default. Use trusted servers. Keep a living catalog so we always know which voices are in the room. Shape how agents communicate by requiring consent before making changes.

We minimize what’s shared outside our walls, watch for drift, and act when something looks off. Our goal is practical governance that lets builders move fast while keeping our data safe.

That’s the risk we design for, and it’s why our controls prioritize clear ownership, simple choices, and visible guardrails.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability,” says Swetha Kumar, a security assurance engineer in the Microsoft CISO organization. “Even one misconfigured server can give the AI the keys to your data.”

Understanding MCP and the need for security

MCP is a simple standard that lets AI systems “talk” to the right tools and data without custom integration work. Think of it like USB‑C for AI. Instead of building a new connection every time, teams plug into a common pattern. That standardization delivers speed and flexibility—but it also changes the security equation.

Before MCP, every integration was its own isolated conversation.

“Now, one pattern can unlock many systems,” Kumar says. “It’s a win and a risk. When AI can reach more systems with less effort, we must be precise about who’s allowed to speak, what they can say, and how much gets shared.”

We frame this as communications security.

The question isn’t just, “Is this API secure?” It’s “Is this a conversation we trust?” We want to know which servers are in the room, what actions they’re permitted to take, and how we’ll notice if something changes. At the same time, we keep the cognitive load low for builders. They choose from trusted options, see clear prompts before an agent makes edits, and move on. Simple choices lead to safer outcomes.

“MCP enables granular control over the tools and resources exposed to the Large Language Model,” Kumar says. “But that means the developer is responsible for configuring it correctly—which tools an agent can see, what actions a server can take, and what context is shared.”

This approach helps both sides.

Product teams get a consistent way to extend their agents while security teams get consistent places to add guardrails—at discovery, access, and throughout the flow of requests and responses. Everyone operates from the same playbook.

When we treat MCP this way, we protect the conversation without slowing it down. We know who’s speaking. We know what they can do. And we can prove it.

Assessing MCP security across four layers

Every MCP session creates a conversation graph. An agent discovers a server, ingests its tool descriptions, adds credentials and context, and starts sending requests. Each step—metadata, identity, content, and code—introduces potential risk.

We evaluate those risks across four layers so we can catch failures early, contain blast radius, and keep conversations in bounds.

However, the big picture is just as important as the details.

“We take a holistic view of MCP security: start with the ecosystem, then specify controls across the four layers,” Kumar says. “The layers make the work concrete, but the goal stays the same—unified governance, shared education, and faster detect-and-mitigate when a server is at risk.”

Applications and agents layer

This is where user intent meets execution. Agents parse prompts, discover tools, select actions, and request changes. MCP clients live here, deciding which servers to trust and when to ask for user consent.

  • What can go wrong
    • Tool poisoning or shadowing. A server advertises safe‑looking actions but performs something else.
    • Silent swaps. A tool’s metadata changes and the client keeps trusting an altered “voice.”
    • No sandbox. The agent can request edits or run code without strong guardrails.
  • What we watch for
    • Unexpected tool descriptions or capabilities at connect time.
    • Edit attempts on critical resources without explicit user consent.
    • Abnormal tool‑selection patterns across sessions.

AI platform layer

The AI platform layer includes the AI models and runtimes that interpret prompts and call tools, along with orchestration logic and safety features.

  • What can go wrong
    • Model supply‑chain drift. Unvetted models, unsafe updates, or compromised fine‑tunes change behavior.
    • Prompt injection via tool text. Descriptions and responses steer the model toward unsafe actions.
  • What we watch for
    • Model provenance and update cadence tied to agent behavior changes.
    • Signals of jailbreaks or instruction overrides in prompts and intermediate messages.
    • Output drift linked to specific tools or servers.

Data layer

This layer covers business data, files, and secrets the conversation can touch.

  • What can go wrong
    • Context oversharing. Session data, files, or secrets get packed into the model’s context and leak to a third‑party server.
    • Over‑scoped credentials. Long‑lived tokens, broad scopes, or wrong audience claims enable lateral movement.
  • What we watch for
    • Size and sensitivity of context passed to tools.
    • Token hygiene, including short lifetimes, least‑privilege scopes, and correct audience claims.
    • Data egress patterns that don’t match a tool’s declared purpose.

Infrastructure layer

The infrastructure layer includes compute, network, and runtime environments.

  • What can go wrong
    • Local servers with too much reach. Excessive access to environment variables, file systems, or system processes.
    • Cloud endpoints without a gateway. No TLS enforcement, rate limiting, or centralized logging.
    • Open egress. Servers call out to the internet where they shouldn’t.
  • What we watch for
    • All remote MCP servers registered behind the API gateway.
    • Runtime signals, such as authentication failures, burst traffic, or unusual geographies.
    • Network policies that restrict outbound calls to certain targets.

Across all four layers, the throughline is AI communications security. We decide who can speak and verify what was said—and keep listening for change.

Establishing a secure-by-default strategy

We start by closing the front door. We recommend that every remote MCP server sit behind our API gateway, giving us a single place to authenticate, authorize, rate‑limit, and log. There are no direct calls and no blind spots.

A photo of Enjeti

“Everything we do starts with securing the MCP server by default and that begins by registering it in API Center for easier discovery. We rely solely on vetted and attested MCP servers, ensuring every call comes from a trusted footprint.”

Prathiba Enjeti, principal PM manager, Microsoft CISO

Next, we decide who gets a voice.

Teams choose from a vetted list of MCP servers. If someone connects to an unapproved endpoint, they receive a friendly nudge and a clear path to register it. No shaming—just fast correction and a better inventory the next time around.

Identity comes next. Servers expect short‑lived, least‑privilege tokens with the right scopes and audience. Admin paths require strong authentication, and where possible, we use proof‑of‑possession to bind tokens to the client and reduce replay risk. Secrets don’t live in code, keys rotate, and audit trails are in place.

“Everything we do starts with making the MCP server secure by default and that begins by registering it in API Center for easier discovery,” says Prathiba Enjeti, a principal product manager in the Microsoft CISO organization. “We only use vetted and attested MCP servers. That’s how we keep the conversation safe without slowing it down.”

On the client side, we slow agents at the right moments. Agents can’t touch high‑risk tools without explicit consent. Tool descriptions are verified on connection and compared to approved contracts. If a tool’s “voice” drifts, we block the call.
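The drift check described here can be sketched in a few lines. This is an illustrative Python sketch, not Microsoft's implementation: it assumes a tool's metadata arrives as a JSON-serializable manifest, pins a SHA-256 fingerprint at approval time, and compares that fingerprint on every connection.

```python
import hashlib
import json

def manifest_fingerprint(manifest: dict) -> str:
    """Hash a canonical JSON form of the tool manifest (name, description,
    input schema) so that any metadata change is immediately visible."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_for_drift(current: dict, pinned_fingerprint: str) -> bool:
    """Return True if the tool's 'voice' still matches what was approved."""
    return manifest_fingerprint(current) == pinned_fingerprint

# Hypothetical tool manifest, for illustration only
approved = {
    "name": "get_case_history",
    "description": "Read-only lookup of support case history",
    "input_schema": {"case_id": "string"},
}
pin = manifest_fingerprint(approved)

# A silent description swap changes the fingerprint, so the call is blocked
tampered = dict(approved, description="Also export all cases to an external host")
assert check_for_drift(approved, pin)
assert not check_for_drift(tampered, pin)
```

Because the fingerprint covers the whole manifest, even a one-character change to a description triggers a block and a re-review.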

We also minimize what’s shared.

Context is trimmed to what the task requires. Sensitive data isn’t included by default, and third‑party servers get only what they need—not the whole transcript. Output filters and prompt shields sit alongside the model to prevent risky inputs from becoming risky actions.
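One simple way to implement that trimming is a per-tool field allowlist: the tool's approved contract declares which fields it may receive, and everything else stays behind. The sketch below is illustrative; the session shape and field names are hypothetical.

```python
def minimize_context(context: dict, allowed_fields: set) -> dict:
    """Pass only the fields a tool's approved contract declares;
    the transcript and secrets never leave the session."""
    return {k: v for k, v in context.items() if k in allowed_fields}

# Hypothetical session state
session = {
    "case_id": "12345",
    "user_query": "status of my ticket",
    "full_transcript": "...entire conversation...",
    "auth_token": "secret-value",
}

# Hypothetical contract: this tool only needs the case ID and the query
sent = minimize_context(session, {"case_id", "user_query"})
assert "full_transcript" not in sent
assert "auth_token" not in sent
```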

Isolation completes the design. Local servers run in containers with tight file and network permissions. Hosted servers allow only the outbound calls they need, and inbound traffic flows through the gateway, with TLS and logging enforced.

Simple rules with visible guardrails.

“We only use vetted MCP servers,” Enjeti says. “That’s how we keep the conversation safe without slowing it down.”

How we run MCP at scale: architecture, vetting, and inventory

We keep MCP safe by making three things intentionally boring: architecture, vetting, and inventory. One defined path. One vetting flow. One living catalog.

Architecture

We recommend remote MCP servers sit behind an API gateway, giving us a single place to authenticate, authorize, validate, rate‑limit, and log. Transport Layer Security (TLS) is required by default, and for sensitive endpoints, we can require mutual TLS. Outbound egress is pinned to approved destinations using private endpoints and firewall rules, so servers can’t “call anywhere.” Runtime protection continuously watches for credential abuse, injection patterns, burst traffic, and odd geographies.

Identity is established up front. We issue short‑lived, least‑privilege tokens with the correct audience and scopes, and admin paths require strong authentication. Where supported, tokens are bound to the client to reduce replay risk. Services use managed identities or signed credentials; secrets don’t live in code, and keys rotate on schedule.
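As a rough illustration of those token checks (not the actual gateway code), a validator might reject tokens whose audience, scopes, or lifetime fall outside policy. The claim names follow common JWT conventions (`aud`, `scp`, `iat`, `exp`); the audience and scope values are hypothetical.

```python
import time

def validate_claims(claims: dict, expected_audience: str,
                    required_scopes: set, max_lifetime_s: int = 3600) -> bool:
    """Reject tokens with the wrong audience, missing scopes, an expired
    window, or a lifetime longer than policy allows (not short-lived)."""
    now = time.time()
    if claims.get("aud") != expected_audience:
        return False
    granted = set(claims.get("scp", "").split())
    if not required_scopes.issubset(granted):
        return False
    exp = claims.get("exp", 0)
    iat = claims.get("iat", now)
    if exp <= now:                  # already expired
        return False
    if exp - iat > max_lifetime_s:  # issued for too long a window
        return False
    return True

# Hypothetical decoded token for a gateway-protected MCP server
good = {"aud": "api://mcp-gateway", "scp": "cases.read",
        "iat": time.time(), "exp": time.time() + 600}
assert validate_claims(good, "api://mcp-gateway", {"cases.read"})
assert not validate_claims(good, "api://some-other-api", {"cases.read"})
```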

Model‑side safety travels with every conversation. Content safety and prompt shields help models ignore risky inputs, while orchestration enforces a per‑tool allowlist, so an agent can’t call tools that aren’t in policy—even if the model suggests it. We also track model versions, allowing behavior changes to be correlated with updates.
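A per-tool allowlist like the one described can be enforced with a small policy check in the orchestration layer. The tool names below are hypothetical; the point is that the check runs regardless of what the model suggests.

```python
# Hypothetical per-agent policy: only these tools are approved
ALLOWED_TOOLS = {"get_case_history", "search_kb"}

def authorize_tool_call(tool_name: str) -> None:
    """Enforced in orchestration, so an agent can't call a tool that
    isn't in policy, even if the model proposes it."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not on allowlist: {tool_name}")

authorize_tool_call("search_kb")  # in policy, passes silently

blocked = False
try:
    authorize_tool_call("export_all_data")  # model suggestion, not in policy
except PermissionError:
    blocked = True
assert blocked
```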

Clients enforce consent at the edge. “Ask before edits” is enabled by default for write, delete, and configuration changes. When an agent connects, it verifies tool descriptions against the approved contract.

Observability ties it all together. We’re working toward logging tool calls, resource access, and authorization decisions end‑to‑end with correlation IDs. Detections flag abnormal tool selection, unexpected data egress, or edits without consent. Every server has an owner, a contract, and an approval record, and metadata changes automatically trigger re‑review. Kill switches live at both the client and the gateway when we need them.
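End-to-end correlation can be sketched as a shared ID threaded through every log record from client to gateway to server. This is an illustrative pattern, not our production logging schema; the stage and event names are made up for the example.

```python
import logging
import uuid

def new_correlation_id() -> str:
    """Mint one ID per conversation; every hop logs against it."""
    return uuid.uuid4().hex

def log_tool_call(logger, correlation_id: str, stage: str, event: str, **fields):
    """One consistent schema at every hop, keyed by the same correlation
    ID so a whole conversation can be stitched back together."""
    logger.info("cid=%s stage=%s event=%s fields=%s",
                correlation_id, stage, event, fields)

logger = logging.getLogger("mcp.audit")
cid = new_correlation_id()
log_tool_call(logger, cid, "client", "tool_selected", tool="get_case_history")
log_tool_call(logger, cid, "gateway", "authz_decision", allowed=True)
log_tool_call(logger, cid, "server", "resource_access", resource="cases/12345")
```

With the same `cid` on all three records, a detection that fires at the server can walk back to the originating prompt in one query.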

Vetting

We don’t “connect and hope.”

Before any MCP server can speak in our environment, it earns trust. Owners declare what the server does (tools and actions), what it touches (data categories and exports), how callers authenticate (scopes and audience), and where it runs (runtime and on‑call ownership).

We start with static checks: manifests must match the contract, side‑effecting actions must be consent‑gated, tokens must be short‑lived and properly scoped. An SBOM (Software Bill of Materials) must be present, dependencies must be current, and no credentials can be embedded in code.
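Two of those static checks can be expressed as a simple linter over the submitted manifest. The manifest shape and the credential markers below are assumptions for illustration, not our actual vetting rules.

```python
import json

def vet_manifest(manifest: dict) -> list:
    """Illustrative static checks: side-effecting tools must be
    consent-gated, and no credentials may ship inside the manifest."""
    findings = []
    for tool in manifest.get("tools", []):
        if tool.get("side_effects") and not tool.get("requires_consent"):
            findings.append(f"{tool['name']}: side-effecting but not consent-gated")
    blob = json.dumps(manifest).lower()
    if any(marker in blob for marker in ("password", "api_key")):
        findings.append("possible embedded credential in manifest")
    return findings

# Hypothetical manifest a team might submit for vetting
manifest = {"tools": [
    {"name": "read_case", "side_effects": False},
    {"name": "close_case", "side_effects": True, "requires_consent": False},
]}
findings = vet_manifest(manifest)
assert findings == ["close_case: side-effecting but not consent-gated"]
```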

Then we test like a client would. We snapshot tool metadata on connect and compare it to the approved contract, probe for prompt‑injection and tool‑poisoning, and verify that “ask before edits” triggers for destructive actions.

We also confirm context minimization, validate that egress is pinned to approved hosts, and test resilience under load, including health checks, retry behavior, and isolation using containers with least‑privilege file and network access. Servers are published only when security, privacy, and responsible AI reviews are complete, runbooks and on‑call are in place, and the registry entry is created and pinned.

Inventory

A photo of Janardhanan

“Inventory is the foundation—if we miss a server, we miss the conversation. Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system.”

Priya Janardhanan, principal security assurance engineering manager, Microsoft CISO

You can’t govern what you can’t see, and MCP shows up in more places than a single system of record. To solve that, we’re building the map from signals and stitching them into one catalog.

“Inventory is the foundation—if we miss a server, we miss the conversation,” says Priya Janardhanan, a principal security assurance engineering manager at Microsoft CISO Operations. “Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system. Without a complete inventory, we lose visibility into critical operations, risk exposing sensitive data, and undermine our ability to ensure compliance and security.”

In our goal state, endpoint telemetry catches developer‑run servers on laptops and workstations. Repos and CI pipelines reveal intent before anything ships. IDEs (Integrated Development Environments) surface local extensions and configured endpoints. The gateway and our registries anchor what’s approved for business data, while low‑code environments tell us which connectors are in use and where they point.

We normalize and correlate those signals with stable IDs for servers, tools, and owners. Ownership is proven through repositories, gateway services, and environment administrators—on‑call contacts included. Exposure is scored based on data touches, scopes requested, egress rules, and change history, so high‑risk items rise to the top of the queue.
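An exposure score of that kind might be sketched as a weighted blend of the signals above. The weights, caps, and field names here are illustrative assumptions, not our actual scoring formula.

```python
def exposure_score(server: dict) -> int:
    """Blend data touches, scope breadth, egress posture, and change
    history into a 0-100 score so high-risk items top the review queue."""
    score = 0
    score += min(server.get("sensitive_data_touches", 0) * 15, 40)
    score += min(len(server.get("scopes", [])) * 5, 20)
    score += 0 if server.get("egress_pinned") else 25
    score += min(server.get("changes_since_review", 0) * 5, 15)
    return min(score, 100)

# Hypothetical registry entry: touches sensitive data, broad scopes,
# open egress, one unreviewed change
risky = {"sensitive_data_touches": 2,
         "scopes": ["mail.read", "files.readwrite", "sites.manage"],
         "egress_pinned": False,
         "changes_since_review": 1}
assert exposure_score(risky) == 75
```

Capping each factor keeps any single signal from dominating, so a server has to be risky on several axes before it reaches the top of the queue.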

Freshness is tracked with last‑seen timestamps, and stale entries are retired over time. Builders can discover and reuse approved servers; reviewers can see what changed since the last approval, and admins get instant visibility into coverage and hotspots.

We’re working toward automated identification and notification for unknown servers. In the ideal state, a registration stub is created when we detect an unknown server on an endpoint. Then, the likely owner is notified, and direct calls are blocked until the server is vetted through an automated process. If tool metadata changes after approval, high-risk actions are paused and routed for re-review, then auto-resumed once approved.

“It all revolves around inventory as the foundation,” Janardhanan says. “If we miss a server, we miss the conversation.”

A photo of Hasan

“Agent 365 tooling servers will allow centralized governance for IT admins. That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy.”

Aisha Hasan, principal product manager, Microsoft Digital

Architecture gives us stable choke points. Vetting keeps weak servers out. Inventory keeps our map current. It’s a single pattern for builders and a unified playbook for security.

Governing agents in low‑code and pro-code scenarios

Makers move fast—that’s the point. A Customer Support team needed a Copilot action to pull case history, so they opened Copilot Studio, selected an approved MCP connector, and shipped a first version before lunch. No tickets. No detours. Governance showed up in the flow, not as a blocker.

“Agent 365 tooling servers will allow centralized governance for IT admins,” says Aisha Hasan, a principal product manager at Microsoft Digital. “That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy. We’re moving toward that consolidation so innovation continues while governance gets simpler and more consistent.”

We place guardrails where makers already work. In Copilot Studio, trusted and verified first-party MCP servers are allowed in developer environments to accelerate innovation and encourage experimentation. Riskier or more complex MCP integration is available in Copilot Studio custom environments and other pro-code tools such as the Microsoft 365 Agent Toolkit in VS Code and Microsoft Foundry, but only with clear checks: service ownership, security and privacy review, responsible AI assessment, and consent gating for high‑impact actions.

The allowlist is our north star.

Approved MCP servers and connectors live in one catalog with documented owners, scopes, and data boundaries. Makers choose from that shelf. If an MCP server uses an unverified tool, we enforce endpoint filtering. If there is misconfiguration, we open a task for the owner and help them build securely.

Permissions stay tight without adding cognitive load. Tokens are short‑lived and scoped to the task. Context is trimmed so only the necessary fields flow to the tool. Third‑party servers never get the full transcript. If a connector’s capabilities change, the runtime compares the new “voice” to what we approved. MCP clients should pause risky actions, notify the owner, and resume automatically once reviewed.

With agent inventory in Power Platform Admin Center and registry in Agent 365, admins get a clean view of which connectors are active, who owns them, what data they touch, and how often they’re called. Organization policies such as DLP and MIP can be enforced in a unified way, with a re‑review when capabilities change. The goal is simple: let builders innovate confidently while maintaining security and compliance.

“MCP servers are powerful AI tools that enable agents to seamlessly integrate and interact with enterprise data and transform business workflows,” Hasan says. “That means the same enterprise data and governance principles are applied equally to MCP servers and other connectors. A robust inventory, an agile policy framework, and an automated workflow for enforcement are cornerstones for successfully governing agents at scale.”

Securing MCP at scale: Operating, monitoring, and enabling

Our work doesn’t stop at go‑live. Once an MCP server is in the catalog, we operate the conversation like a service: measurable, observable, and responsive. Identity and policy guard the front door, but runtime is where we prove the controls work without slowing anyone down.

In practice, operating MCP at scale comes down to four motions:

Observe every tool call end to end. We make the flow observable. Every tool call carries a correlation ID from client to gateway to server and back. Prompts, tool selections, authorization decisions, and resource access should be logged with consistent schemas. Golden signals—latency, errors, saturation—sit alongside safety signals like unexpected egress or edits without consent. Owners and security teams see the same dashboards.

Detect drift and abnormal behavior early. Detection lives close to the work. We flag abnormal tool patterns, spikes in write operations, burst traffic from new geographies, and context sizes that don’t fit a task. We continuously compare a tool’s “voice” at connect time to the approved version; drift automatically pauses risky actions and pings the owner. Cost controls double as guardrails, using rate limits and budgets to cap blast radius and surface runaway loops early.
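Rate limits that double as guardrails are commonly implemented as token buckets: a runaway agent loop drains the bucket and gets throttled instead of hammering a server. The sketch below is a minimal illustration, not our gateway's implementation.

```python
import time

class TokenBucket:
    """Minimal token bucket: allows short bursts up to `burst`,
    then throttles callers to `rate_per_s` sustained requests."""

    def __init__(self, rate_per_s: float, burst: int):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_s=5, burst=3)
# The first few calls pass on the initial burst; sustained loops
# are throttled until tokens refill
results = [bucket.allow() for _ in range(5)]
```

The same primitive caps blast radius per client, per server, or per budget line, and a sudden string of refusals is itself a useful detection signal.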

Respond with precision instead of blunt shutdowns. Response is graded, not binary. We can block destructive actions and allow reads, or throttle a noisy client without killing the session. Kill switches exist at both the client and the gateway. Playbooks are pre‑approved and integrated into the consoles owners already use, and dry runs are part of muscle memory, so the first switch flip doesn’t happen during an incident.

We treat model behavior as part of operations. Content safety and prompt shields run in production, not just in tests. We pin model versions and watch for output drift after updates. If a model starts suggesting tools out of character, the owner gets paged with the exact prompts and calls that triggered it.

Telemetry respects privacy. Logs avoid sensitive payloads by default and mask what must pass through for forensics. Access is role‑based, retention follows policy, and audit readiness is designed in on day one.

Enable builders through templates, education, and reuse. Adoption and education run in parallel. Builders get templates that enable best practices: sample manifests with consent gates, CI checks for token scope and SBOMs, and gateway stubs with sane defaults. A “ten‑minute preflight” runs locally to verify contracts, test consent flows, and check egress before a pull request is opened. IDE lint rules catch common issues early.

“This is how we operate MCP at scale,” says Janardhanan. “Observe the conversation, detect drift early, respond with precision, and teach habits that make the right path the easy path. We run it like a product because that’s what it is.”

Measuring results and moving forward

This program has changed how we build. Reviews move faster because every server follows the same path. Drift is caught early because clients compare a tool’s “voice” on connection. Shadow servers decline as inventory fills in from endpoint, repo, IDE, and gateway signals. Reuse increases because teams can discover trusted servers instead of creating new ones. Incidents resolve faster with correlation IDs across the conversation and kill switches at both the client and the gateway.

It’s also changed how our admins work. One gateway means one perimeter to manage. Policies land once and apply everywhere. Owners see the same telemetry security sees, so fixes happen where the work happens.

Going forward, we’re focused on more consolidation and automation. We’re moving toward a single pane for MCP governance—approve, monitor, and pause from one place. Policy-as-code will keep allowlists, consent rules, and egress boundaries versioned and testable in CI.

Our preflight checks will get smarter, with stronger injection tests, automatic egress validation, and environment‑aware templates. We’ll expand consent patterns so high‑impact actions remain explicit and auditable, even across multi‑tool chains. And we’ll keep shrinking re‑review time, so drift is measured in minutes, not days.

AI conversations are now part of how we build every day. MCP standardizes how agents talk to tools and data. Secure‑by‑default architecture, rigorous vetting, and a living inventory ensure the right voices stay in the room, only what’s needed is shared, and drift is caught early.

The result is simple: teams ship faster with fewer surprises, and governance stays visible without getting in the way. We’ll keep tightening the loop, so saying yes remains both easy and safe.

Key takeaways

If you’re implementing MCP security, consider these key actions to ensure secure, efficient adoption in your organization:

  • Build governance into the maker flow. Embed security, consent, and responsible AI checks directly where teams build—so protection shows up by default, not as an afterthought.
  • Maintain a single allowlist and catalog. Centralize approved MCP servers and connectors with clear ownership, scope, and data boundaries.
  • Enforce scoped, short-lived permissions by default. Automatically limit token scope and duration to minimize risk and exposure.
  • Monitor continuously and detect drift early. Observe activity, flag deviations, and pause risky actions until reviewed and approved by owners.
  • Automate incident response and controls. Leverage pre-approved playbooks, kill switches, and rate limits for fast, precise action.
  • Design for privacy and auditability from day one. Mask sensitive data, restrict log access by role, and ensure audit readiness.
  • Promote education and reuse. Provide templates, training, and feedback loops to encourage safe development and adoption of trusted servers.

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
Picking the right Copilot for the job: Tips from our experience at Microsoft http://approjects.co.za/?big=insidetrack/blog/picking-the-right-copilot-for-the-job-tips-from-our-experience-at-microsoft/ Thu, 12 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22334 Since its launch in 2023, Microsoft 365 Copilot has evolved from a single AI assistant into a full squad of powerful AI sidekicks, including chat, search, agents and many more. And with the introduction of agents, Copilot can now also act on your behalf—agents extend the capabilities of Microsoft 365 Copilot beyond conversation, giving you […]

The post Picking the right Copilot for the job: Tips from our experience at Microsoft appeared first on Inside Track Blog.

]]>
Since its launch in 2023, Microsoft 365 Copilot has evolved from a single AI assistant into a full squad of powerful AI sidekicks, including chat, search, agents and many more.

And with the introduction of agents, Copilot can now also act on your behalf—agents extend the capabilities of Microsoft 365 Copilot beyond conversation, giving you the power to elevate how you work, create, and make decisions.

 A photo of Etchells.

“Copilot agents free you from the manual work, so you can concentrate on big-picture thinking.”

Eva Etchells, senior content program manager, Microsoft Digital

The challenge today isn’t whether to use an agent or Copilot module to help you accomplish more—it’s knowing which one to use, and when to use it. Making the smart choice can help you produce amazing work while streamlining workflows and reducing friction.

“Copilot agents free you from the manual work, so you can concentrate on big-picture thinking,” says Eva Etchells, a senior content program manager in Microsoft Digital, the company’s IT organization.

Copilot thinks; agents ‘do’

Agents, simply explained, are purpose-built tools designed to automate tasks, handle repeatable work, and save time by improving efficiency. You can even create your own agents to match the way you work.

A photo of Burnett.

“I use several agents to simplify repetitive daily tasks. They help me stay organized, quickly research what I need, and analyze information so I can focus my energy on the work that requires the most strategic thinking.”

Opeoluwa Burnett, senior product manager, Microsoft Digital

If Copilot and its modules help us think, create, and explore, then think of its agents as entities that execute and automate tasks.

Choosing the right agent or module is like selecting the right tool for a job: You want the one that fits the task at hand and helps you get your work done more quickly with less effort.

“Now I can quickly ask an agent to create a one-page vision document in Word because the agent does the heavy lifting,” says Opeoluwa Burnett, a senior product manager in Microsoft Digital. “I use several agents to simplify repetitive daily tasks. They help me stay organized, quickly research what I need, and analyze information so I can focus my energy on the work that requires the most strategic thinking.”

Read about how Opeoluwa Burnett uses Copilot

A day in the life of a Microsoft employee using Copilot

Facing agent adoption challenges

At Microsoft, we’re still navigating a few common challenges related to agent adoption:

  • They have access to agents and the ability to create them but often feel overwhelmed or unsure where to begin.
  • For those still learning Copilot, agents can feel like an additional hurdle.
  • Others who’ve embraced “regular” Copilot may not realize that agents exist or know how to find them.

Our use of Copilot and AI agents continues to evolve. As Customer Zero within Microsoft Digital, we want to share how we’re using agents today, as well as what we’ve learned along the way.

Here’s a rundown of how our employees are using Copilot tools and agents to accomplish tasks faster and more efficiently:

Where to begin: Copilot Chat

Chat is often the starting point—the launchpad where you provide a prompt and kick off your interaction with Copilot.

Screenshot of the Copilot Chat launchpad.
The Copilot Chat module is where you can begin your interactions with Copilot.

Here you can search for general answers, explore complex queries, get quick results, or discover a Copilot agent that can help you complete your task.

Photo of Malekar.

“Copilot is a productivity booster. I can ask it to help me brainstorm and structure a use case and the results are pleasantly surprising, especially as the Copilot ecosystem continues to evolve and we fast-track new capacities.”

Swapna Malekar, principal product manager, Microsoft Digital

When Swapna Malekar needed to create a presentation on a short turnaround, she turned to Copilot. Malekar, a principal product manager in Microsoft Digital, shared a screenshot of the slide she was planning to present, and the tool generated a presentation-ready script she could read aloud in her meeting later that day.

Now, she incorporates Copilot into her everyday workflows.

“Copilot is a productivity booster,” Malekar says. “I can ask it to help me brainstorm and structure a use case and the results are pleasantly surprising, especially as the Copilot ecosystem continues to evolve and we fast-track new capacities.”

Seamless workflows with Copilot applications

Because Copilot is built into Microsoft 365 apps like Word, Excel, PowerPoint, OneNote, OneDrive, and Teams, you can navigate seamlessly between tools without losing context. Your Copilot Chat history follows you, no matter where you start.

That flexibility means you can work the way you naturally do. You might start a Copilot Chat in Word while drafting a document, then switch to Excel or Teams and continue the same conversation without needing to reset or start over.

There’s no single “right” way to use Copilot. Everyone approaches work differently, but Copilot meets you where you are, whether in the browser or in your go-to app, while helping you reach the same solution.

“Choosing the right Copilot for the job is like being in one of those ‘Choose Your Own Adventure’ books,” Etchells says. “You pick the path you want to go, and you set off on your journey.”

Speed and efficiency: Copilot search

Copilot search shares the same underlying technology as chat. The purpose of the search function in Copilot is to process requests and retrieve results. The difference between the two lies in how the results are delivered.

Chat is designed for more explorational interactions, while search prioritizes fast, targeted access to content and links.

“The value prop for Copilot search is simple: Get what you’re looking for faster.”

Vasanthi Vangipurapu, senior product manager, Microsoft Digital

Search administrators also have access to the admin portal, where they can customize features such as bookmarks that know what employees are usually looking for when they search common terms.

“The value prop for Copilot search is simple: Get what you’re looking for faster,” says Vasanthi Vangipurapu, senior product manager in Microsoft Digital. “When I need specific answers quickly, I use Copilot search. If I want to explore further, I love that it redirects me to Copilot Chat to continue my conversation there.”

Any employee

What Copilot search can do: Find a shared file when you have limited details.

Sample prompt: “Find the file shared with me by (name) within the last six months. I don’t remember where it was shared. Search across Outlook, Teams, OneDrive, and SharePoint.”

Data compliance manager

What Copilot search can do: Understand what data Copilot can access, how it’s processed, and how residency and retention of data are handled.

Sample prompt: “Explain what data Microsoft 365 Copilot can access within my organization, including how it respects existing permissions and role-based access controls. Describe how data residency is handled for Copilot processing and outline what logging, retention, and audit trail information is available to administrators.”

Technical writer

What Copilot search can do: Generate a cloud architecture diagram or flow chart to support documentation.

Sample prompt: “Create a vector-style cloud architecture diagram showing users, load balancers, web servers, application tier, and cloud database. Use minimalistic icons, blue/gray palette, simple arrows, and white background.”

Visuals at your fingertips

Copilot Create is a design generator that helps you produce visual assets such as images, posters, infographics, banners, branding, and video. It’s an especially useful tool for people who aren’t professional designers, but who need to create visuals quickly as part of their workflow.

The Create module also supports rapid iteration, making it easy to refine results without starting from scratch. You can adjust layout, tone, or visual direction through simple prompts. This lets you explore multiple approaches and keep creative momentum without getting bogged down in detail.

Screenshot of the Create module landing page in Copilot.
You can use the Copilot Create module to generate a variety of compelling visual assets.

You can give Create a prompt, even a rough one. It often produces unexpected visual directions you may not have considered on your own, a bit like an enthusiastic creative partner tossing out new ideas and helping you discover fresh variations.

While you can also use Copilot Chat to generate visual assets, Copilot Create offers a consolidated experience specifically built for visual design.

Here are prompts you can try in the Copilot Create module:

Marketing manager

What Copilot Create can do: Turn a PowerPoint deck into a branded marketing video for a product launch.

Sample prompt: “Turn this PowerPoint deck into a high-quality, 45- to 60-second marketing video designed for prospects and customers.

Tone: modern, energetic, and brand-aligned

Include: clear voiceover script, punchy on-screen text, smooth transitions

Highlight: key value props and visuals from each slide

Add: subtle animation and upbeat music

Output: 1080p MP4 video + options for a shorter cut and social formats”

HR manager

What Copilot Create can do: Create an employee-friendly infographic from a policy document.

Sample prompt: “Turn this HR policy document into an engaging infographic.

Audience: all employees

Style: simple, friendly, and easy to scan

Include: key rules, do/don’t lists, and any required steps

Use: icons, color coding, and clean layout

Output: a single-page PNG plus a version sized for intranet posting”

Analysis and insights: Copilot Researcher agent

The Copilot Researcher agent acts as your supercharged research partner, providing deep analysis and generating detailed reports. You can use Researcher to quantify the expected impact of a new feature, gather usage data, analyze audience insights, and project outcomes based on target user logistics.

Here are some prompts you can use to get started with Copilot Researcher:

Product manager

What Researcher agent can do: Quarterly product feature planning

Sample prompt: “Review emails, files, and meeting transcripts, to surface insights about where employees experience friction in daily workflows.”

Business analyst

What Researcher agent can do: Documentation optimization and process improvement

Sample prompt: “Analyze the following documentation and generate detailed, actionable ideas to improve clarity, structure, usefulness, and alignment with business goals.”

Engineer

What Researcher agent can do: Improve upon code

Sample prompt: “I want to improve the following code for a software feature (insert detailed description, including the software name, programming language, targeted platforms, and what it does). Help me come up with ways to make the code better using best practices. Generate clean, optimized code and explain the rationale behind each decision.”

Streamlined operations: Employee Self-Service Agent

The Employee Self-Service Agent helps employees quickly find answers to their questions relating to human resources, IT support, and campus services topics.

This tool now serves as a centralized entry point for HR, IT, and facilities support at Microsoft. The agent removes the guesswork, delays, and frustration that our employees used to experience when searching across multiple systems, websites, and knowledge bases for answers to their employment-related queries.

“Our employees rely on AI tools like Copilot to help get their work done,” says Becky West, a principal group product manager for Microsoft Digital. “And the same is now true for resolving an issue related to facilities and other high-priority employee self-service topics.”

Here are some prompts that you can use to get help from your Employee Self-Service Agent:

Intelligent collections: Copilot Notebooks module

The Copilot Notebooks module is an interactive workspace that combines the flexibility of a notebook with the intelligence of AI. Copilot makes it easy to add your chats to a Copilot Notebook, where it can review all included content, summarize information, and answer questions about it—making it easier to navigate large collections of files, presentations, and notes. Notebooks can also be shared, making them useful for teams collaborating on a common goal.

For perspective, Copilot Notebooks is designed for project-based work: you gather files, references, and notes, then have Copilot reason over them collectively. Copilot in OneNote, by contrast, enhances notetaking and content creation rather than project-specific reference work.

Some of our employees use Copilot Notebooks to prepare for their performance reviews. Instead of scrambling to gather six months of their work, feedback, and other documentation, they can easily assemble everything in one place using the Notebooks module.

“I can take all the campaigns I’ve worked on, the metrics, and any praise I’ve received, drop it all into a Word doc and add it to my Review notebook,” Etchells says. “Then I ask Copilot to tell me how I contributed to each campaign. It saves me a ton of time.”

Here’s an example of how you can use the Copilot Notebooks module to analyze the impact you’ve had as a seller over a certain window of time:

I’m a seller and I want to summarize my impact over the last quarter

What content Notebooks can hold

Pipeline health analyses, accounts prioritized based on intent signals, deal outcomes correlated with activities (calls, emails, meetings), QBR visuals

Sample prompt to create Notebook

“I’m a sales executive. Build me a Copilot notebook that:

Ingests CRM CSV/XLSX, validates schema, and summarizes columns.

Computes KPIs (pipeline value, #opps, win rate, avg cycle) and visuals: stage value bar, conversion funnel, win-rate heatmap (industry × product).

Flags stale/stuck opportunities; creates a transparent 0-100 risk score with explainable factors; outputs Top 20 risky + Top 20 high-potential deals.

Builds a simple forecast (optimistic/likely/conservative) from historical stage-to-win rates and charts forecast vs. target.

Surfaces segment/account insights; exports 2 CSVs (prioritized exec‑outreach + risk register).

Generates a 1-page executive summary, 5-7 QBR bullets, and a 3-sentence email for the field.”
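To make the KPI portion of the prompt above concrete, here is a minimal sketch of the same calculations (open pipeline value, opportunity count, win rate, average cycle length) in plain Python over a CRM-style CSV. The column names (`stage`, `amount`, `opened`, `closed`) and the sample data are illustrative assumptions, not a real CRM schema; in practice, Copilot Notebooks reasons over the files you add rather than running code like this.

```python
import csv
from datetime import date
from io import StringIO

# Illustrative CRM export; column names are assumptions, not a real schema.
SAMPLE = """opportunity,stage,amount,opened,closed
A,won,100000,2025-01-10,2025-03-01
B,lost,50000,2025-01-20,2025-02-15
C,open,75000,2025-02-05,
"""

def parse(d):
    """Parse an ISO date string; empty string means still open."""
    return date.fromisoformat(d) if d else None

def kpis(csv_text):
    rows = list(csv.DictReader(StringIO(csv_text)))
    closed = [r for r in rows if r["stage"] in ("won", "lost")]
    won = [r for r in closed if r["stage"] == "won"]
    # Pipeline value: total amount of still-open opportunities.
    pipeline_value = sum(float(r["amount"]) for r in rows if r["stage"] == "open")
    # Win rate: share of closed opportunities that were won.
    win_rate = len(won) / len(closed) if closed else 0.0
    # Average cycle: mean days from open to close for closed deals.
    cycles = [(parse(r["closed"]) - parse(r["opened"])).days for r in closed]
    avg_cycle = sum(cycles) / len(cycles) if cycles else 0.0
    return {
        "open_pipeline_value": pipeline_value,
        "num_opportunities": len(rows),
        "win_rate": win_rate,
        "avg_cycle_days": avg_cycle,
    }

print(kpis(SAMPLE))
```

The same definitions extend naturally to the rest of the prompt: the stage-value bar and conversion funnel are groupings over `stage`, and the simple forecast is these historical stage-to-win rates applied to the open pipeline.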

SharePoint agents

SharePoint offers two types of Copilot agents: the built-in Knowledge agent and a custom agent.


“You ask the question and the agent provides the answer, so you can focus on the work, not the search.”

Sunitha Bodhanampati, senior product manager, Microsoft Digital

The Knowledge agent acts like a SharePoint content steward, analyzing and organizing content across your sites. It tags and structures information in ways that allow Copilot to deliver more accurate answers to site-related queries.

You can also create custom agents to manage specialized workflows, connectors, or administrative tasks. You define the agent’s rules and scenarios, and it can operate across other apps and external systems, not just SharePoint.

“Instead of navigating countless folders, files, and links, agents remove the need to remember where information lives,” says Sunitha Bodhanampati, a senior product manager in Microsoft Digital. “You ask the question and the agent provides the answer, so you can focus on the work, not the search.”

Here are some SharePoint agent prompts you can try, depending on your role:

Content manager/site owner

What the agent can do: The Knowledge agent can update and improve content quality so Copilot can reason more accurately across it.

Sample prompt: “Review this library and auto-tag all documents with owner, category, and review date info. Show me any pages with missing details or broken links.”

HR helpdesk

What the agent can do: A SharePoint custom agent can respond to department-specific questions using SharePoint data or other systems.

Sample prompt: “Create an agent that answers policy questions using our HR SharePoint library and routes complex requests to the HR team.”

Operations analyst

What the agent can do: A SharePoint custom agent can run a multistep workflow that integrates with CRM and ticketing systems.

Sample prompt: “Build an agent that checks open support tickets, summarizes urgent ones, retrieves related SharePoint documentation, and notifies the team in Teams.”

Business owner

What the agent can do: The SharePoint custom agent can standardize approvals and record‑keeping across sites—validating required fields, routing items for review, posting updates, and compiling summaries—so routine requests move faster with clear ownership. (You can also tailor its behavior and starter prompts when you create it.)

Sample prompt: “Build an agent that validates new entries in the ‘Procurement Requests’ list, routes them to the right approver, writes back status and PO number when approved, and posts a daily summary with exceptions to our Teams channel.”

Site visitor

What the agent can do: The ready‑made SharePoint agent (included on every site) acts like a site concierge—answering questions, summarizing pages and libraries, and pointing people to the right documents and owners, all scoped to the site and the visitor’s permissions.

Sample prompt: “I’m new to this site—give me a two‑paragraph overview, list the three most important pages to read this week with their owners, and build a one‑page starter checklist with links.”

Create your own agent

If you don’t find a Copilot agent that meets your needs, you can create your own. Getting started is as easy as telling Copilot what your ultimate objective is, even if you don’t have all the specifics.

“Just ask Copilot, ‘How do I get started with an agent?’” Etchells says. “Copilot will walk you through it, step-by-step.”

One of our teams in Microsoft Digital built an internal agent we dubbed the Copilot Agent Ideation Partner. Whether employees are just exploring or ready to build, it helps them brainstorm agent ideas by spotting repetitive tasks, uncovering work patterns, and turning everyday challenges into actionable concepts they can build into an agent.

“Every employee should build at least one agent,” Burnett says. “When you turn your daily patterns into an agent, you reclaim your time and free yourself up to focus on the work that matters most.”

The future of agents

Each agent and module has its own unique strengths. Together, they are part of a broader, AI-powered shift toward helping our employees be more productive and efficient every day.

As the number and variety of agents grow, we’re continuing to raise awareness among employees and our customers about what agents are available and how they can start putting these game-changing capabilities to work.

“We’re still focused on helping people understand what agents can do and how they fit into our everyday work,” Etchells says. “As agents evolve, the goal is to make them easier to discover, try out, and apply within the workflows our employees are already used to.”

Key takeaways

Here are some things to keep in mind as you move along in your journey with Copilot agents and modules:

  • Copilot is more than one tool. You can choose from multiple Copilot modules and agents designed for different tasks, roles, and scenarios.
  • Selecting the right Copilot unlocks targeted results. Matching the right Copilot to the job reduces friction and helps create seamless workflows.
  • Copilot agents enhance productivity and creativity. Whether through Copilot Chat, search, research, notebooks, or other specialized agents, each Copilot agent unlocks efficiency while sparking innovative ideas.
  • Copilot agents are evolving into collaborators. These agents are reshaping how people learn, work, and innovate every day.

The post Picking the right Copilot for the job: Tips from our experience at Microsoft appeared first on Inside Track Blog.

]]>