governance Archives - Inside Track Blog - How Microsoft does IT

Harnessing AI: How a data council is powering our unified data strategy at Microsoft
http://approjects.co.za/?big=insidetrack/blog/harnessing-ai-how-a-data-council-is-powering-our-unified-data-strategy-at-microsoft/
Thu, 09 Apr 2026

Information technology is an ever-evolving landscape. Artificial intelligence is accelerating that evolution, providing employees with unprecedented access to information and insights. Data-driven decision making has never been more critical for businesses to achieve their goals.

In light of this priority, we have established a Microsoft Digital Data Council to help accelerate our companywide AI-powered transformation.

Our data council is a cross-functional team with representation from multiple domains within Microsoft, including Microsoft Digital, the company’s IT organization; Corporate, External, and Legal Affairs (CELA); and Finance.


Our data council’s mission is to drive transformative business impact by establishing a cohesive data strategy across Microsoft Digital, empowering interconnected analytics and AI at scale. Our vision is to guide our organization toward Frontier Firm maturity through a clear blueprint for high-quality, reliable, AI-ready data delivered on trusted, scalable platforms.

“By championing robust data governance, literacy, and responsible data practices, our data council is a crucial part of our AI-powered transformation,” says Naval Tripathi, principal engineering manager in Microsoft Digital. “It turns enterprise data into a strategic capability that fuels predictive insights and intelligent outcomes across the organization.”

Our evolving data strategy

Over the past two decades, we at Microsoft—along with other large enterprises—have continuously evolved our data strategies in search of the right balance between control and agility. Early approaches were highly decentralized, with different teams owning and managing their own data assets. While this enabled local optimization, it also resulted in inconsistent quality and limited enterprise-wide insight.

Our subsequent shift toward centralized data platforms brought much-needed standardization, security, and scalability. However, as data platforms grew more sophisticated, ownership often drifted away from the business domains closest to the data, slowing responsiveness and diluting accountability.

Today, we and other leading companies are embracing a more balanced, federated approach, often described as a data mesh. Rather than forcing all our data into a single centralized system or allowing unchecked decentralization, the data mesh formalizes domain ownership while embedding governance, quality, and interoperability directly into shared platforms.

With this approach, our domain teams publish data as well-defined, discoverable products, while common standards for security, metadata, and compliance are enforced through automation rather than manual processes. This model preserves enterprise trust and consistency without sacrificing speed or autonomy.

By adopting a data mesh mindset, we can scale analytics and AI more effectively across the organization while still keeping ownership closely connected to the business focus. The result is a system that supports innovation at the edges, strong governance at the core, and seamless collaboration across domains, enabling the transformation of data from a technical asset to a strategic, enterprise-wide capability.

Quality, accessibility, and governance

To scale enterprise data and AI, organizations must first ensure their data is trusted, discoverable, and responsibly governed. At Microsoft Digital, our data strategy is designed to create data foundations that power intelligent applications and effective decision making across the company.


By implementing a data mesh strategy at scale, we aim to unlock valuable data insights and analytics, enabling advanced AI scenarios. Our data council focuses on three core dimensions that make AI-ready data possible:

  • Quality: Making sure enterprise data is reliable and complete
  • Accessibility: Enabling secure and discoverable access to data
  • Governance: Protecting and managing our data responsibly

Together, these dimensions form the foundation for scalable innovation and AI-powered data use. They connect data silos and ensure consistent, high‑quality access across the enterprise—enabling both humans and AI systems to work from the same trusted data foundation. As AI use cases mature, this foundation allows AI agents to retrieve and reason over data through enterprise endpoints, while supporting advanced analytics, data science, and broader technology initiatives.

“High-quality, well-governed data is essential to accelerate implementation and adoption of AI tools,” says Miguel Uribe, a principal PM manager in Microsoft Digital. “Data quality, accessibility, and governance are imperatives for AI systems to function effectively, and recognizing that is propelling our data strategy.”

Quality

AI-ready data is available, complete, accurate, and of high quality. By adopting this standard, our data scientists, engineers, and even our AI agents can better locate, process, and govern the information needed to drive our organization and maximize AI efficiencies.

By using Microsoft Purview, our data council oversees the monitoring of data attributes to ensure fidelity, and it tracks quality parameters to enforce standards for accuracy and completeness.
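The checks themselves don’t need to be exotic. As a generic illustration—the rule names and record fields below are invented, and this is not the Microsoft Purview API—automated quality scoring can be as simple as evaluating a set of completeness and accuracy rules against each record and reporting the pass rate:

```python
# Illustrative data-quality scoring (hypothetical rules; not the Purview API).
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True if the record passes

def score_dataset(records: list[dict], rules: list[QualityRule]) -> dict[str, float]:
    """Return the pass rate per rule across all records."""
    return {
        rule.name: sum(rule.check(r) for r in records) / len(records)
        for rule in rules
    }

# Hypothetical rules: one completeness check, one accuracy check.
rules = [
    QualityRule("completeness: owner set", lambda r: bool(r.get("owner"))),
    QualityRule("accuracy: cost non-negative", lambda r: r.get("cost", 0) >= 0),
]

records = [
    {"owner": "finance", "cost": 120.0},
    {"owner": "", "cost": 80.0},     # fails completeness
    {"owner": "cela", "cost": -5.0}, # fails accuracy
]

print(score_dataset(records, rules))
```

Scores like these can then feed dashboards or gate publishing, which is the spirit of the monitoring described above.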

Accessibility

Ensuring that our employees get access to the information they need while prioritizing security is a foundational element of our enterprise data strategy. Microsoft Fabric allows us to unify our organization’s siloed data in a single “mesh” that enables advanced analytics, data science, data visualization and other connected scenarios.

Microsoft Purview then gives us the ability to democratize that data responsibly. By implementing a data mesh architecture, our employees can work confidently, unencumbered by siloed or inaccessible data, and with the assurance that the data they’re working with is secure.

A graphic shows how the data mesh architecture allows employees to access data they need, with platform services and data management zones surrounding this architecture.
The data mesh architecture enables our employees to do their work efficiently while preventing the data they’re working on from becoming siloed.

The data mesh connects and distributes data products across domains, enabling shared data access and compute while scaling beyond centralized architectures.

Platform services are standardized blueprints that embed security, interoperability, policies, standards, and core capabilities—providing guardrails that enable speed without fragmentation.

Data management zones provide centralized governance capabilities for policy enforcement, lineage, observability, compliance, and enterprise-wide trust.  
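To make those layers concrete, here is a minimal sketch of what a domain-owned data product with platform-enforced guardrails might look like. All names, fields, and classification labels are hypothetical illustrations, not actual Microsoft Fabric or Purview constructs:

```python
# Hypothetical data-product contract for a mesh domain (illustrative names only).
from dataclasses import dataclass

# Sensitivity labels enforced platform-wide (invented for this sketch).
ALLOWED_CLASSIFICATIONS = {"public", "general", "confidential"}

@dataclass
class DataProduct:
    name: str
    domain: str             # owning business domain
    owner: str              # accountable contact
    classification: str     # sensitivity label
    schema: dict[str, str]  # column name -> type

def validate(product: DataProduct) -> list[str]:
    """Platform-side guardrails every domain must pass before publishing."""
    errors = []
    if not product.owner:
        errors.append("missing owner")
    if product.classification not in ALLOWED_CLASSIFICATIONS:
        errors.append(f"unknown classification: {product.classification}")
    if not product.schema:
        errors.append("empty schema")
    return errors

p = DataProduct("campus-dining-menus", "campus-services", "dining-team",
                "general", {"restaurant": "string", "menu_date": "date"})
print(validate(p))  # an empty list means the product meets the guardrails
```

The point of the sketch is the division of labor: the domain fills in the contract, while the platform owns and automatically enforces the validation rules.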

Governance

As organizations scale AI capabilities, strong governance becomes essential to ensure security, compliance, and ethical data use. Data governance—which includes establishing data policies, ensuring data privacy and security, and promoting ethical AI usage—is critical, as is compliance with General Data Protection Regulation (GDPR) and Consumer Data Protection Act (CDPA) regulations, among others.

However, governance is not only a technical capability; it’s also a cultural commitment.

Responsible data use must be embedded into the way teams manage data and build AI solutions. Through Microsoft Purview, we implemented an end-to-end governance framework that automates the discovery, classification, and protection of sensitive data across the enterprise data landscape.

This unified approach allows teams to innovate confidently, knowing that the data powering their insights and AI systems is trusted and protected, as well as responsibly managed.

“AI systems are only as reliable as the data that powers them,” Uribe says. “By investing in trusted and well-managed data, we accelerate not only the adoption of AI tools but our ability to generate meaningful insights and intelligent outcomes.”

The data catalog as the discovery layer

By serving as a common discovery layer for humans and AI, the data catalog ensures that governance translates directly into speed, accuracy, and trust at scale.

A unified data strategy only succeeds if both people and AI systems can consistently find the right data. At Microsoft, this is enabled by our enterprise data catalog, which operationalizes the standards set by our data council. 

For business users, the catalog provides intuitive search, ownership transparency, and trust signals—enabling confident self‑service analytics. For AI agents, the same catalog exposes machine‑readable metadata, allowing agents to programmatically discover canonical datasets, validate schema and freshness, and respect governance constraints.

Our role as Customer Zero

In Microsoft Digital, we operate as Customer Zero for the company’s enterprise solutions, encountering and resolving issues first so that our customers don’t have to.

That means we do more than adopt new products early. We deploy them at enterprise scale, operate them under real‑world constraints, and hold them to the same standards our customers expect. The result is more resilient, ready‑to‑use solutions and a higher quality bar for every enterprise customer we serve.


Our data council embodies this Customer Zero mindset through its Enterprise Readiness initiative. By engaging product engineering as a unified enterprise voice, the council drives strategic conversations that surface operational blockers, influence roadmap prioritization, and ensure new and existing data solutions are truly ready for enterprise use.

These learnings are then shared broadly across Microsoft Digital to accelerate adoption, reduce duplication, and scale proven patterns across teams.

“When we engage product teams with real telemetry from how data is created, governed, and consumed at scale, we move the conversation from theory to execution,” says Diego Baccino, a principal software engineering manager in Microsoft Digital and a member of the council. “That’s how enterprise readiness becomes real.”

This work is deeply integrated with our AI Center of Excellence (CoE), where Customer Zero principles are applied to accelerate AI outcomes responsibly. Together, the AI CoE and the data council focus on improving data documentation and quality—foundational capabilities that are required to make AI feasible, trustworthy, and scalable across the enterprise.

By grounding AI innovation in measurable data quality and governance standards, Microsoft Digital ensures that experimentation can safely mature into production‑ready solutions. The partnership between our data council, our AI CoE, and our Responsible AI (RAI) Council is essential to our broader data and AI strategy.

“AI readiness isn’t aspirational—it’s operational,” Baccino says. “By measuring the health of our data, setting clear quality baselines, and using those signals to guide product and platform decisions, we turn data into a strategic asset and AI into a repeatable capability.”

Together, these teams exemplify what it means to be Customer Zero: Transforming enterprise experience into action, governance into acceleration, and data into durable competitive advantage.

Advancing our data culture

Our data council plays a pivotal role in advancing the organization’s transition from data literacy to enterprise data and AI capability. In conjunction with our AI CoE, it creates curricula and sponsors learning pathways, operational practices, and community programs to equip our employees with the skills and mindset required to thrive in a data- and AI-centric world.

While early efforts focused on improving data literacy, our data council’s mission has evolved to enable data and AI capability at scale together with our AI CoE—where employees not only understand data but can effectively apply it to build, operate, and govern intelligent solutions.


Our curriculum includes high-level courses on data concepts, applications, and extensibility of AI tools like Microsoft 365 Copilot, as well as data products like Microsoft Purview and Microsoft Fabric.

By facilitating AI and data training, offering internally focused data and AI certifications, and fostering internal community engagement, our council ensures that employees develop the capabilities required to responsibly build and operate AI-powered solutions. Achieving data and AI certifications not only promotes career development through improved data literacy, it also enhances the broader data-driven culture within our organization.

“We recognize that AI capability is built when data skills are applied directly to real AI scenarios and business outcomes—not when learning exists in isolation,” Uribe says. “Our focus is not just teaching our teams about data; it is enabling employees to apply data to create AI‑driven outcomes. When teams understand how data powers AI systems, they can make better decisions, design better products, and build more responsible AI experiences.”

Lessons learned

Our data council was created to develop and execute a cohesive data strategy across Microsoft Digital and to foster a strong data culture within our organization. Over time, several critical lessons have emerged.

Executive sponsorship enables transformation

Executive sponsorship is a key element to ensure implementation and adoption of a data strategy. Our leaders are committed to delivering and sustaining a robust data strategy and culture and have been effective champions of the council’s work.

“Leadership provides support and reinforcement of the council’s mission, as well as guidance and clarity related to diverse organizational priorities,” Baccino says.

Cross-functional collaboration accelerates impact

Our council’s work has also benefited from the diverse representation offered by different disciplines across our organization. Embracing diverse perspectives and understanding various organizational priorities is critical to implementing a successful data strategy and culture in a large and complex organization like Microsoft Digital.

Modern platforms allow for scalable AI productivity

Technology and architecture also play a critical role in enabling enterprise data and AI capability. Platforms like Microsoft Purview and Microsoft Fabric provide the governance, discovery, and analytics infrastructure required to create trusted, AI-ready data ecosystems.

Combined with strong leadership support and community engagement, these platforms allow our organization to move beyond isolated data projects toward connected, enterprise-wide intelligence.

As our organization continues to evolve, our data council’s strategic work and valuable insights will be crucial in shaping the future of data-driven decision making and AI transformation at Microsoft.

Key takeaways

Here are some things to keep in mind as you contemplate forming a data council to help you manage and scale AI impacts responsibly at your own organization:

  • A data mesh strikes the balance enterprises have been chasing. By formalizing domain ownership while enforcing standards through shared platforms, you avoid both chaotic decentralization and slow, over-centralized control.
  • Governance is an accelerator when it’s automated and embedded. Using platforms like Microsoft Purview and Microsoft Fabric, governance shifts from a manual gatekeeping function to a built‑in capability that enables faster, trusted analytics and AI.
  • AI systems are only as strong as their discovery layer. A unified enterprise data catalog allows both people and AI agents to find, trust, and use data consistently—turning standards into operational speed.
  • Customer Zero turns theory into enterprise‑ready execution. By operating its own data and AI platforms at scale, Microsoft Digital provides real telemetry and practical feedback that directly shapes product readiness.
  • Building AI capability is a cultural effort, not just a technical one. Our data council’s focus on applied learning, certification, and real-world AI scenarios ensures data skills translate into durable business outcomes.
  • AI scale exposes the cost of fragmented data ownership. A data council cuts through silos by aligning priorities, resolving tradeoffs, and concentrating investment on the data assets that matter most for AI impact.
  • Shared metrics create shared ownership. Publishing data quality and AI‑readiness scores at the leadership level reinforces accountability and positions data as a core enterprise asset.

Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft
http://approjects.co.za/?big=insidetrack/blog/responsible-ai-why-it-matters-and-how-were-infusing-it-into-our-internal-ai-projects-at-microsoft/
Thu, 26 Mar 2026

Like the computer itself and electricity before it, AI is a transformational technology. It’s providing never-before-seen opportunities to reimagine productivity, address major social challenges, and democratize access to technology and knowledge.

As AI reshapes how we work and live, it brings with it both transformative potential and complex challenges. Across the industry, concerns about bias, safety, and transparency are growing.

At Microsoft, we believe that realizing AI’s benefits requires a shared commitment to responsibility—one we take seriously. As a result, we aren’t just creating AI solutions. We’re taking the lead on infusing responsible AI principles into our technology and organizational practices.

Prioritizing responsible AI across Microsoft

The most impressive AI-powered capabilities in the world mean nothing if people don’t trust the technology. Microsoft and many of our customers across all industries are working to strike the right balance between innovation and responsibility.


IT leaders and CXOs aren’t just deploying AI tools. They’re also thinking of the right guardrails to implement around those tools as their organizations mature. Meanwhile, developers and deployers want to be sure they’re building and implementing AI solutions within the bounds of responsibility.

As an organization that’s mapping the frontier of AI while creating business-ready tools for our customers, Microsoft is shaping the global conversation on responsible AI. We don’t only accomplish that through policy and governance, but also by embedding responsibility into the ways we build, deploy, and scale AI.

Laying the foundation for this work is the duty of our Office of Responsible AI (ORA). This team brings policy and governance expertise to the responsible AI ecosystem at Microsoft.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust,” says Mike Jackson, head of AI Governance, Enablement, and Legal for the Office of Responsible AI. “Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

ORA advances AI development, deployment, and secure and trustworthy innovation through governance, legal expertise, internal practice, public policy, and guidance on sensitive uses and emerging technology. The team focuses on empowering innovation while ensuring it falls within Microsoft’s governance, compliance, and policy guardrails.

ORA also partners closely with product and engineering teams as well as other trust domains like privacy, digital safety, security, and accessibility. The team created our Microsoft Responsible AI Standard, the cornerstone of our governance framework, and ensures internal AI initiatives align with it.

The Responsible AI Standard translates our six principles into actionable requirements for every AI project across Microsoft:

Fairness

AI systems should treat all people equitably. They should allocate opportunities, resources, and information in ways that are fair to the humans who use them.

Privacy and security

AI systems should be secure and respect privacy by design.

Reliability and safety

AI systems should perform reliably and safely, functioning well for people across different use conditions and contexts, including ones they weren’t originally intended for.

Inclusiveness

AI systems should empower and engage everyone, regardless of their background, striving to be inclusive of people of all abilities.

Transparency

AI systems should ensure people correctly understand their capabilities.

Accountability

People should be accountable for AI systems, with oversight in place so humans remain in control.

ORA reports into the Microsoft Board of Directors and collaborates with stakeholders and teams across the company to operationalize these principles, implementing policies and practices that apply to AI applications. They determined that every AI initiative should undergo an impact assessment to ensure it aligns with the standard.

If ORA is our compass for responsible AI, our companywide Responsible AI Council has its hands on the steering wheel.

The council, led by Chief Technology Officer Kevin Scott and Vice Chair and President Brad Smith, was formed at the senior leadership level as a forum and source of representation across research, policy, and engineering. It provides leadership, strategic guidance, and executive support and sponsorship to advance strategic objectives around innovation and responsible AI.


Under the council’s guidance, responsible AI CVPs, division leaders, and a network of responsible AI champions across the company operationalize the implementation of our Responsible AI Standard and compliance with our policies.

The structure of these teams is straightforward.

Every division has a designated CVP and division lead to steer the work and connect their team to the overarching Responsible AI Council. Within those divisions, each organization has a lead responsible AI champion or a set of co-leads to steer their team of champions. Those champions act as subject matter experts, reviewers for the impact assessment process, and points of contact for the teams developing AI initiatives.

Implementing AI governance within Microsoft IT

As members of the company’s IT organization, Microsoft Digital’s responsible AI division lead and champion team have a special role to play. They helped develop a critical internal workflow tool, which has now become a mandatory part of our responsible AI assessment process.

“The key is to ensure full alignment of responsible AI practices with ORA,” says Naval Tripathi, principal engineering manager and co-lead for Microsoft Digital’s Responsible AI Team. “ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

This tool logs every project, guides AI developers through initial impact assessments all the way to final reviews, and facilitates those workflows for champions.


By streamlining the process through a unified portal, the tool increases efficiency and minimizes errors that can arise from manual processes. It also encourages teams to make responsible AI part of the software development lifecycle (SDL) itself, not a hurdle or an afterthought.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process,” says Thomas Po, a senior product manager working on Campus Services agents. “This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users. That makes it more manageable in the long term, and having it all in one tool gives us more transparency.”

Our unified internal workflow looks like this:

  • Project initiation and system registration: During the design phase for an AI initiative, the engineering team accesses the portal and registers a new AI system. From there, they fill out fields with crucial information, including a title, description, the developer team’s division, whether the project will include internal or external resources, the relevant champion who should review their initiative, and other details. Within this initial form, different scenarios will trigger different review parameters and requirements, for example, when a team intends to publish a tool externally or engage with sensitive use cases.
  • Release assessment: After the system registration is complete, the team initiates the release assessment, a much more thorough review designed to ensure the AI-powered solution is ready to go live. At this point, the engineering team needs to provide detailed documentation. That includes the volume and kinds of data the system will use, potential harms and mitigations, and more. A release assessment includes experts in our Office of Responsible AI, Security, Privacy, and other teams, who review sensitive use cases or initiatives that include generative AI.

If the project clears all the requirements and reviews, it’s ready to go live. Crucially, we don’t think of these stages as a set of hurdles teams need to clear to complete their projects. Instead, the process guides engineering teams through the design elements they need to consider and provides opportunities for feedback from subject matter experts.
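The branching behavior of such a workflow can be sketched in a few lines. The stage names and reviewer triggers below are illustrative guesses at the shape of the process, not the internal tool’s actual fields:

```python
# Simplified sketch of a staged review workflow (stage and reviewer names
# are invented for illustration; this is not the internal tool).
from enum import Enum, auto

class Stage(Enum):
    REGISTERED = auto()
    RELEASE_ASSESSMENT = auto()
    APPROVED = auto()

# The assessment pipeline moves strictly forward through these stages.
NEXT_STAGE = {Stage.REGISTERED: Stage.RELEASE_ASSESSMENT,
              Stage.RELEASE_ASSESSMENT: Stage.APPROVED}

def required_reviewers(external: bool, sensitive_use: bool,
                       generative_ai: bool) -> set[str]:
    """Different registration answers trigger different review parameters."""
    reviewers = {"responsible-ai-champion"}
    if external or sensitive_use:
        reviewers.add("office-of-responsible-ai")
    if generative_ai:
        reviewers |= {"security", "privacy"}
    return reviewers

# An internal generative AI agent needs champion, security, and privacy review.
print(sorted(required_reviewers(external=False, sensitive_use=False,
                                generative_ai=True)))
```

Encoding the triggers in one place is what turns a review process from a checklist into a workflow tool: the registration form’s answers deterministically select the reviewers.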

“The tool captures all the requirements from ORA and incorporates them into a developer-friendly workflow,” says Padmanabha Reddy Madhu, principal software engineer and responsible AI champion for Employee Productivity Engineering in Microsoft Digital. “It’s also a great way to pull AI champions into the design phase so we can support our colleagues’ work.”

With more than 80 AI projects currently underway across Microsoft Digital, logging and streamlining are essential. Teams are working on all kinds of ways to boost enterprise processes and employee experiences, like the following examples from Campus Services that users can access through our Employee Self-Service Agent:

  • A facilities agent helps employees take action when they discover an issue at one of our buildings, like a burnt-out light, a spill, or physical damage. The agent creates a ticket to alert a Facilities team so they can resolve it and allows the submitter to follow up on progress.
  • A campus event agent makes onsite gatherings like talks and Microsoft Garage build-a-thons more discoverable through simple queries. Using this agent, employees can more easily discover and plan around events that interest them, adding value to the in-person experience and incentivizing community.
  • A dining agent addresses the challenges of multiple on-campus restaurants featuring menu options that shift daily. Employees can use natural language queries like “Where can I get teriyaki today?” The agent does the rest. This kind of agent can be especially helpful for employees with allergies or dietary restrictions, providing a boost to accessibility for the on-campus dining experience.

Our policies and practices have embedded a culture of responsibility and trust into our internal AI development processes. With that trust comes the confidence to experiment.

“AI is rapidly becoming a standard part of how we build and operate,” says Qingsu Wu, principal group product manager in Microsoft Digital. “As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale. By embedding Responsible AI into our engineering practices, teams have the clarity and confidence they need to manage risk proactively and deliver value without compromising safety or trust.”

Far from thinking of responsible AI assessments as an administrative or policy burden that creates additional work, teams now recognize their benefits. They look at the process as an extra set of eyes from a trusted partner. By minimizing legal and compliance risks through our Responsible AI Council’s expertise, our teams save time and stress, and we avoid problems like delayed releases or rollbacks.

A photo of Smith.

“What we’re doing is entirely novel in the tech world. Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

Jamian Smith, principal product manager and co-lead, Microsoft Digital Responsible AI team

Lessons learned: Embedding responsible AI into our development efforts

Throughout this process, we’ve learned lessons that will be helpful for other organizations just beginning their AI journeys:

  • We empowered early adopters and enthusiasts as responsible AI champions. They act as anchors and resources for developers who use AI, so we made sure they had the knowledge and training they needed to unlock downstream value.
  • Culture has been crucial to our success, especially our growth mindset and our focus on trust. Emphasizing these aspects of our company culture helped us embed responsible AI into core SDL processes and naturalize it on our engineering teams.
  • Processes are one thing, and tooling is another. If your responsible AI assessment workflow isn’t attuned to your needs, simply building a review portal tool won’t get you the rest of the way. First, we thought about the process we needed to put in place to solidify responsible AI practices and support our teams’ work. Then we built a tool that supports those workflows as easily and seamlessly as possible.
  • Accuracy is reliant on data, and data has a tendency to reflect the biases of the humans who organize it. It’s necessary to correct bias actively through introspection and testing.

“What we’re doing is entirely novel in the tech world,” says Jamian Smith, principal product manager and co-lead for Microsoft Digital’s Responsible AI team. “Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

As your organization begins to experiment with its own AI projects, take these concrete steps to infuse responsibility into the solutions you create:

  1. Establish a strong foundation based on core principles and standards that align with your organizational culture. The Microsoft Responsible AI Standard is a great place to start because it reflects our experience and the expertise we’ve built as AI technology leaders and providers.
  2. Seek out the activators across your organization: people with a passion for AI, security, transparency, and other challenge areas, along with a willingness to learn and the ability to lead. Think about how to place them in both centralized and distributed positions.
  3. With the rapidly evolving regulatory climate around AI, it’s crucial to build a broad understanding of compliance and keep following its developments. Involve dedicated regulatory, compliance, and legal professionals in researching and monitoring global standards, and communicate what they learn to your organization, particularly through training and updates that help teams build new regulations into their core processes.
  4. Create a process for responsible AI assessment. Consider ways to break it into stages that propel projects forward rather than hindering them. Enlist the right people to assess projects, and consider tooling that streamlines actions for both creators and assessors. Our AI Impact Assessment Guide can help you get started.
  5. Benefit from pioneers in the space, including our experts at Microsoft. Our journey has produced ready-to-use resources that can accelerate your progress. Examples include our Responsible AI Toolbox for GitHub, hands-on tools for building effective human-AI experiences, and our AI Impact Assessment Template.
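
As one way to make the staging in step 4 concrete, here’s a minimal sketch of an assessment gate that only lets a project ship once every stage has been signed off. The stage names and checks are hypothetical, not Microsoft’s actual workflow:

```python
# Hypothetical sketch of a staged responsible AI assessment gate.
# Stage names are illustrative, not an actual Microsoft process.
STAGES = ["intake", "impact_assessment", "mitigation_review", "release_signoff"]

def next_stage(completed):
    """Return the first stage not yet completed, or None if all have passed."""
    for stage in STAGES:
        if stage not in completed:
            return stage
    return None

def can_release(completed):
    """A project may ship only after every stage has been signed off."""
    return next_stage(completed) is None
```

For example, `next_stage({"intake"})` points the team at the impact assessment next, and `can_release` stays false until the full sequence is complete — stages propel the project forward rather than blocking it with one monolithic review.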

“It’s not about how fast you can move, but how prepared you are. Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI team

Building your capacity to create AI tools responsibly won’t happen without careful planning and strategy. As part of that process, embed responsible AI into your development workflows by emulating the practices we’ve pioneered at Microsoft.

“It’s not about how fast you can move, but how prepared you are,” Tripathi says. “Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

By prioritizing responsible AI, businesses of all kinds, all over the world, can ensure that the AI revolution is a truly human movement.

Key takeaways

These insights can help you as you begin your own journey through responsible AI:

  • Realize that this isn’t just a technical transition. It’s also a gradual evolution and an ongoing journey.
  • Work with people across your organization to establish goals and standards, because different disciplines bring different expertise and insights to the table. This will also align your responsible AI standards with your organizational values.
  • Start with the basics and build from there. Establish principles, create processes, and construct tooling around those structures.
  • A wide array of tooling is readily available in the world of AI. Seek out providers that model responsible values.
  • Lean on your existing experts across privacy, security, accountability, and compliance. Their skills will be crucial in this new technological landscape.
  • Conducting your own responsible AI groundwork is crucial, but you can also partner with Microsoft. We run on trust, and we’ve thought about these issues to pave the way for your success. Follow our lead, consider the best ways to adapt our lessons to your organization, and come to us with questions.

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

]]>
19289
Microsoft CISO advice: Read our four tips for securing your network http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-read-our-four-tips-for-securing-your-network/ Thu, 19 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22779 Geoff Belknap, CVP and operating CISO for Core and Enterprise, shares four key practices your business can use to be prepared for managing network security incidents. Learn from our experience Network isolation (Secure Future Initiative) “Knowing where devices are, who owns them, and what they’re supposed to be doing is pretty important in the middle […]

The post Microsoft CISO advice: Read our four tips for securing your network appeared first on Inside Track Blog.

]]>
Geoff Belknap, CVP and operating CISO for Core and Enterprise, shares four key practices your business can use to be prepared for managing network security incidents.

“Knowing where devices are, who owns them, and what they’re supposed to be doing is pretty important in the middle of an incident,” Belknap says.

Watch this video to see Geoff Belknap discuss how we’re securing our network at Microsoft. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=nWPaaTHGE-M.)

Key takeaways

Here are best practices you can use to secure your network:

  • Build a complete inventory. Keep track of what your network devices are, who owns them, and what they do.
  • Capture robust telemetry. Make sure your operational teams have the tools they need to see and analyze access and authentication logs.
  • Use dynamic access control. Manage who can send packets on the corporate network by applying policies.
  • Deprecate old network assets. Cyberattackers know to look for older, unpatched network devices. You can reduce the attack surface by replacing older devices.
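
The inventory and deprecation practices above can be sketched as a simple audit over device records. The record fields, names, and patch-age threshold below are invented for illustration, not a real Microsoft schema:

```python
from datetime import date

# Hypothetical device inventory; fields and values are illustrative only.
inventory = [
    {"name": "edge-rtr-01", "owner": "netops", "last_patched": date(2026, 2, 1)},
    {"name": "lab-sw-07",   "owner": None,     "last_patched": date(2024, 6, 15)},
]

def flag_risky(devices, today, max_age_days=180):
    """Flag devices with no recorded owner or a patch older than max_age_days."""
    risky = []
    for d in devices:
        stale = (today - d["last_patched"]).days > max_age_days
        if d["owner"] is None or stale:
            risky.append(d["name"])
    return risky
```

Run against the sample data with `today = date(2026, 3, 1)`, the audit flags `lab-sw-07` — it has no owner and hasn’t been patched in over a year — exactly the kind of device the inventory and deprecation practices are meant to surface.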

The post Microsoft CISO advice: Read our four tips for securing your network appeared first on Inside Track Blog.

]]>
22779
Microsoft CISO advice: Explore our four tips for securing your customer support ecosystem http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-explore-our-four-tips-for-securing-your-customer-support-ecosystem/ Thu, 12 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22635 Microsoft business operations teams know all too well that cyberattackers seek to exploit customer support pathways. Tools that can unlock customer accounts or aid in troubleshooting issues in complex environments are a rich target. “The path attackers really like to use is to compromise support tooling and laterally move to your core tooling,” says Raji […]

The post Microsoft CISO advice: Explore our four tips for securing your customer support ecosystem appeared first on Inside Track Blog.

]]>
Microsoft business operations teams know all too well that cyberattackers seek to exploit customer support pathways. Tools that can unlock customer accounts or aid in troubleshooting issues in complex environments are a rich target.

“The path attackers really like to use is to compromise support tooling and laterally move to your core tooling,” says Raji Dani, Deputy Chief Information Security Officer (CISO) for Microsoft business operations.

Dani and her team focus on understanding and mitigating the risks within customer support operations. In this video, she shares principles and practices for every business that relies on online tools in their customer support ecosystem.

Watch this video to see Raji Dani discuss four customer support ecosystem security principles. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=rJ87jjz3vvo.)

Key takeaways

Here are best practices you can apply to your customer support ecosystem:

  • Create dedicated and isolated support identities. Use standardized support identities with phish-resistant multifactor authentication based in a separate identity ecosystem.
  • Implement least privilege and enforce device protection. Only grant the access needed for a given task and nothing more.
  • Ensure tooling does not have high-privilege access to customer data. Architect secure tools and manage service-to-service trust and highly privileged access.
  • Implement strong telemetry. Anomalous patterns in logs and telemetry data are often the first clue a cyberattack is underway.
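
The least-privilege principle above can be sketched as task-scoped grants, where each support task maps to exactly the permissions it needs. The task names and permission strings here are hypothetical:

```python
# Hypothetical task-to-permission map; names are illustrative only.
TASK_PERMISSIONS = {
    "unlock_account": {"identity.read", "identity.unlock"},
    "view_ticket":    {"tickets.read"},
}

def grant_for(task):
    """Grant exactly the permissions a task needs and nothing more."""
    return set(TASK_PERMISSIONS.get(task, set()))

def is_allowed(granted, action):
    """An action succeeds only if its permission was explicitly granted."""
    return action in granted
```

With this shape, an agent handling `view_ticket` can read tickets but cannot unlock accounts — lateral movement from support tooling into core tooling requires a grant that was never issued.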

The post Microsoft CISO advice: Explore our four tips for securing your customer support ecosystem appeared first on Inside Track Blog.

]]>
22635
Powering the new age of AI-led engineering in IT at Microsoft http://approjects.co.za/?big=insidetrack/blog/powering-the-new-age-of-ai-led-engineering-in-it-at-microsoft/ Thu, 05 Mar 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22539 When generative AI burst into the mainstream, it landed in our IT engineering organization like a shockwave. There was excitement, curiosity, skepticism, and no shortage of questions about what this technology meant for the future of IT. At Microsoft Digital—the company’s IT organization—we didn’t start with a grand transformation plan. Instead, we started with a […]

The post Powering the new age of AI-led engineering in IT at Microsoft appeared first on Inside Track Blog.

]]>
When generative AI burst into the mainstream, it landed in our IT engineering organization like a shockwave.

There was excitement, curiosity, skepticism, and no shortage of questions about what this technology meant for the future of IT.

At Microsoft Digital—the company’s IT organization—we didn’t start with a grand transformation plan. Instead, we started with a realization: AI wasn’t just another tool to roll out. It was a fundamental shift in how engineering work could happen.

For years, our IT teams have been focused on scale, reliability, and operational excellence. Those priorities didn’t change. What changed were the possibilities.

Suddenly, engineers could draft code in seconds, summarize complex systems instantly, or automate work that had once consumed hours or days. It was an opportunity to take the skills and capabilities of our people and amplify them with AI.

That realization forced us to step back and ask harder questions.

How do you help thousands of engineers understand what AI can actually do to impact their day-to-day work? How do you move from experimentation to trust? And how do you adopt AI in a way that strengthens engineering fundamentals instead of eroding them?

The answer came in the form of a phased journey grounded in people, culture, and continuous learning.

Phase 1: Awareness and access

It might sound surprising when speaking about engineering processes, but our first challenge wasn’t technology; it was understanding.

When generative AI entered the conversation, most engineers saw the headlines and dabbled in various tools, but few understood fully what it meant for their work. Some were excited, others were wary. Many simply didn’t know where to start. That gap between awareness and practical value was the first barrier we had to address.

We realized early that top-down mandates wouldn’t work. Telling engineers to “use AI” without context or relevance would only deepen skepticism. Instead, we focused on something both simpler and more difficult: Exposure.

We started by making AI visible and accessible in the tools engineers already used. GitHub Copilot. Microsoft 365 Copilot. Early copilots embedded directly into engineering workflows. The goal wasn’t immediate productivity gains. It was familiarity. Letting engineers see, firsthand, what AI could and couldn’t do.

A photo of Singhal.

“We encouraged tool usage and adoption so people would at least play around with AI. And once they did, they started seeing the value. That’s when the mindset shifted from ‘AI might replace me’ to ‘AI can be my companion.’”

Mukul Singhal, partner group engineering manager, Microsoft Digital

Just as important, we talked openly about limitations.

AI wasn’t perfect. It hallucinated. It made confident mistakes. And that honesty mattered. By framing AI as an assistant, we reinforced the role of engineering judgment. Engineers didn’t need to fear losing control. They needed to understand how to stay in control.

We also made experimentation safe.

No quotas. No forced adoption metrics. Engineers were encouraged to try AI on low‑risk tasks: summarizing documentation, generating test cases, or exploring unfamiliar codebases. Small wins built confidence, confidence built curiosity, and curiosity drove organic adoption.

As that experimentation took hold, the mindset began to shift.

“We encouraged tool usage and adoption so people would at least play around with AI,” says Mukul Singhal, a partner group engineering manager in Microsoft Digital. “And once they did, they started seeing the value. That’s when the mindset shifted from ‘AI might replace me’ to ‘AI can be my companion.’”

Over time, conversations changed from ‘Should we use AI?’ to ‘Where does AI help most?’

Engineers began sharing prompts, tips, and lessons learned with one another. What started as individual exploration turned into community learning. Awareness gave way to momentum.

Phase one was about providing access to explore, to question, and to learn. And that foundation made everything that followed possible.

Phase 2: Culture shift

Access created awareness and awareness created curiosity.

As more engineers began experimenting with AI, we noticed a pattern. Some teams were moving faster, learning faster, and reducing friction in their day‑to‑day work. Others stalled after initial trials. The difference wasn’t technical skill or capability; it was mindset.

A photo of Mamilla.

“People started shifting from the mindset of ‘Will AI work?’ to ‘AI is working for me.’ I think that was a very transformational shift, to where I believe a lot of engineers in the organization started believing in AI.”

Veera Mamilla, principal group engineering manager, Microsoft Digital

To move forward, we had to shift how AI was perceived from something optional or experimental to something that was simply part of how modern engineering gets done.

That meant normalizing AI as a trusted partner in the engineering process.

Leaders played a critical role in that shift. Rather than positioning AI as a productivity shortcut, they framed it as a way to strengthen engineering fundamentals: clearer design discussions, better documentation, faster feedback loops, and more time for deep problem‑solving. The message was intentional and consistent. Using AI wasn’t about cutting corners; it was about reimagining how work gets done.

We also had to address a fear that surfaced early: that AI adoption was a signal of replacement rather than empowerment.

“People started shifting from the mindset of ‘Will AI work?’ to ‘AI is working for me,’” says Veera Mamilla, a principal group engineering manager in Microsoft Digital. “I think that was a very transformational shift, to where I believe a lot of engineers in the organization started believing in AI.”

That framing mattered.

As engineers incorporated AI into their workflows, success stopped being measured by output alone. The focus shifted to outcomes. Did AI help you understand a system faster? Did it surface risks earlier? Did it free up time to focus on higher‑value work?

Over time, AI stopped feeling like a novelty. It became part of the engineering fabric. We reinforced it through leadership modeling, peer learning, and shared success stories. Teams no longer asked whether AI belonged in their workflows. They asked how to use it responsibly and effectively.

Phase 3: Upskilling and role evolution

Once AI moved from curiosity to expectation, the challenge of skill building became unavoidable.

From the start, we made a deliberate choice: This would be an upskilling and reskilling journey, not a wholesale replacement of roles. The goal wasn’t a new workforce. It was an investment in the one we had.

That decision shaped everything that followed.

Early upskilling efforts focused on practical entry points. Prompt engineering. Tool literacy. Understanding how copilots and early agents behaved in real engineering workflows. We treated these as something every engineer needed to experiment with, regardless of discipline.

But it quickly became clear that skills alone weren’t the full story. Roles themselves were starting to evolve.

A photo of Singh.

“Your title might still be software engineer or principal engineer. But if you’re acting like an AI engineer, what does that actually mean? That question helped us start defining how these roles were evolving.”

Ragini Singh, partner group engineering manager, Microsoft Digital

Across software development, service engineering, and cloud network engineering, the work was shifting from manual execution toward orchestration and oversight. Engineers were no longer expected to do every task end‑to‑end by hand. Instead, they were learning how to guide AI, review its output, and decide where automation made sense and where it didn’t.

As part of this shift, we began researching how the industry itself was redefining engineering roles. Leaders examined emerging job descriptions from across the market and compared them with Microsoft’s own role frameworks. At the time, there was no formal “AI engineer” role in the internal job library. Rather than creating a new title, the focus stayed on evolving expectations within existing roles.

The idea of an “AI‑native engineer” emerged not as a job description, but as a mindset.

An AI‑native engineer still understands systems, architecture, and risk. What’s different is how that expertise gets applied. Routine tasks are delegated to AI. Judgment, design, and accountability stay with the human. Engineers move from doing all the work themselves to supervising work done in partnership with AI.

“Your title might still be software engineer or principal engineer,” says Ragini Singh, a partner group engineering manager in Microsoft Digital. “But if you’re acting like an AI engineer, what does that actually mean? That question helped us start defining how these roles were evolving.”

This evolution looked different across disciplines. Software engineers focused on AI‑assisted coding, test generation, and spec‑driven development. Service engineers leaned into AI for incident response, knowledge capture, and operational decision support. Cloud network engineers began moving from manual intervention toward intelligent orchestration and agent‑assisted troubleshooting. The common thread wasn’t identical tooling; it was a shared shift toward higher‑order work and reduced toil.

Phase 4: Embedding AI across the engineering lifecycle

By this phase, we knew individual productivity gains were simply the starting point for broader benefits.

Early on, most AI usage showed up in familiar places: Code suggestions, documentation summaries, quick answers. Useful, but fragmented. The bigger opportunity emerged when we stepped back and asked a harder question: What would it look like if AI were embedded across the entire engineering lifecycle, not just used at isolated moments?

We stopped thinking in terms of tools and started thinking in terms of flow. Design. Build. Test. Deploy. Operate. Improve. AI needed to show up across all of it, in ways that reinforced how engineers already worked.

A photo of Sadasivuni.

“If AI is only showing up at one step, you don’t get the full value. The real impact comes when it’s integrated across the lifecycle, where engineers can design, build, operate, and learn faster as a system.”

Sudhakar Sadasivuni, principal group engineering manager, Microsoft Digital

In software engineering, that meant pulling AI earlier into the process. We began using it to help draft requirements, reason through design options, and review code with broader system context to accelerate how quickly we could get to informed decisions. Coding assistance mattered, but it was no longer the center of gravity.

Testing and quality followed a similar pattern. AI supported test generation, defect analysis, and code review, reducing repetitive effort and helping issues surface sooner. That gave engineers more time to focus on quality and architecture instead of cleanup.

In service engineering, we embedded AI into incident management and operational workflows. Engineers used it to summarize incidents, surface relevant knowledge, and analyze signals across systems. In cloud network engineering, AI helped shift work away from manual intervention toward orchestration and intelligent troubleshooting. Across disciplines, the principle stayed the same: AI should reduce friction, not introduce it.

As we scaled this approach, one thing became clear. Embedding AI wasn’t just a technical exercise. It was a systems change.

“If AI is only showing up at one step, you don’t get the full value,” says Sudhakar Sadasivuni, a principal group engineering manager in Microsoft Digital. “The real impact comes when it’s integrated across the lifecycle, where engineers can design, build, operate, and learn faster as a system.”

As AI became part of core workflows, engineers remained accountable for outcomes. AI output was reviewed, tested, and validated like any other engineering input. Embedding AI didn’t lower the bar for rigor. It raised expectations around judgment, oversight, and data quality. We became more deliberate about responsibility and governance.

Over time, these integrations created compound benefits.

Faster design cycles reduced downstream rework. Better testing lowered operational noise. Improved operational insight shortened recovery times. AI stopped being something we used occasionally and became something the engineering system itself was built around.

Phase 5: Eliminating toil and accelerating outcomes

At some point, every AI story hits the same test. Does it actually make engineers’ days better? For us, that proof showed up fastest in the elimination of toil.

Across Microsoft Digital, engineers have always spent time on work that was necessary but draining: manual troubleshooting, repetitive diagnostics, log analysis, and routine operational chores that kept systems running but didn’t move the organization forward.

AI gave us a chance to change that.

A photo of Garrison.

“Toil reduction is the biggest thing. That’s where engineers’ eyes light up. If we can eliminate toil, engineers will flock to use AI. I really believe it.”

Beth Garrison, principal cloud network engineer, Microsoft Digital

In cloud network engineering, for example, troubleshooting used to require manually reconstructing what happened, such as logging into devices, chasing configurations, and piecing together context after the fact. As we began introducing agents and machine learning into these workflows, that work shifted. Instead of spending time assembling the picture, engineers could generate the views they needed faster and focus on resolving issues.

The same shift showed up in how we used operational data.

Rather than reacting to incidents after impact, we started using machine learning to analyze logs, identify patterns, and surface anomalies earlier. That moved teams from reactive response toward proactive monitoring and prevention.
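
One simple way to surface anomalies like those described above is a z-score check over per-interval log counts, sketched here with Python’s standard library. The threshold and data are illustrative; production systems use far richer models:

```python
import statistics

def anomalous_intervals(counts, threshold=3.0):
    """Return indexes of intervals whose log volume deviates more than
    `threshold` sample standard deviations from the mean of the series."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []  # perfectly flat series: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]
```

Fed hourly log volumes such as `[100, 98, 102, 99, 500]`, a check like this flags the final interval before anyone files an incident — the shift from reactive response to proactive monitoring in miniature.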

One thing became clear very quickly: Toil reduction wasn’t just a benefit; it was the catalyst for adoption.

“Toil reduction is the biggest thing. That’s where engineers’ eyes light up,” says Beth Garrison, a principal cloud network engineer at Microsoft Digital. “If we can eliminate toil, engineers will flock to use AI. I really believe it.”

Service engineering followed a similar arc.

Across governance, operations, productivity, and cost management, we began applying agents and automation to simplify complex work and reduce manual review cycles. Governance and compliance workflows became faster and more consistent. Operational processes benefited from guided remediation and earlier insight. Knowledge capture improved as documentation and remediation guidance could be generated and updated automatically.

When we removed repetitive work such as manual triage, rote diagnostics, and endless documentation cleanup, we transformed how engineers spent their time. More focus on design. More proactive problem‑solving. More energy directed toward improving systems instead of just maintaining them.

Toil reduction made the value of AI tangible. It’s the moment AI stopped being interesting and became indispensable, and our engineering teams started asking where else we can apply it next.

Measuring what matters

By the time AI was embedded across our engineering lifecycle, a new question came into focus: “How do we know it’s working?”

In the early days, we paid close attention to usage: which tools engineers were trying, where adoption was growing, and where it stalled. Those signals mattered, and adoption was the leading indicator that people were getting comfortable and starting to integrate AI into real work.

“Adoption was always the starting point. But we were clear from the beginning that usage isn’t the destination. The real goal is impact: more time for engineers to focus on the work that truly matters.”

Ullas Kumble, principal group software engineering manager, Microsoft Digital

But using AI doesn’t automatically mean better outcomes. So, we shifted the conversation and started asking, “What’s different now that our engineers are using AI?”

That change reframed how we thought about measurement. We began looking beyond tool activity to understand impact across the engineering system. Faster design cycles. Earlier defect detection. Reduced time spent on repetitive operational work. Shorter incident resolution. Clearer documentation. Fewer handoffs. Less rework.

These weren’t abstract metrics. They showed up in the flow of work.

We were intentional about not forcing a single definition of value across every role. Software engineers, service engineers, and cloud network engineers experience impact differently. What mattered was that each team could point to tangible improvements in how work moved through the system.

That perspective shaped how leadership talked about success.

“Adoption was always the starting point,” says Ullas Kumble, a principal group software engineering manager at Microsoft Digital. “But we were clear from the beginning that usage isn’t the destination. The real goal is impact: more time for engineers to focus on the work that truly matters.”

Over time, this approach changed the quality of our conversations. Instead of debating whether AI was worth the investment, teams talked about where it was removing friction and where it still wasn’t delivering enough value. Measurement became a tool for learning and prioritization.

Moving forward

Looking ahead, one lesson stands out: This journey isn’t complete.

AI tools will continue to evolve. Agents will become more capable. Roles will keep shifting. What it means to be an engineer will continue to change. And that means our approach must stay grounded in the same principles that guided us from the start: invest in people, reinforce fundamentals, embed AI into real workflows, and stay honest about what’s working and what isn’t.

We didn’t set out to build an AI‑driven engineering organization overnight; we built it phase by phase.

By meeting engineers where they were.
By reshaping culture before redefining roles.
By embedding AI across the lifecycle, not bolting it on.
By reducing toil and measuring impact where it mattered most.

The result is better engineering: powered by AI, guided by human judgment, and built to keep evolving.

Key takeaways

Here’s a set of approaches you can take to establish AI-led engineering for your organization:

  • Start with access and understanding. Give engineers safe, easy access to AI in the tools they already use so curiosity and confidence can develop organically before you push for outcomes.
  • Frame AI as a partner, not a replacement. Position AI as an assistant that strengthens engineering judgment and fundamentals rather than a shortcut or a threat to roles.
  • Normalize experimentation without pressure. Encourage low‑risk experimentation and peer sharing instead of mandates, allowing adoption to grow through visible, practical wins.
  • Invest in upskilling. Focus on evolving skills and expectations within existing roles so engineers learn how to guide, review, and stay accountable for AI‑assisted work.
  • Embed AI across the full engineering lifecycle. Look beyond isolated productivity gains and integrate AI into design, build, test, operate, and improve workflows to unlock system‑level impact.
  • Measure impact where engineers feel it. Move past usage metrics and track outcomes like reduced toil, faster feedback, and improved flow so teams can see where AI is truly making work better.

Try it out

Try GitHub Copilot.

The post Powering the new age of AI-led engineering in IT at Microsoft appeared first on Inside Track Blog.

]]>
22539
Protecting AI conversations at Microsoft with Model Context Protocol security and governance http://approjects.co.za/?big=insidetrack/blog/protecting-ai-conversations-at-microsoft-with-model-context-protocol-security-and-governance/ Thu, 12 Feb 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22324 When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself. Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents. That ease of communication, however, comes with a responsibility: Protect the […]

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself.

Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents.

That ease of communication, however, comes with a responsibility: Protect the conversation.

Questions came up: Who’s allowed to speak? What can they say? And what should never leave the room?

Microsoft Digital, the company’s IT organization, and the Chief Information Security Officer (CISO) team, our internal security organization, are leaning on those questions to help us shape our strategy and tooling around MCP internally at Microsoft.

A photo of Kumar.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability. Even one misconfigured server can give the AI the keys to your data.”

Swetha Kumar, security assurance engineer, Microsoft CISO

Our approach is intentionally straightforward.

Start secure by default. Use trusted servers. Keep a living catalog so we always know which voices are in the room. Shape how agents communicate by requiring consent before making changes.

We minimize what’s shared outside our walls, watch for drift, and act when something looks off. Our goal is practical governance that lets builders move fast while keeping our data safe.

That’s the risk we design for, and it’s why our controls prioritize clear ownership, simple choices, and visible guardrails.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability,” says Swetha Kumar, a security assurance engineer in the Microsoft CISO organization. “Even one misconfigured server can give the AI the keys to your data.”

Understanding MCP and the need for security

MCP is a simple standard that lets AI systems “talk” to the right tools and data without custom integration work. Think of it like USB‑C for AI. Instead of building a new connection every time, teams plug into a common pattern. That standardization delivers speed and flexibility—but it also changes the security equation.
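
The “USB‑C for AI” idea can be sketched in a few lines of Python. This is not the MCP SDK or wire protocol (the real protocol is JSON‑RPC based), just an illustration of why one common discovery shape lets any client plug into any server; the class and tool names are hypothetical.

```python
def make_tool(name: str, description: str, input_schema: dict) -> dict:
    """A tool descriptor in the common shape every server advertises."""
    return {"name": name, "description": description, "inputSchema": input_schema}

class ToyServer:
    """Any server that answers list_tools() the same way is 'pluggable'."""
    def __init__(self, tools):
        self._tools = tools

    def list_tools(self):
        return list(self._tools)

# Two unrelated servers, one discovery pattern: no custom integration needed.
crm = ToyServer([make_tool("get_case_history", "Fetch support cases", {"type": "object"})])
wiki = ToyServer([make_tool("search_docs", "Search internal docs", {"type": "object"})])

def discover(server) -> list:
    """A client needs only the shared pattern, not per-server glue code."""
    return [t["name"] for t in server.list_tools()]
```

That standardization is also the security trade: the same pattern that lets one client reach many servers lets a compromised server reach many clients.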

Before MCP, every integration was its own isolated conversation.

“Now, one pattern can unlock many systems,” Kumar says. “It’s a win and a risk. When AI can reach more systems with less effort, we must be precise about who’s allowed to speak, what they can say, and how much gets shared.”

We frame this as communications security.

The question isn’t just, “Is this API secure?” It’s “Is this a conversation we trust?” We want to know which servers are in the room, what actions they’re permitted to take, and how we’ll notice if something changes. At the same time, we keep the cognitive load low for builders. They choose from trusted options, see clear prompts before an agent makes edits, and move on. Simple choices lead to safer outcomes.

“MCP enables granular control over the tools and resources exposed to the Large Language Model,” Kumar says. “But that means the developer is responsible for configuring it correctly—which tools an agent can see, what actions a server can take, and what context is shared.”

This approach helps both sides.

Product teams get a consistent way to extend their agents while security teams get consistent places to add guardrails—at discovery, access, and throughout the flow of requests and responses. Everyone operates from the same playbook.

When we treat MCP this way, we protect the conversation without slowing it down. We know who’s speaking. We know what they can do. And we can prove it.

Assessing MCP security across four layers

Every MCP session creates a conversation graph. An agent discovers a server, ingests its tool descriptions, adds credentials and context, and starts sending requests. Each step—metadata, identity, content, and code—introduces potential risk.

We evaluate those risks across four layers so we can catch failures early, contain blast radius, and keep conversations in bounds.

However, the big picture is just as important as the details.

“We take a holistic view of MCP security: start with the ecosystem, then specify controls across the four layers,” Kumar says. “The layers make the work concrete, but the goal stays the same—unified governance, shared education, and faster detect-and-mitigate when a server is at risk.”

Applications and agents layer

This is where user intent meets execution. Agents parse prompts, discover tools, select actions, and request changes. MCP clients live here, deciding which servers to trust and when to ask for user consent.

  • What can go wrong
    • Tool poisoning or shadowing. A server advertises safe‑looking actions but performs something else.
    • Silent swaps. A tool’s metadata changes and the client keeps trusting an altered “voice.”
    • No sandbox. The agent can request edits or run code without strong guardrails.
  • What we watch for
    • Unexpected tool descriptions or capabilities at connect time.
    • Edit attempts on critical resources without explicit user consent.
    • Abnormal tool‑selection patterns across sessions.

AI platform layer

The AI platform layer includes the AI models and runtimes that interpret prompts and call tools, along with orchestration logic and safety features.

  • What can go wrong
    • Model supply‑chain drift. Unvetted models, unsafe updates, or compromised fine‑tunes change behavior.
    • Prompt injection via tool text. Descriptions and responses steer the model toward unsafe actions.
  • What we watch for
    • Model provenance and update cadence tied to agent behavior changes.
    • Signals of jailbreaks or instruction overrides in prompts and intermediate messages.
    • Output drift linked to specific tools or servers.

Data layer

This layer covers business data, files, and secrets the conversation can touch.

  • What can go wrong
    • Context oversharing. Session data, files, or secrets get packed into the model’s context and leak to a third‑party server.
    • Over‑scoped credentials. Long‑lived tokens, broad scopes, or wrong audience claims enable lateral movement.
  • What we watch for
    • Size and sensitivity of context passed to tools.
    • Token hygiene, including short lifetimes, least‑privilege scopes, and correct audience claims.
    • Data egress patterns that don’t match a tool’s declared purpose.

Infrastructure layer

The infrastructure layer includes compute, network, and runtime environments.

  • What can go wrong
    • Local servers with too much reach. Excessive access to environment variables, file systems, or system processes.
    • Cloud endpoints without a gateway. No TLS enforcement, rate limiting, or centralized logging.
    • Open egress. Servers call out to the internet where they shouldn’t.
  • What we watch for
    • All remote MCP servers registered behind the API gateway.
    • Runtime signals, such as authentication failures, burst traffic, or unusual geographies.
    • Network policies that restrict outbound calls to certain targets.

Across all four layers, the throughline is AI communications security. We decide who can speak and verify what was said—and keep listening for change.

Establishing a secure-by-default strategy

We start by closing the front door. We recommend that every remote MCP server sit behind our API gateway, giving us a single place to authenticate, authorize, rate‑limit, and log. There are no direct calls and no blind spots.

A photo of Enjeti

“Everything we do starts with securing the MCP server by default and that begins by registering it in API Center for easier discovery. We rely solely on vetted and attested MCP servers, ensuring every call comes from a trusted footprint.”

Prathiba Enjeti, principal PM manager, Microsoft CISO

Next, we decide who gets a voice.

Teams choose from a vetted list of MCP servers. If someone connects to an unapproved endpoint, they receive a friendly nudge and a clear path to register it. No shaming—just fast correction and a better inventory the next time around.

Identity comes next. Servers expect short‑lived, least‑privilege tokens with the right scopes and audience. Admin paths require strong authentication, and where possible, we use proof‑of‑possession to bind tokens to the client and reduce replay risk. Secrets don’t live in code, keys rotate, and audit trails are in place.
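
Those identity rules reduce to a few mechanical checks. The sketch below shows the shape of such a check on decoded token claims; the audience string, scope set, and lifetime cap are hypothetical, and a production gateway would also verify the token signature.

```python
APPROVED_SCOPES = {"cases.read"}          # least privilege for this server (hypothetical)
EXPECTED_AUDIENCE = "api://mcp-gateway"   # hypothetical audience claim
MAX_LIFETIME_SECONDS = 3600               # short-lived tokens only

def token_is_acceptable(claims: dict, now: float) -> bool:
    """Reject tokens that are mis-audienced, long-lived, expired, or over-scoped."""
    if claims.get("aud") != EXPECTED_AUDIENCE:
        return False
    lifetime = claims.get("exp", 0) - claims.get("iat", 0)
    if lifetime <= 0 or lifetime > MAX_LIFETIME_SECONDS:
        return False
    if claims.get("exp", 0) <= now:
        return False  # already expired
    # Every requested scope must be in the least-privilege set for this server.
    return set(claims.get("scp", "").split()) <= APPROVED_SCOPES
```

A token that asks for one extra scope, or lives a day instead of an hour, fails closed.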

“Everything we do starts with making the MCP server secure by default and that begins by registering it in API Center for easier discovery,” says Prathiba Enjeti, a principal product manager in the Microsoft CISO organization. “We only use vetted and attested MCP servers. That’s how we keep the conversation safe without slowing it down.”

On the client side, we slow agents at the right moments. Agents can’t touch high‑risk tools without explicit consent. Tool descriptions are verified on connection and compared to approved contracts. If a tool’s “voice” drifts, we block the call.
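
One simple way to detect a drifting “voice” is to pin a fingerprint of the tool metadata at approval time and recompute it on every connection. This is an illustrative sketch, not our actual implementation:

```python
import hashlib
import json

def contract_fingerprint(tools: list) -> str:
    """Stable hash of a server's tool metadata, pinned at approval time."""
    canonical = json.dumps(sorted(tools, key=lambda t: t["name"]), sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_on_connect(advertised_tools: list, approved_fingerprint: str) -> bool:
    """Block the call if the tool's 'voice' has changed since approval."""
    return contract_fingerprint(advertised_tools) == approved_fingerprint
```

Even a one-word change to a tool description, the classic tool-poisoning vector, produces a different fingerprint and blocks the call until re-review.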

We also minimize what’s shared.

Context is trimmed to what the task requires. Sensitive data isn’t included by default, and third‑party servers get only what they need—not the whole transcript. Output filters and prompt shields sit alongside the model to prevent risky inputs from becoming risky actions.

Isolation completes the design. Local servers run in containers with tight file and network permissions. Hosted servers allow only the outbound calls they need, and inbound traffic flows through the gateway, with TLS and logging enforced.

Simple rules with visible guardrails.

“We only use vetted MCP servers,” Enjeti says. “That’s how we keep the conversation safe without slowing it down.”

How we run MCP at scale: architecture, vetting, and inventory

We keep MCP safe by making three things intentionally boring: architecture, vetting, and inventory. One defined path. One vetting flow. One living catalog.

Architecture

We recommend remote MCP servers sit behind an API gateway, giving us a single place to authenticate, authorize, validate, rate‑limit, and log. Transport Layer Security (TLS) is required by default, and for sensitive endpoints, we can require mutual TLS. Outbound egress is pinned to approved destinations using private endpoints and firewall rules, so servers can’t “call anywhere.” Runtime protection continuously watches for credential abuse, injection patterns, burst traffic, and odd geographies.
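
Pinned egress is conceptually a host allowlist enforced outside the server’s own code. A minimal sketch, with hypothetical hostnames standing in for real firewall and private-endpoint rules:

```python
from urllib.parse import urlparse

# Hypothetical approved destinations; in practice these live in network policy,
# not application code.
APPROVED_EGRESS = {"graph.microsoft.com", "internal.contoso.example"}

def egress_allowed(url: str) -> bool:
    """Servers can't 'call anywhere': the outbound host must be pre-approved."""
    return urlparse(url).hostname in APPROVED_EGRESS
```

Anything outside the approved set, including an exfiltration endpoint smuggled in through a poisoned tool description, is dropped at the boundary.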

Identity is established up front. We issue short‑lived, least‑privilege tokens with the correct audience and scopes, and admin paths require strong authentication. Where supported, tokens are bound to the client to reduce replay risk. Services use managed identities or signed credentials; secrets don’t live in code, and keys rotate on schedule.

Model‑side safety travels with every conversation. Content safety and prompt shields help models ignore risky inputs, while orchestration enforces a per‑tool allowlist, so an agent can’t call tools that aren’t in policy—even if the model suggests it. We also track model versions, allowing behavior changes to be correlated with updates.

Clients enforce consent at the edge. “Ask before edits” is enabled by default for write, delete, and configuration changes. When an agent connects, it verifies tool descriptions against the approved contract.

Observability ties it all together. We’re working toward logging tool calls, resource access, and authorization decisions end‑to‑end with correlation IDs. Detections flag abnormal tool selection, unexpected data egress, or edits without consent. Every server has an owner, a contract, and an approval record, and metadata changes automatically trigger re‑review. Kill switches live at both the client and the gateway when we need them.

Vetting

We don’t “connect and hope.”

Before any MCP server can speak in our environment, it earns trust. Owners declare what the server does (tools and actions), what it touches (data categories and exports), how callers authenticate (scopes and audience), and where it runs (runtime and on‑call ownership).

We start with static checks: manifests must match the contract, side‑effecting actions must be consent‑gated, tokens must be short‑lived and properly scoped. A SBOM (Software Bill of Materials) must be present, dependencies must be current, and no credentials can be embedded in code.

Then we test like a client would. We snapshot tool metadata on connect and compare it to the approved contract, probe for prompt‑injection and tool‑poisoning, and verify that “ask before edits” triggers for destructive actions.

We also confirm context minimization, validate that egress is pinned to approved hosts, and test resilience under load, including health checks, retry behavior, and isolation using containers with least‑privilege file and network access. Servers are published only when security, privacy, and responsible AI reviews are complete, runbooks and on‑call are in place, and the registry entry is created and pinned.
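
The static portion of that vetting flow can be expressed as a preflight function that returns findings instead of a pass/fail boolean, so owners see exactly what to fix. The manifest fields here are hypothetical simplifications:

```python
def preflight(manifest: dict, contract: dict) -> list:
    """Static vetting checks; a non-empty list of findings fails the review."""
    findings = []

    declared = {t["name"] for t in contract["tools"]}
    advertised = {t["name"] for t in manifest["tools"]}
    if advertised - declared:
        findings.append(f"undeclared tools: {sorted(advertised - declared)}")

    for tool in manifest["tools"]:
        # Side-effecting actions (write, delete, configure) must be consent-gated.
        if tool.get("side_effects") and not tool.get("requires_consent"):
            findings.append(f"{tool['name']}: side-effecting but not consent-gated")

    if not manifest.get("sbom"):
        findings.append("missing SBOM")

    return findings
```

Dynamic checks, such as probing for prompt injection or verifying consent prompts fire, then run against a live instance before the registry entry is created.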

Inventory

A photo of Janardhanan

“Inventory is the foundation—if we miss a server, we miss the conversation. Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system.”

Priya Janardhanan, principal security assurance engineering manager, Microsoft CISO

You can’t govern what you can’t see, and MCP shows up in more places than a single system of record. To solve that, we’re building the map from many signals and stitching them into one catalog.

“Inventory is the foundation—if we miss a server, we miss the conversation,” says Priya Janardhanan, a principal security assurance engineering manager at Microsoft CISO Operations. “Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system. Without a complete inventory, we lose visibility into critical operations, risk exposing sensitive data, and undermine our ability to ensure compliance and security.”

In our goal state, endpoint telemetry catches developer‑run servers on laptops and workstations. Repos and CI pipelines reveal intent before anything ships. IDEs (Integrated Development Environments) surface local extensions and configured endpoints. The gateway and our registries anchor what’s approved for business data, while low‑code environments tell us which connectors are in use and where they point.

We normalize and correlate those signals with stable IDs for servers, tools, and owners. Ownership is proven through repositories, gateway services, and environment administrators—on‑call contacts included. Exposure is scored based on data touches, scopes requested, egress rules, and change history, so high‑risk items rise to the top of the queue.
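
An exposure score like the one described can be a simple additive heuristic; the exact weights matter less than producing a consistent ordering for the review queue. The weights and field names below are illustrative assumptions:

```python
def exposure_score(server: dict) -> int:
    """Rough risk score so high-exposure servers rise to the top of the queue."""
    score = 0
    # Sensitivity of data the server touches (hypothetical classification labels).
    score += {"public": 0, "internal": 2, "confidential": 5}.get(server["data_class"], 5)
    # Broader requested scopes mean more exposure.
    score += len(server["scopes"])
    # Unrestricted outbound calls are a major risk amplifier.
    score += 3 if server["open_egress"] else 0
    # Churn since last approval, capped so it can't dominate the score.
    score += min(server["changes_since_review"], 5)
    return score
```

Sorting the inventory by this score is what lets reviewers spend their time on the confidential, broadly scoped, frequently changing servers first.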

Freshness is tracked with last‑seen timestamps, and stale entries are retired over time. Builders can discover and reuse approved servers; reviewers can see what changed since the last approval, and admins get instant visibility into coverage and hotspots.

We’re working toward automated identification and notification for unknown servers. In the ideal state, a registration stub is created when we detect an unknown server on an endpoint. Then the likely owner is notified, and direct calls are blocked until the server is vetted through an automated process. If tool metadata changes after approval, high-risk actions are paused and routed for re-review, then auto-resumed once approved.

“It all revolves around inventory as the foundation,” Janardhanan says. “If we miss a server, we miss the conversation.”

A photo of Hasan

“Agent 365 tooling servers will allow centralized governance for IT admins. That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy.”

Aisha Hasan, principal product manager, Microsoft Digital

Architecture gives us stable choke points. Vetting keeps weak servers out. Inventory keeps our map current. It’s a single pattern for builders and a unified playbook for security.

Governing agents in low‑code and pro-code scenarios

Makers move fast—that’s the point. A Customer Support team needed a Copilot action to pull case history, so they opened Copilot Studio, selected an approved MCP connector, and shipped a first version before lunch. No tickets. No detours. Governance showed up in the flow, not as a blocker.

“Agent 365 tooling servers will allow centralized governance for IT admins,” says Aisha Hasan, a principal product manager at Microsoft Digital. “That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy. We’re moving toward that consolidation so innovation continues while governance gets simpler and more consistent.”

We place guardrails where makers already work. In Copilot Studio, trusted and verified first-party MCP servers are allowed in developer environments to accelerate innovation and encourage experimentation. Riskier or more complex MCP integrations are available in Copilot Studio custom environments and other pro-code tools such as the Microsoft 365 Agents Toolkit in VS Code and Microsoft Foundry, but only with clear checks: service ownership, security and privacy review, responsible AI assessment, and consent gating for high‑impact actions.

The allowlist is our north star.

Approved MCP servers and connectors live in one catalog with documented owners, scopes, and data boundaries. Makers choose from that shelf. If an MCP server uses an unverified tool, we enforce endpoint filtering. If there’s a misconfiguration, we open a task for the owner and help them build securely.

Permissions stay tight without adding cognitive load. Tokens are short‑lived and scoped to the task. Context is trimmed so only the necessary fields flow to the tool. Third‑party servers never get the full transcript. If a connector’s capabilities change, the runtime compares the new “voice” to what we approved. MCP clients should pause risky actions, notify the owner, and resume automatically once reviewed.

With agent inventory in Power Platform Admin Center and a registry in Agent 365, admins get a clean view of which connectors are active, who owns them, what data they touch, and how often they’re called. Organization policies such as DLP and MIP can be enforced in a unified way, with a re‑review when capabilities change. The goal is simple: let builders innovate confidently while maintaining security and compliance.

“MCP servers are powerful AI tools that enable agents to seamlessly integrate and interact with enterprise data and transform business workflows,” Hasan says. “That means the same enterprise data and governance principles are applied equally to MCP servers and other connectors. A robust inventory, an agile policy framework, and an automated workflow for enforcement are cornerstones for successfully governing agents at scale.”

Securing MCP at scale: Operating, monitoring, and enabling

Our work doesn’t stop at go‑live. Once an MCP server is in the catalog, we operate the conversation like a service: measurable, observable, and responsive. Identity and policy guard the front door, but runtime is where we prove the controls work without slowing anyone down.

In practice, operating MCP at scale comes down to four motions:

Observe every tool call end to end. Every tool call carries a correlation ID from client to gateway to server and back. Prompts, tool selections, authorization decisions, and resource access should be logged with consistent schemas. Golden signals—latency, errors, saturation—sit alongside safety signals like unexpected egress or edits without consent. Owners and security teams see the same dashboards.
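
The correlation-ID pattern is small but load-bearing: generate one ID when the call starts, and make every hop log it unchanged. A minimal sketch with hypothetical component names:

```python
import uuid

def start_call(tool: str) -> dict:
    """One correlation ID follows the call from client to gateway to server."""
    return {"correlation_id": uuid.uuid4().hex, "tool": tool, "hops": []}

def log_hop(call: dict, component: str, event: str) -> None:
    """Each component logs with the same ID so the trace can be stitched together."""
    call["hops"].append({
        "component": component,
        "event": event,
        "correlation_id": call["correlation_id"],
    })

call = start_call("get_case_history")
for component in ("client", "gateway", "server"):
    log_hop(call, component, "request")
```

Because every hop shares the one ID, an investigator can reconstruct the full conversation from any single log entry.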

Detect drift and abnormal behavior early. Detection lives close to the work. We flag abnormal tool patterns, spikes in write operations, burst traffic from new geographies, and context sizes that don’t fit a task. We continuously compare a tool’s “voice” at connect time to the approved version; drift automatically pauses risky actions and pings the owner. Cost controls double as guardrails, using rate limits and budgets to cap blast radius and surface runaway loops early.

Respond with precision instead of blunt shutdowns. Response is graded, not binary. We can block destructive actions and allow reads, or throttle a noisy client without killing the session. Kill switches exist at both the client and the gateway. Playbooks are pre‑approved and integrated into the consoles owners already use, and dry runs are part of muscle memory, so the first switch flip doesn’t happen during an incident.

We treat model behavior as part of operations. Content safety and prompt shields run in production, not just in tests. We pin model versions and watch for output drift after updates. If a model starts suggesting tools out of character, the owner gets paged with the exact prompts and calls that triggered it.

Telemetry respects privacy. Logs avoid sensitive payloads by default and mask what must pass through for forensics. Access is role‑based, retention follows policy, and audit readiness is designed in on day one.

Enable builders through templates, education, and reuse. Adoption and education run in parallel. Builders get templates that enable best practices: sample manifests with consent gates, CI checks for token scope and SBOMs, and gateway stubs with sane defaults. A “ten‑minute preflight” runs locally to verify contracts, test consent flows, and check egress before a pull request is opened. IDE lint rules catch common issues early.

“This is how we operate MCP at scale,” says Janardhanan. “Observe the conversation, detect drift early, respond with precision, and teach habits that make the right path the easy path. We run it like a product because that’s what it is.”

Measuring results and moving forward

This program has changed how we build. Reviews move faster because every server follows the same path. Drift is caught early because clients compare a tool’s “voice” on connection. Shadow servers decline as inventory fills in from endpoint, repo, IDE, and gateway signals. Reuse increases because teams can discover trusted servers instead of creating new ones. Incidents resolve faster with correlation IDs across the conversation and kill switches at both the client and the gateway.

It’s also changed how our admins work. One gateway means one perimeter to manage. Policies land once and apply everywhere. Owners see the same telemetry security sees, so fixes happen where the work happens.

Going forward, we’re focused on more consolidation and automation. We’re moving toward a single pane for MCP governance—approve, monitor, and pause from one place. Policy-as-code will keep allowlists, consent rules, and egress boundaries versioned and testable in CI.

Our preflight checks will get smarter, with stronger injection tests, automatic egress validation, and environment‑aware templates. We’ll expand consent patterns so high‑impact actions remain explicit and auditable, even across multi‑tool chains. And we’ll keep shrinking re‑review time, so drift is measured in minutes, not days.

AI conversations are now part of how we build every day. MCP standardizes how agents talk to tools and data. Secure‑by‑default architecture, rigorous vetting, and a living inventory ensure the right voices stay in the room, only what’s needed is shared, and drift is caught early.

The result is simple: teams ship faster with fewer surprises, and governance stays visible without getting in the way. We’ll keep tightening the loop, so saying yes remains both easy and safe.

Key takeaways

If you’re implementing MCP security, consider these key actions to ensure secure, efficient adoption in your organization:

  • Build governance into the maker flow. Embed security, consent, and responsible AI checks directly where teams build—so protection shows up by default, not as an afterthought.
  • Maintain a single allowlist and catalog. Centralize approved MCP servers and connectors with clear ownership, scope, and data boundaries.
  • Enforce scoped, short-lived permissions by default. Automatically limit token scope and duration to minimize risk and exposure.
  • Monitor continuously and detect drift early. Observe activity, flag deviations, and pause risky actions until reviewed and approved by owners.
  • Automate incident response and controls. Leverage pre-approved playbooks, kill switches, and rate limits for fast, precise action.
  • Design for privacy and auditability from day one. Mask sensitive data, restrict log access by role, and ensure audit readiness.
  • Promote education and reuse. Provide templates, training, and feedback loops to encourage safe development and adoption of trusted servers.

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
22324
Powering data governance at Microsoft with Purview Unified Catalog http://approjects.co.za/?big=insidetrack/blog/powering-data-governance-at-microsoft-with-purview-unified-catalog/ Thu, 05 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22272 Data fuels everything that we do here at Microsoft, from the daily operations that keep the business running to the innovations that shape the future. But as data sprawls across teams, systems, and borders, the task of ensuring that it remains secure, accurate, and well-governed is a daunting one. A sound approach to data governance […]

The post Powering data governance at Microsoft with Purview Unified Catalog appeared first on Inside Track Blog.

]]>
Data fuels everything that we do here at Microsoft, from the daily operations that keep the business running to the innovations that shape the future.

But as data sprawls across teams, systems, and borders, the task of ensuring that it remains secure, accurate, and well-governed is a daunting one. A sound approach to data governance is the backbone of responsible data use across the enterprise, creating clarity around data ownership and access.

In an organization the size of Microsoft, no single team can carry this responsibility on its own. Effective data governance must be a distributed effort across all departments and functions.

This story explains how our marketing organization uses the Microsoft Purview Unified Catalog to organize and standardize the data we rely on daily. By putting clear ownership, consistent definitions, and reliable governance in place, we’re turning fragmented, unreliable data into an advantage that supports faster decisions and more effective campaigns.

Data governance at scale

As companies grow, their data governance becomes increasingly complex, with different teams creating their own versions of key data concepts, often without realizing it. The complexity is most visible in the way users across an organization define foundational terms.

A photo of Doughty.

“We found adoption to be much easier when helping teams focus on building more value in their data instead of driving governance like a compliance effort.”

Nick Doughty, senior product manager, Microsoft Purview Unified Catalog

Examples in marketing include what counts as a customer (active vs. inactive, marketing- or sales-qualified), what constitutes sensitive data (personally identifiable information, behavioral data, partner data), and what a metric means (conversion, engagement, attribution windows).

When inconsistent practices take hold, ownership becomes murky. With the increasing demands that managing data quality and integrity put on our leaders and their teams, effective data governance becomes one more hurdle to productivity.

“We started off implementing data governance like an issue register,” says Nick Doughty, a senior product manager within Microsoft Purview Unified Catalog. “Then we progressed to more of an enforcement method, similar to how we were doing security at the time. We found that when we started to push really hard on teams, similar to how we drove other compliance efforts, it was difficult for them to justify or understand why they would want the added governance.”

The introduction of Microsoft Azure Purview in 2020 marked a turning point.

A unified platform for data governance, security, and compliance, Purview helps organizations understand, protect, and manage data across environments. It also addresses fragmented data, lack of visibility into where sensitive data lives and how it moves, compliance complexity with regulations (including GDPR and HIPAA), and security risks.

A photo of Mathur

“Our marketing teams used to spend hours hunting for the right customer list because multiple versions lived in different locations, each with unclear owners and inconsistent labels. Now our marketers can trust they are working from current information, while avoiding compliance risks associated with incorrect or unauthorized data.”

Sourabh Mathur, principal engineering lead, Global Marketing Engines and Experiences

The Purview Unified Catalog serves as the AI-powered backbone, automatically discovering, classifying, and organizing information so users can easily find and trust the data they need.

By launching the unified catalog, we gave our users a consistent way to understand and use their data, while reinforcing strong governance and compliance practices. The result is data that’s more discoverable, reliable, and actionable. (The product was renamed Microsoft Purview in 2022 and became part of Microsoft 365 compliance tools.)

“Our marketing teams used to spend hours hunting for the right customer list because multiple versions lived in different locations, each with unclear owners and inconsistent labels,” says Sourabh Mathur, a principal engineering lead in Global Marketing Engines and Experiences, who helped set up Purview for our marketing organization.

With the unified catalog in place, Purview surfaces the dataset, shows its lineage, and applies the correct sensitivity classifications.

“Now our marketers can trust they are working from current information, while avoiding compliance risks associated with incorrect or unauthorized customer data,” Mathur says.

Powering marketing at Microsoft with Purview

With more than 200 Microsoft Azure subscriptions, our marketing organization manages one of the largest data estates at the company. The team faces the constant challenge of scattered data, unclear data ownership, and inconsistent governance practices that slow down campaigns and increase compliance risk.

A photo of Biswal.

“Marketing can now scale governance across hundreds of data products, support self-service data collection with guardrails, automate access decisions, and enable AI workloads on trusted data.”

Deepak Kumar Biswal, principal software engineering lead, Global Marketing Engines and Experiences

By adopting Purview, our marketing team gained unified visibility, clearer classification standards, and smoother collaboration with other departments, like IT and legal. This reduces friction while strengthening data protection.

The result is an organization that moves faster with greater confidence in how it handles customer and campaign data.

Instead of relying on legacy knowledge, forcing users to dig through different servers and SharePoint sites, or constantly sending queries to the engineering teams, our marketing professionals can now explore the curated Purview Unified Catalog, making streamlined, efficient data discovery possible.

“Marketing can now scale governance across hundreds of data products, support self-service data collection with guardrails, automate access decisions, and enable AI workloads on trusted data,” says Deepak Kumar Biswal, a principal software engineering lead in Global Marketing Engines and Experiences. “Purview turns responsible data use into everyday practice, not extra work.”

Data governance and security: Two sides of the same coin

For our marketing organization, data governance and security are inseparable concepts. As soon as you have customer information, you need to make sure it’s secure—sensitive data must be carefully defined, consistently managed, and protected from misuse or breach.

Purview supports this goal by combining governance capabilities with security and compliance controls that provide added layers of protection.

Within marketing, the governance and security teams work closely together. Good governance measures ensure our data is properly defined and standardized, while strong security policies ensure it’s handled with proper safeguards. By pairing governance with strong security practices, our marketing team can remain compliant with data privacy laws, prevent misuse of sensitive information, and foster trust across their organization.

When our marketing team began its Purview journey five years ago, it adopted a centralized governance model. Much like the structure of a government—where federal, state, and local entities each play a role—our approach allows both centralized standards and local autonomy. This creates consistency across the organization without stifling agility.

Our Data Governance team took on the role of steward, defining standards, onboarding systems, and collaborating with its IT partners to connect data environments. Existing assets like data dictionaries and process flows were used to seed the catalog, ensuring the team started from known ground rather than reinventing definitions from scratch.

This deliberate, incremental approach allowed our marketing team to thoughtfully build out healthy governance practices. By moving slowly, the team learned from each step on its journey, refining processes and establishing consistent practices as it moved along.

For example, working closely with our team in Microsoft Digital allowed them to experiment with different ways of discovering and cataloging their data. This gave them time to learn and refine how Purview scanned and classified their data before they rolled anything out broadly.

Our goal is to transition to a completely federated model in which responsibility shifts outward. Rather than the marketing governance team doing all the stewardship, individual groups will take ownership of their data within Purview. This shift distributes accountability, embeds governance deeper into daily operations, and makes it easier for teams to monitor data quality and enforce standards on their own.

Impact across the enterprise

Since adopting Purview Unified Catalog, we've seen tangible results across our data estate and in our data governance practices, both in marketing and in every vertical within the company. Here are some companywide highlights:

  • Better consolidation: We’ve unified five catalogs into one.
  • Increased scale: We onboarded 250 data sources in six months, representing roughly 10 million assets.
  • Higher internal adoption: We set up more than 50 governance domains, an effort we supported with reusable training assets, guides, and onboarding materials.

The benefits also extend beyond marketing:

  • Teams across the company are gaining increased confidence in their data definitions.
  • Compliance and privacy obligations are being met more effectively.
  • Business value is being generated through better, more trusted use of data.
  • Organizations are benefiting from faster time-to-insight.

Launching the marketing governance domain

We’re using Purview to combine essential capabilities like data governance, classification, and quality checks across our Microsoft services, which creates a unified foundation for our enterprise-wide metadata management. These unified capabilities make Purview an indispensable tool for us, and for large-scale enterprises.

A photo of Singh.

“With various role types like data curator and data reader, we can add more visibility into our data—where it lives, how it’s being used, and who are its primary owners. Clearly defining these parameters helps us use the data governance framework as a starting point and improve our data governance capabilities.”

Vinny Singh, principal program manager, Global Marketing Engines and Experiences

As early adopters of Purview Unified Catalog, our marketing team launched the Marketing Governance domain, registering more than 200 data products using the Unified Catalog's data map.

The products, spanning various datasets, are aligned with strict internal governance standards. This gives marketing the ability to govern, classify, and track data across its ecosystem—ensuring adherence to GDPR and other regulatory compliance measures.

“With various role types like data curator and data reader, we can add more visibility into our data—where it lives, how it’s being used, and who are its primary owners,” says Vinny Singh, a principal program manager in Global Marketing Engines and Experiences. “Clearly defining these parameters helps us use the data governance framework as a starting point and improve our data governance capabilities.”

Key takeaways

Our journey with Microsoft Purview Unified Catalog has generated key insights that you can apply to your own data governance efforts. These include:

  • Start small: Don’t try to “boil the ocean.” Begin with three to five governance domains and scale from there.
  • Leverage what you have: Data dictionaries, glossaries, and existing documentation provide a strong starting point for a governance platform founded on the Purview Unified Catalog.
  • Focus on value, not enforcement: Governance resonates when teams see how it helps them, not when it’s mandated.
  • Adapt to your organization: Each team at your company will use Purview differently. Flexibility helps encourage adoption.
  • Build community: Data governance is not a solo effort. Collaboration among stakeholders produces stronger standards and better results.

The post Powering data governance at Microsoft with Purview Unified Catalog appeared first on Inside Track Blog.

]]>
22272
Microsoft 365 Copilot for executives: Sharing our deployment and adoption journey at Microsoft http://approjects.co.za/?big=insidetrack/blog/microsoft-365-copilot-for-executives-sharing-our-deployment-and-adoption-journey-at-microsoft/ Thu, 29 Jan 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22017 Deploying Microsoft 365 Copilot: Our guide for leaders Generative AI has captured the world’s attention, and businesses are taking notice. According to our annual Microsoft Work Trends report, 70% of people would delegate as much work as possible to AI to lessen their workloads. Engage with our experts! Customers or Microsoft account team representatives from […]

The post Microsoft 365 Copilot for executives: Sharing our deployment and adoption journey at Microsoft appeared first on Inside Track Blog.

]]>
Deploying Microsoft 365 Copilot: Our guide for leaders

Generative AI has captured the world’s attention, and businesses are taking notice.

According to our annual Microsoft Work Trends report, 70% of people would delegate as much work as possible to AI to lessen their workloads.

Capitalizing on this trend will mean the difference between surging ahead or getting left behind, including here at Microsoft, where we were the first enterprise to fully deploy Microsoft 365 Copilot.

“I’m inspired by the transformative power of AI,” says Andrew Osten, general manager of Business Operations and Programs in Microsoft Digital, the company’s IT organization. “I’ve been impressed with how quickly our employees have put it to work for them.”

He would know. His team is responsible for driving usage and adoption of Copilot and any new features to more than 300,000 employees and vendors across the world.

A photo of Osten.

“Customers are looking to us to share what we’ve learned as the first enterprise to deploy Copilot. Our team has a unique opportunity to help them deploy and get to value as quickly as possible.”

Our mission in Microsoft Digital is to empower, enable, and transform the company’s digital employee experience across devices, applications, and infrastructure. We provide a blueprint for our customers to follow as Customer Zero for the company, and as such, we’ve created this guide for deploying and adopting Microsoft 365 Copilot that’s based on our experience here at Microsoft.

“Customers are looking to us to share what we’ve learned as the first enterprise to deploy Copilot,” Osten says. “Our team has a unique opportunity to help them deploy and get to value as quickly as possible.”

Chapter 1: Getting your governance right

Before you even begin your Microsoft 365 Copilot implementation, you'll want to consider how this tool impacts your data. Copilot uses Large Language Models (LLMs) that interact with data and content across your organization, drawing on information your employees can access to transform user prompts into personalized, relevant, and actionable responses.

Giving your employees this level of access means proper data hygiene is a priority. At Microsoft Digital, we use sensitivity labeling to empower our employees with access while also protecting our data. Microsoft 365 Copilot was designed to respect labels, permissions, and rights management service (RMS) protections that block content extraction on relevant file labels. That ensures private or confidential information stays that way.

This chapter outlines the highly robust, best-case scenario we created for Microsoft, but we know not every organization has a fully deployed data governance strategy. If you’re in that position, don’t worry! You can use Restricted SharePoint Search to provide instant value and protection without exposing Copilot to all of your internal SharePoint sites.

Laying the groundwork with proper labeling

We’ve developed four data labeling practices that make up our foundation for appropriate policies and settings.

Responsible self-service

Enable your employees to create new workspaces like SharePoint sites, ensuring your company data is on your Microsoft 365 tenant. That enables your people to take full advantage of Copilot in ways that align with your organizational data hygiene while you keep your company’s information safe.

Top-down defaults

Label containers for data segmentation by default to ensure your information isn’t overexposed. At Microsoft, we default our container labels to “Confidential\Internal Only.” We use Microsoft Purview to manage this process.

Consistency within containers

Derive file labels from their parent containers. Consistency boosts security and reduces the administrative burden on your employees for labeling every file they create. Copilot will reflect file labels in chat responses so employees know the level of confidentiality of each portion of AI-created responses.
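The inheritance rule above boils down to simple logic: a file picks up its container's label by default, and an explicit label only ever tightens it. Here's a minimal sketch of that logic; the label names and their ranking are illustrative assumptions, not Purview's actual API or label taxonomy.

```python
# Illustrative sketch of container-to-file label inheritance.
# The label names and ordering below are assumptions for this example;
# Purview manages real labels through its own policies and tooling.
from typing import Optional

LABEL_RANK = {
    "Public": 0,
    "General": 1,
    "Confidential\\Internal Only": 2,
    "Highly Confidential": 3,
}

def effective_file_label(container_label: str,
                         explicit_file_label: Optional[str] = None) -> str:
    """Inherit the container's label; accept an explicit label only if
    it is at least as restrictive as the container's."""
    if explicit_file_label is None:
        return container_label
    if LABEL_RANK[explicit_file_label] >= LABEL_RANK[container_label]:
        return explicit_file_label
    return container_label
```

In this model, a file created in a container labeled "Confidential\Internal Only" inherits that label automatically, and an employee can raise, but never lower, the sensitivity.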

Employee awareness

We train our employees to understand how to handle and label sensitive data. By making your employees active participants in your data hygiene strategy, you increase accuracy and improve your security posture.

Self-service with guardrails

The data hygiene practices above form a foundation for compliance and security, but backstopping those efforts through Microsoft 365 features adds an extra layer of protection. Here’s how:

Trust, but verify
Empower self-service with sensitivity labels, but verify by checking against data loss prevention standards, then use auto-labeling and quarantining when necessary. We’ve configured Microsoft Purview Data Loss Prevention to detect and control sensitive content automatically.

Expiry and attestation
Put strong lifecycle management protocols in place that require your employees to attest containers to keep them from expiring. We don’t keep items that don’t have an accountable employee or that might not be necessary for our work.
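The attestation check itself is straightforward date math: a container expires when no accountable employee has vouched for it within the allowed window. This sketch assumes a 180-day window and these field names purely for illustration; they aren't Microsoft's actual policy values.

```python
# Hypothetical sketch of an expiry-and-attestation check. The 180-day
# window and parameter names are assumptions, not the actual policy.
from datetime import date, timedelta

def container_expired(last_attested: date, today: date,
                      max_age: timedelta = timedelta(days=180)) -> bool:
    """A container expires when it hasn't been attested to within
    the allowed window, flagging it for review or removal."""
    return today - last_attested > max_age
```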

Controlling the flow
Limit oversharing at the source by enabling company-shareable links instead of forcing employees to grant access to large groups. To enforce these behaviors, you can set default link types based on labels through Purview.

Oversharing detection
Even under the best circumstances, accidents happen. When one of our employees does overshare sensitive data, we use Microsoft Graph Data Connect extraction in conjunction with Microsoft Purview to catch and report oversharing.

International compliance: No one size fits all

Europe has extra requirements in the form of EU Data Boundary regulations and works councils, organizations that provide employee co-determination on workers’ rights or regulatory issues. Our Microsoft 365 Copilot deployment meant we needed to partner closely with our Microsoft works councils to address complex data and privacy implications.

Your experience will vary depending on your industry and where you operate, but we’ve learned that it’s best to work closely with local subsidiaries to ensure you have a complete picture of a region’s regulatory situation. Local insiders are poised to liaise with works councils or other bodies through direct relationships. Start the process early so you can manage feedback cycles effectively and resolve any concerns through configurations that work for your employees.

Learning from our governance, security, and compliance practices

Bring the right people into the conversation

Don’t keep this conversation in the IT sphere alone. Bring in all the relevant security, legal, and compliance professionals.

Build a foundation for automation

Microsoft Purview Data Loss Prevention has powerful intelligent detection, but it relies on establishing good defaults.

Think about how your employees will use Copilot

Determine the primary use cases. The kinds of collaboration and access employees need will affect your labeling architecture.

Take this opportunity to train employees

If you’ve been looking for an excuse to refresh employee knowledge around data privacy, let this moment be your milestone.

Don’t overwhelm your users

Make labeling easy and intuitive. Give employees a limited set of choices to keep things simple.

Key takeaways

Use these tips to tackle governance, security, and compliance at your company. They're based on what we learned deploying Copilot internally here at Microsoft.

  • Establish a clear labeling framework that defines classification levels, maps labels to the right policies (such as access control, encryption, DLP, and storage rules), sets container defaults, and ensures employees understand how to apply labels correctly.
  • Implement comprehensive data loss prevention controls by configuring Microsoft Purview DLP standards and quarantines, defining lifecycle and attestation processes, and using Microsoft Graph Data Connect to identify and remediate oversharing.
  • Engage globally to meet international compliance needs by partnering with local subsidiaries and works councils, addressing regional requirements and concerns, and determining where segmented or region‑specific deployments are necessary.

Chapter 2: Implementation with intention

At the time of our deployment, we were the first company to roll out Microsoft 365 Copilot and agents at scale, and our implementation team had to choose from several licensing strategies. We've learned from experience that it makes sense to start with pilot groups who can validate the experience and enable the rest of your organization. Here's how that looked for us:

Scaling out your licenses

After you decide on the general shape of your rollout, you can begin building your licensing strategy. In Microsoft Digital, we started with individual licenses at the single-user level. As our implementation scaled, we tied licensing automation to Microsoft 365 groups to implement targeted licensing changes at scale. Those groups could include subsets of employees or entire organizations within Microsoft, and we keyed our automation logic to their expanding and contracting eligibility.

We highly recommend defining a phased rollout strategy and structuring your groups accordingly. That creates accountability and gives your IT admins a crucial point of contact for understanding the licensing needs of different groups within your organization.

There are three primary benefits to using groups:

Optimize licensing costs: Create groups that reflect your business needs and goals and that align with your respective business sponsors. Sync your licensing status changes with your group membership changes. That way, you can assign the right licenses to the right users, adjust easily when you require frequent changes (e.g., in your early validation phase), and avoid paying for licenses you don't need or use.

Reduce admin costs: Group-based licensing enables your admins to assign one or more product licenses to a group. Depending on your rollout strategy and progress, your admins can streamline group setup at scale, reducing your admin overhead, which is helpful considering all the licenses you'll likely need to manage.

Enhance compliance and security: Group-based licensing ensures that only authorized users are licensed and have access to resources, enhancing your security and compliance. Your admins can use audit logs and other Microsoft Entra services to monitor and manage your group-based licensing activities.
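The group-driven approach above comes down to a reconciliation step: compare group membership with current license assignments and compute the delta. This is only a sketch of that logic under assumed data shapes; a real implementation would read membership and apply changes through the Microsoft Graph licensing APIs rather than in-memory sets.

```python
# Sketch of group-driven license reconciliation. The in-memory sets are
# an assumption for illustration; production automation would pull
# membership and assignments from directory and licensing APIs.
def plan_license_changes(group_members: set, licensed_users: set):
    """Return (to_assign, to_remove) so that license assignments
    track group membership exactly."""
    to_assign = group_members - licensed_users   # joined the group
    to_remove = licensed_users - group_members   # left the group
    return to_assign, to_remove
```

Running this on each membership change keeps licenses in lockstep with your groups, which is what makes the cost and compliance benefits above possible.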

Pre-adoption communications

Given the excitement around AI, one of the biggest challenges during our phased implementation was support requests from employees not within our initial pilot groups. Most of our support requests at this stage were essentially asking, “When do I get access?”

You can easily avoid the issue through clear and honest communication. For example, when you alert your initial implementation groups about their Copilot access, you could simultaneously deploy “Coming soon” emails to the rest of your organization. That will help you avoid any confusion while simultaneously generating excitement.

Your IT implementation team can’t work in isolation. Communication, especially with organizational leadership, is a key part of your licensing and implementation strategy.

Learning from our implementation

Design for the “who”

When you determine your initial cohorts, base your decisions on which roles have the largest coverage and will provide the most relevant feedback.

Get your groups in place

Be thoughtful about your Microsoft 365 groups and make sure everyone knows who owns them and who’s responsible.

Engage your support team from the start

This is a new technology, so your support teams will receive requests. Ensure they’re ready by giving them early access.

Manage expectations to minimize blowback

Proactively help users understand why they have licenses or don’t. Note that your rollout strategy might be subject to change.

Bring leadership on board early

Executive sponsorship isn’t just useful for adoption. Leaders will also help you identify the key use cases within their organizations.

Product feedback at every level

Encourage feedback for employees in your early implementation phases because that will guide your wider adoption efforts.

Key takeaways

Use these tips to help you with your internal implementation and admin process. They are based on our experience here at Microsoft.

  • Prepare your organization for Copilot by performing the Microsoft 365 Copilot optimization assessment, defining implementation phases and audience groups, securing leadership sponsorship, and mapping your rollout plan to a clear licensing strategy.
  • Onboard users and activate your environment by assembling the right security groups, building an automated licensing workflow, enabling roles for Copilot reports and dashboards, assigning and configuring licenses, and gathering early signals from pilot usage and feedback.
  • Drive engagement through targeted communication by analyzing in‑app and qualitative pilot feedback, reviewing usage data, and delivering clear, ongoing communications aligned with your adoption strategy.

Chapter 3: Driving adoption to accelerate value

The fact that your employees are excited about trying out Copilot isn’t enough. We found that you need strategic, coordinated change management to drive usage and adoption.

To do this effectively, you will need to empower change agents in your organization. These are not part-time roles; they are dedicated resources across your company who are responsible for the change management function, including creation of a deployment and adoption plan, facilitating principled change management practices, communicating and engaging with employees, preparing employee readiness and learning opportunities, and then measuring the success of your deployment across the enterprise. At a high level, your strategy should consist of the following five steps.

Microsoft 365 Copilot change management

Illustration showing five steps of change management: Planning, strategy, communications, readiness and training, and measurement.
Focusing on change management is key when you deploy Microsoft 365 Copilot.

How we drove adoption in Microsoft Digital

At Microsoft, we broke our company-wide adoption efforts into cohorts, for example, subsidiaries or business groups. Depending on the size of your enterprise, you may benefit from this approach as well. We divided our adoption along two vectors: internal organizations like legal or sales and marketing, and regions like North America or Europe. Different cohorts have different focuses, but the strategy is similar. At Microsoft, we did this in four phases:

Get ready

Effective change management requires careful planning. Begin by identifying and then working with company-wide change management leads. Next, identify members of your target cohorts who will support the adoption, including change managers, leadership sponsors, and employee champions.

Champions will be crucial to your adoption by filling several powerful roles:

  • Pinpointing key usage scenarios for Copilot based on their cohort’s culture or processes.
  • Providing insights that help adoption leaders build out their rollout plans.
  • Most importantly, demonstrating the value of Copilot and showing their peers how powerful this tool can be in their day-to-day work.

When champions socialize their tips and tricks, our experience at Microsoft Digital shows it's best for them to share specific prompts and the value those prompts provided as a concrete entry point for users. For example, a champion could say, "I saved three hours drafting this sales script in Microsoft Word using this prompt," then share their Copilot prompt as a place for peers to start.

Works councils also play a key role at this stage. They offer the benefit of local cultural expertise and can help you identify the challenges employees face in their jurisdiction. Even something as simple as understanding proper modes of address helps smooth the road to adoption through effective communication.

Each of these sets of stakeholders has a role to play in leading your own rollout. We recommend using Microsoft 365 Copilot adoption resources to build out your own adoption plan.

Onboard and engage

At Microsoft, we implemented this phase across each adoption cohort. Because every group will have its own champions and leadership sponsors, it’s important to treat each of them as its own organization, with its own unique adoption needs.

In advance of our general rollout, we created “jump-start” communications with links to learning opportunities:

Localized training took the form of Power Hours in different languages and time zones. These training sessions demonstrated key Copilot scenarios across Microsoft 365 apps.

Self-learn assets included user quick-start guides, demo videos, and Microsoft Viva Learning modules to accommodate different learning styles and preferences.

Pre-rollout communications fulfill two needs. First, this messaging is a great opportunity to launch your champion communities. Second, these communications build your employee population’s desire and excitement for their incoming Copilot licenses, then prepare them to hit the ground running when they get access.

After your Copilot licenses are live, your launch-day welcome comms are straightforward. Invite employees to access Copilot and to start experimenting with how it can fit into their work. There are many possible vectors for deploying these communications, but a multi-pronged effort that includes Microsoft Viva Amplify will deliver the maximum impact.

For support in building out your own communication plan, our adoption team has created a user onboarding kit for Copilot. These ready-to-send emails and community posts can help you onboard and engage your users.

Deliver impact

After everyone has access, it’s time to promote Copilot usage and ensure all employees are having the best possible experience and gaining the most value. For our cohorts, employee champions and leadership sponsors were essential levers.

It’s important to remember that Copilot isn’t just another tool. It introduces a whole new way of working within employees’ trusted apps. At Microsoft, we took great care to encourage employees to adopt a mindset that treats it as part of their daily work, not just something they play with when there’s time.

Microsoft Viva Engage, or a similar employee communication platform, is a helpful forum for peer community support. In our case, it provided an organic space for champions to share their expertise and change managers to provide further recommendations and adoption content. For employees who explore best on their own, Copilot Lab provides in-the-flow learning opportunities to build their prompt skills.

Meanwhile, leadership sponsors diversified our communications strategy by deploying and amplifying messaging through executive channels like org-wide emails or Viva Engage Leadership Corner posts.

Extend and optimize

Understanding overall usage patterns and impact is crucial to optimizing usage. Our Microsoft Digital team used controlled feature rollout (CFR) technology while tracking usage through the Microsoft 365 admin center and the Copilot Dashboard in Viva Insights. Together, these tools gave us the visibility and tracking we needed to establish and communicate adoption patterns.

Meanwhile, IT admins and user experience success managers can access simple in-app feedback through Microsoft 365 admin center. And to really maximize value, our Microsoft Digital employee experience teams conducted listening sessions and satisfaction surveys.

All these insights are helping us establish a virtuous cycle to drive further value and better adoption for future rollouts, extend usage to new and high-value scenarios, incorporate Copilot into business process transformation, and understand custom line-of-business opportunities.

Driving user enablement with Microsoft Viva

Our team in Microsoft Digital used Microsoft Viva to help enable our 300,000-plus global users. Microsoft Viva is an Employee Experience Platform that brings together communication and feedback, analytics, goals, and learning in one unified solution. Our team used Viva across a range of change management scenarios, including building awareness, communicating with our employees, providing access to readiness and learning resources, and measuring the impact of our deployment.

You can see a few of the specific ways we used Viva to accelerate employee adoption below.

Accelerating Microsoft 365 Copilot with Viva

Viva Connections

Sharing key news related to deployment and enablement, generating “buzz,” and tying Copilot to Microsoft culture.

Viva Amplify

Producing and efficiently distributing employee communications to build awareness and excitement.

Viva Learning

Courses and training for our employees on how to maximize value from Copilot, inclusive of building effective prompts.

Viva Engage

Actively engaging employees, providing leader updates, listening to feedback, and enabling our Champs community.

Viva Insights

Using the Microsoft 365 Copilot Dashboard beta to identify actionable insights and usage trends.

Viva Pulse

Instant feedback from employees on their Copilot experience to fine-tune our landing and adoption approach.

Viva Glint

Understanding employee sentiment and gauging the overall effectiveness of our Copilot deployment effort.

Learning from our adoption of Copilot

Cascade adoption efforts through localization

Regional differences, priorities, even time zones—they can all block your centralization efforts. Your insider adoption leaders within each adoption cohort can help.

Empower your employee champions with trust

Monitor your user-led adoption communities at the start to provide support. As this community of power users becomes product experts, they’ll take over.

Empower employees as innovators

You’ll be surprised by what your employees dream up. Provide every opportunity for them to share their favorite tips and usage scenarios.

Create excitement, but set expectations

Encourage a healthy mindset around what Copilot can accomplish and where it fits. Don’t overpromise.

Gamify learning to build engagement and experience

Friendly competitions or cooperative challenges like prompt-a-thons generate excitement and invite creativity.

Understand that for many, AI is emotional

Overcome AI hesitancy by encouraging employees to tackle easy tasks with Copilot assistance. That will help minimize reluctance.

Use Microsoft Viva to accelerate time to value

Viva supports user enablement through learning, effective communication, usage tracking, and employee sentiment.

Key takeaways

Use these tips as your guide as you build out and implement your adoption plan. They are based on our own experience internally at Microsoft.

  • Prepare your organization for adoption by identifying your adoption lead, building a cross-functional cohort-based team, defining personas and key usage scenarios, establishing communication preferences and success metrics, completing enablement training, and creating a localized communications and asset library.
  • Engage your cohorts and activate readiness by deploying targeted onboarding communications, launching champion communities, running live and self-paced learning experiences, and elevating visibility with digital materials that help employees understand how Copilot improves their daily work.
  • Drive measurable impact across cohorts by promoting usage through internal channels, reporting on KPIs at planned intervals, gathering employee sentiment through surveys and listening sessions, spotlighting success stories, applying learnings to refine adoption activities, and nurturing champions through deeper technical training.
  • Extend and optimize your deployment by exploring new high‑value scenarios, identifying opportunities for business process transformation with agents, Copilot Studio, plugins, and connectors, and sourcing custom line‑of‑business use cases that advance your organization’s Copilot maturity.

Chapter 4: Building a foundation for support

Empowering employees means making sure they have access to the right support channels. The fact that Copilot operates across a wide spectrum of Microsoft 365 apps adds complexity to support scenarios. As a result, it’s important to give your support teams early access alongside your earliest pilot implementations.

For us in Microsoft Digital, four principles define high-quality support:

Strategizing for support

Building experience and knowledge is one thing, but coming up with your approach to support requires planning and a strong idea of your users’ ideal experience. At Microsoft Digital, we take a “shift-left” approach. That means we save our human support staff time by attempting to create excellent self-service options for our users.

Shift-left principles can apply to many different support contexts, but with Copilot, we’ve found that the most important upfront action is ensuring your employees have accessible self-service support channels and communicating their availability. Work with your adoption teams to ensure they include self-service support options in their rollout communications.

Seven things we learned prepping to support Microsoft 365 Copilot

Preliminary access

Select your initial support specialists. Include people with different Microsoft 365 app focuses, support tiers, and service audiences.

Communication hub

Establish a community space where your support team can connect and collaborate on issues. Invite non-support professionals as needed.

Knowledge base

Start a collaborative document and add learnings. This will eventually evolve into your knowledge base for internal support.

Widen access

Host information sessions with the wider support team and extend access so all relevant support professionals can ramp up.

Rehearse

Conduct role-playing and shadowing sessions so support teams can build practical knowledge and confidence.

Support go-live

Get your support resources and processes ready and push them live in advance of your Copilot deployment. Consider a dry run.

Track

Determine a tracking cadence and gather data on Copilot issues that arise so support teams can identify trending issues and tickets.

Common questions, issues, and resolutions

We’re getting questions about why particular employees don’t have licenses.

Use employee change management communication waves to address this issue by alerting employees when they'll have access to licenses.

Users are coming to us with questions that would be better served by adoption and employee material, and that isn’t our role as support.

Work with your adoption team to preempt these issues with proactive communications. Update your self-help content and provide your support agents with ready access to different employee education resources.

Teams are looking for integration support. Where do I send them?

Share this list of pre-built connectors to help your users integrate various data sources with Microsoft Graph. The list also notes the types of content each connector supports.

Can employees put confidential information into Copilot?

If employees are signed into Copilot with their Entra ID, they can enter confidential information.

My organization has concerns about who owns the IP that Copilot generates. Does the Microsoft Customer Copyright Commitment apply to Copilot?

Microsoft does not own the IP generated by Copilot. Our universal terms state “Microsoft does not own customers’ output content.”

What’s the best way to verify the accuracy of the information Copilot provides?

Copilot is transparent about where it sources its responses. It provides linked citations in its answers so users can verify the information further.

Key takeaways

Use these tips to manage your Copilot support efforts. They are based on our experience here at Microsoft.

  • Enable and align your support team by starting with a core group of support leaders, establishing shared communication spaces and a collaborative knowledge base, expanding access to the full Copilot support team, training them through information sessions and role‑playing exercises, defining escalation paths, and partnering with internal communications to finalize user‑facing support materials.
  • Deliver meaningful user impact by signaling support availability across employee communities, publishing a clear and accessible user-facing knowledge base, and standing up self-service automations where appropriate to empower users and reduce friction.
  • Optimize and mature your support services by reviewing ongoing support issues and product feedback, and continually refining support workflows to drive efficiency, accuracy, and a better user experience.

Key actions

How we did it at Microsoft

Further guidance for you

Chapter 5: Extending Copilot through agents

As organizations and employees have matured with respect to AI, agentic extensibility is expanding the frontiers of this technology. By using and even creating agents that surface knowledge, take actions, and reinvent workflows, employees can personalize AI’s capabilities to fulfill more specific needs.

What is an agent?

Agents are specialized AI-powered assistants that automate and execute business processes, working alongside or on behalf of a person, team, or organization. They range from simple prompt-and-response agents to more advanced, fully autonomous agents. Through specific instructions, grounding, connectors, APIs, and custom orchestration, creators can tailor agents to more focused workflows than a comprehensive AI solution like Microsoft 365 Copilot.

At Microsoft, our goal has been to provide access and enable agents at appropriate levels for our employees and the company as a whole. To make that happen, we’ve adopted a maturity model for agentic AI deployment. Early phases focus on using Copilot, grounded in enterprise data, to enhance knowledge discovery and retrieval. Later phases will enable our employees to act on that knowledge and even fully automate business workflows.

Agentic AI at Microsoft

Our levels of agentic capability: retrieval agents, action agents, and automation agents.

Each of these levels of agentic capability requires different tools to create and depends on different policies to govern. Because retrieval agents don’t require special tooling, we allow employees to create them at will through Copilot Chat and simplified agent builders in Copilot Studio and SharePoint.

For more complex agents intended to meet enterprise needs across lines of business or the company as a whole, our developers use more full-featured tools like Copilot Studio or Azure AI Foundry. For these kinds of agents, we apply the same rigor, reviews, and software development lifecycle (SDL) we use as part of our standard internal app development.

As you explore the different kinds of agents available to your users and decide how and where to enable them, adoption.microsoft.com provides an excellent place to start. It provides three different approaches to creating agents: Microsoft 365 Copilot Chat, Azure AI Foundry, and Copilot Studio.

All of this choice adds complexity, so maintaining visibility and control over the agents your employees create can be a challenge. As a result, we take a matrixed approach to creating and governing agents based on different parameters. They include the type of agent, how the user creates it, its knowledge sources, the need for custom tooling, sharing and publishing permissions, and more.

Keeping agents safe and effective through good governance

At Microsoft, we incorporated elements of our tenant’s minimum bar for governance into our policies for managing agents. These measures include Microsoft Information Protection, a functional inventory, activity logging, lifecycle management, and the ability to properly isolate agents against crossing data boundaries.

To govern agentic capabilities, we introduced further controls like sharing limits, breadth of knowledge sources, agent metadata, and information about an agent’s behaviors. The result is a proactive approach to governance backstopped by reactive structures that catch any issues.

As you think about governing your own agents, consider the four core principles we’ve established at Microsoft Digital.

We empower employees to create and share simple, low-risk agents

 We provide a safe space and personal flexibility that allows individual employees to experiment without implicating company data or content users don’t own.

We capture and vet sensitive data flows at the enterprise level 

More complex or far-reaching agents owned by teams or lines of business need enterprise documentation to account for external audits or security and privacy validation.

We protect data designated confidential or higher 

We contain data flows to tenant mandates and only trust suitable storage destinations for content.

We honor the enterprise lifecycle 

We treat agents that individual employees own like any other user-created app and delete them when that individual leaves the organization. Agents owned by teams have a lifecycle defined by the tenant and tied to attestation, the SDL, and accountability confirmations.

Once you have your governance policies and procedures in place, you can begin your rollout to users through many of the same strategies and processes we’ve discussed in this guide.

Learning from our experience with agents

Connect with relevant stakeholders

Establish early communication and collaboration with members of your security, legal, compliance, IT, and other teams who can help you define ways to configure Copilot Studio agent builder safely.

Trust and empower

Provide safe spaces with appropriate guardrails for individual employees to experiment with simple agents. Copilot Studio agent builder is a great place to start.

Expand enterprise capabilities

Empower a small number of trusted creators to experiment with more powerful agent-building tools under the close watch of IT, Governance, Security, Privacy, Data, and HR teams. This will reveal gaps in process and policy and inform future reviews.

Solidify labeling and data

Revisit your labeling structures and data flows. It will be important to have these structures in place to support this new agentic environment. Start by learning from our experience governing Copilot at Microsoft.

Extend your review process

Adapt any review processes you already have in place to agents, including security, privacy, and accessibility. Embed those reviews into your publishing workflow for agents operating above the individual level. Consider adding reviews for Responsible AI.

Prevent agent sprawl

Establish a reasonable enterprise lifecycle for agents that includes attestation. That will keep agents from sprawling or remaining in place after employees have left your organization or simply no longer need a particular agent.

Key takeaways

Use these tips to manage your agent extensibility efforts. They are based on our experience here at Microsoft.

  • Plan and refine your governance approach by aligning with Security, Legal, Compliance, HR, and IT; updating existing governance and labeling policies for agents; defining your review process; building a matrix that maps agent capabilities to governance controls; and determining how your SDL procedures apply to agents.
  • Pilot with targeted teams to validate your controls by selecting groups such as Security, HR, and IT; establishing clear feedback and monitoring channels; and iterating on your review and remediation procedures based on insights from early adopters.
  • Enable agents responsibly across the organization by ensuring foundational protections like Purview DLP and Microsoft Information Protection are in place, deploying adoption and change‑management communications, enabling simple agent‑builder capabilities for broad users, and unlocking advanced agent development scenarios for IT and line‑of‑business developers.

Key actions

How we did it at Microsoft

Further guidance for you

Applying our lessons to your own Copilot deployment

Embarking on your Microsoft 365 Copilot deployment and agentic extensibility journey might seem daunting, but by capitalizing on the lessons that Microsoft Digital has learned from our internal deployment, you can both speed up the process and avoid any pitfalls.

A photo of Kerametlian.

“Deploying Copilot internally has inspired us to dive deeper into the power of AI assistance, which is enabling us to enhance our employee experience.”

Stephan Kerametlian, business program management senior director, Microsoft Digital

By anchoring your work in careful planning and making use of the steps and resources provided in this guide, you can unleash a new era of productivity through Copilot.

We’ve learned a lot on our journey with Copilot, and we’re happy that we get to share our experiences with you—hopefully they help you on your journey.

“Deploying Copilot internally has inspired us to dive deeper into the power of AI assistance, which is enabling us to enhance our employee experience,” says Stephan Kerametlian, a business program management senior director in Microsoft Digital.

You’re not in this alone. If you’re looking for support or knowledge on any aspect of your deployment, reach out to our customer success team.

Key takeaways

This guide reflects our learnings and the processes we followed during our internal rollout of Microsoft 365 Copilot. This last set of tips summarizes the major actions you can take to get started with Copilot at your company. 

  • Start with strong governance: Build a clear labeling and data protection strategy before deploying Copilot to safeguard sensitive information and meet compliance needs.
  • Pilot, then scale: Roll out Copilot in phases, beginning with pilot groups to gather feedback and refine your approach before expanding companywide.
  • Communicate early and often: Proactive communication and leadership sponsorship are essential for managing expectations and driving successful adoption.
  • Empower champions: Identify and enable employee champions to share best practices, tips, and real-world scenarios that help others get value from Copilot.
  • Invest in training: Provide tailored learning resources and support to help users build confidence and skills with Copilot in their daily workflows.
  • Measure and optimize: Track usage, collect feedback, and continuously refine your deployment to maximize impact and uncover new opportunities.
  • Plan for support: Set up self-service and human support channels early so employees can get help quickly and keep momentum going.
  • Extend with agents: As your organization matures, explore agentic AI to automate workflows and unlock even greater productivity gains.

Key actions

How we did it at Microsoft

Further guidance for you

Try it out

We’d like to hear from you!

The post Microsoft 365 Copilot for executives: Sharing our deployment and adoption journey at Microsoft appeared first on Inside Track Blog.

Powering our Microsoft 365 Copilot adoption with gamification http://approjects.co.za/?big=insidetrack/blog/powering-our-microsoft-365-copilot-adoption-with-gamification/ Thu, 18 Dec 2025 17:05:00 +0000

The post Powering our Microsoft 365 Copilot adoption with gamification appeared first on Inside Track Blog.

When it comes to powering Microsoft 365 Copilot adoption rates internally here at Microsoft, it’s game on. 

Literally.

We were the first enterprise to fully deploy Copilot in 2024, and now, not two years later, our use of the company’s signature generative AI product is maturing.

That doesn’t mean we’re getting serious—it means we’re having fun!

“Gamification is proving to be one of the most powerful ways to drive the more refined, higher-level use of Copilot that we’re looking for,” says Stephan Kerametlian, a business program management senior director within Microsoft Digital, the company’s IT organization. “When it comes to getting our employees to find more sophisticated and creative ways to use Copilot, we’re finding that having fun is one of our biggest differentiators.”

It all started when we took our employees camping—we didn’t really take them into the woods, but we did so in spirit. 

A photo of Kneip.

“We discovered that introducing a layer of fun transforms Microsoft 365 Copilot training from a routine task into an entertaining learning experience.”

Cadie Kneip, readiness business program manager, Microsoft Digital

We organized ‘Camp Copilot’ to bring our employees together in a fun way so we could show them how they could add Copilot to their daily workflows.

“We did things like have a superhero prompt where you got to show everyone your superpowers, you could create your own Camp Copilot pin, and we even had a scavenger hunt where you could win cool prizes,” says Cadie Kneip, a readiness business program manager with Microsoft Digital and the creative force behind many of our employee engagement-based Copilot adoption efforts.  

It was a lot of fun, and it worked—Copilot usage by attendees spiked afterwards.

“We discovered that introducing a layer of fun transforms Copilot training from a routine task into an entertaining learning experience,” Kneip says.

Under the guidance of Kneip—our CEO of fun—and others on our team, we created a companywide Copilot Expo, where we came together to learn how to get more out of Copilot (and where a reasonable amount of fun was had).

A picture of Bliefernicht.

“Working efficiently and consistently with Copilot and AI requires ongoing learning, especially as capabilities are continuously evolving. Gamification offers an excellent way to keep colleagues engaged—helping them learn effortlessly while having fun.”

Kirsten Bliefernicht, senior business program manager, Microsoft Digital

This three-week immersive program offered 80 role-based learning sessions to fast-track Microsoft 365 Copilot adoption. We made gamification a major theme, which made mastering Copilot feel less like work and more like play.

This time, the uptick in adoption and user satisfaction that followed was companywide.

“Working efficiently and consistently with Copilot and AI requires ongoing learning, especially as capabilities are continuously evolving,” says Kirsten Bliefernicht, a senior business program manager in Microsoft Digital. “Gamification offers an excellent way to keep colleagues engaged—helping them learn effortlessly while having fun.”

Gamification for locked-in learning

Copilot Expo attendee Ramita Singh experienced the transformative effect of gamification. A senior program manager within the Microsoft Datacenter Supply Strategy and Planning team, she’s also a Copilot Champion and a regular Copilot user.

A photo of Singh.

“The sessions and fun activities, like building my own avatar, inspired me. Since then, I ramped up my Copilot use and my productivity has skyrocketed.”

Ramita Singh, senior program manager, Datacenter Supply Strategy and Planning

Copilot Champions are early adopters and AI enthusiasts who help Microsoft peers learn and use AI tools like Microsoft 365 Copilot. For her part, Singh joined the Copilot Champs program to gain efficiency and become more productive.

Even as a regular Copilot user, Singh described her use as limited—until Copilot Expo.

“The sessions and fun activities, like building my own avatar, inspired me,” she says. “Since then, I ramped up my Copilot use and my productivity has skyrocketed.”

Levelling up fun supports engagement

Although the term “gamification” is relatively new, the practice of including game-design elements in training to increase engagement and motivation and to reward behavior has been around for centuries.

Research has shown that incorporating fun into training leads to significant gains in engagement and productivity.

A photo of Takayama.

“Seeing Copilot create an image or solve a task is exciting. That can motivate someone to learn more about AI.”

Kaz Takayama, business program manager, Microsoft Digital

Using gamification at Copilot Expo redefined the way people learn. We designed games and reward-based challenges. We awarded points and badges to build interest and increase Copilot use at Microsoft. We created a leaderboard to show the standings and add a competitive edge to the learning.

Participants were represented on the leaderboard by the avatars they created during one of the games. This anonymity meant that competitors only recognized their own avatars and position on the leaderboard.

“People are getting tired of mandatory trainings,” says Kaz Takayama, a business program manager within Microsoft Digital in Japan. “Seeing Copilot create an image or solve a task is exciting. That can motivate someone to learn more about AI.”

Copilot Expo featured diverse content designed to appeal to a variety of people, roles, and workflows. Learning sessions were capped at 30 minutes. Sessions were scheduled at accessible times and targeted to specific roles.

A photo of Bu.

“The result is interactive—you see visuals, you hear music, and the creativity surprises you.”

Ju Bu, business program manager, Microsoft Digital

That personalization allowed engineers, communicators, and salespeople, for example, to learn role-specific uses for Copilot, making the learning even more effective.

During Copilot Expo, games were scheduled prior to training sessions to build engagement. Additional gamified activities followed training to reinforce key concepts and encourage the application of Copilot. Attendees earned points for every Copilot-related task they completed.

“We picked these activities because they only require prompts, and people can practice prompts every day,” says Ju Bu, business program manager for Microsoft Digital in Greater China. “The result is interactive—you see visuals, you hear music, and the creativity surprises you.”

From prompts to play: Gamified activities at Copilot Expo

Activity 1: Practice crafting prompts in Copilot to generate polished images and avatars.

Use the following tips:

  • Use clear, descriptive language in prompts.
  • Specify style, mood, or format (for example, “cartoon avatar” or “introspective and cinematic” versus something vague, like “make something moody”).
  • Experiment with variations to compare and refine results.
  • Keep prompts concise but detailed enough to convey your intent.

Activity 2: Use a third-party service built on Azure technology to create songs in any chosen musical genre.

Follow these tips:

  • Select a genre that matches the mood you want.
  • Provide clear input (lyrics, themes, or tone).
  • Adjust tempo and instrumentation for variety.
  • Share outputs for group feedback and fun.

Activity 3: Use Copilot to build quick quizzes that reinforce new information.

One popular format is “Two truths and a lie.” Here are the guidelines:

  • Keep statements short and focused on Copilot features.
  • Mix one false statement with two accurate ones.
  • Use real examples to strengthen recall.
  • Encourage discussion after revealing the correct answers.

How it worked: Organizers asked Copilot to create a “Two truths and a lie” quiz by setting parameters (topic = Copilot functionality, number of statements = 3, difficulty = easy). Copilot produced the statements, and participants guessed which were true and which was false. For example:

  • Copilot can generate meeting minutes → True
  • Copilot can change the meeting organizer → False. Copilot cannot alter calendar details like who the organizer is.
  • Copilot can provide real-time transcription → True
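The quiz mechanics above are simple enough to sketch in a few lines. This is a hypothetical illustration in plain Python of how a round and its scoring might be represented; the data mirrors the example statements, and the structure and names are invented for illustration, not part of any Copilot API.

```python
# Hypothetical sketch of a "Two truths and a lie" round and its scoring.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    is_true: bool
    note: str = ""  # optional explanation revealed after guessing

# The example round from above: two truths, one lie.
quiz = [
    Statement("Copilot can generate meeting minutes", True),
    Statement("Copilot can change the meeting organizer", False,
              "Copilot cannot alter calendar details like the organizer."),
    Statement("Copilot can provide real-time transcription", True),
]

def guess_is_correct(statements: list[Statement], lie_index: int) -> bool:
    """A guess is correct when the chosen statement is the false one."""
    return not statements[lie_index].is_true
```

A facilitator reveals each statement's `note` after participants guess, which supports the "encourage discussion after revealing the correct answers" guideline.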

The power of friendly competition

According to Tomás Rogeiro Brochado de Miranda, a cloud solution architect at Microsoft based out of Portugal, adding an element of competition is a key ingredient for learning.

“Everyone likes a challenge,” he says. “You might hear someone say they don’t like games, but you’ll never hear someone say, ‘I love to lose.’”

Singh agrees.

“For a lot of people, learning feels forced when it’s required,” she says. “But when you add a bit of fun, like a competition, it generates more interest in learning new concepts.”

As organizations race to keep pace with AI, Kerametlian reminds us that learning paths and transformation aren’t one-size-fits-all.

“People learn and grow in different ways,” he says. “Gamification is one of the few powerful tools that other organizations should consider leveraging to maximize productivity and the value they get from Copilot.”

Research shows that gamification not only reinforces habit-building but also boosts positive sentiment about a product—two critical factors for driving Copilot adoption.

We would like our people to use and reuse Copilot, and gamification is helping us make that happen.

We’re also creating a fresh experience for those who’ve stepped away from Copilot. As Kneip puts it, “If someone has a bad AI experience, they won’t return—unless they see a peer succeed.”

When respected colleagues share their wins, it sparks curiosity and people give Copilot another shot.

“After they use AI in ways that matter to them, they often become champions,” Kneip says. “We see that type of turnaround every day.”

Lasting impacts

Months after Copilot Expo wrapped, the momentum hasn’t faded. Many participants are still completing Copilot-related tasks and logging points on the leaderboard—proof that competition continues to fuel engagement.

Copilot adoption at Microsoft has surged, and positive sentiment has increased.

“That’s the opportunity our customers have with AI adoption,” Kneip says. “If you give your organization something that’s relevant, peer driven, and real, they’re going to have a much better experience.”

Post-event, Microsoft engineers have turned the Copilot Expo leaderboard into a template that can be adapted and used by internal teams for their own gamified activities.

A photo of Kerametlian.

“First you need to give people access to Copilot, and then it’s about robust change management complemented by gamification, which significantly accelerates adoption and value.”

Stephan Kerametlian, business program management senior director, Microsoft Digital

Gamification activities continue building excitement around Copilot and AI and what’s possible.

The last two years at Microsoft Digital have been about increasing Microsoft 365 Copilot user engagement, adoption, productivity, and value.

“When it comes to enabling AI transformation, engaging your employees is everything,” Kerametlian says. “First you need to give people access to Copilot, and then it’s about robust change management complemented by gamification, which significantly accelerates adoption and value. The result? Usage grows, enthusiasm soars, and productivity follows.”

Key takeaways

Here are some tips for using gamification to energize AI adoption at your organization:

  • Drive lasting engagement: Gamified activities ignite fun, excitement, and learning that continues well beyond an event.
  • Offer experiences: Creative training methods like friendly competition and interactive workshops significantly enhance employee engagement. Engaged employees are more likely to embrace AI tools.
  • Foster innovation: Encouraging creative thinking empowers employees to explore AI applications, enhancing their problem-solving capabilities and increasing their productivity.
  • Build trust and skills: Peer-led training leverages existing knowledge within teams, making it easier for employees to learn from each other about AI tools.
  • Encourage experimentation: A risk-free environment allows employees to experiment with AI tools without fear of failure, which is vital for discovering practical applications.

Try it out

Deploying Microsoft Agent 365: How we’re extending our infrastructure to manage agents at Microsoft http://approjects.co.za/?big=insidetrack/blog/deploying-microsoft-agent-365-how-were-extending-our-infrastructure-to-manage-agents-at-microsoft/ Fri, 21 Nov 2025 16:34:47 +0000

The post Deploying Microsoft Agent 365: How we’re extending our infrastructure to manage agents at Microsoft appeared first on Inside Track Blog.

The number and sophistication of agents that our employees are building here at Microsoft is growing rapidly.

To help us and all enterprises respond to this new opportunity, the company just announced Microsoft Agent 365 at Microsoft Ignite. This product serves as the control plane for AI agents—a new evolution of the existing systems that organizations like ours use to manage people and apps.

A photo of Johnson.

“We’re empowering our employees and teams to build agents with guardrails. We have governance structures in place to ensure our internal agents are useful, safe, and properly scoped.”

David Johnson, principal program manager architect, Microsoft Digital

Our team—Microsoft Digital, the company’s IT organization—is now using Agent 365 to track agents that employees and teams from across the company are building and deploying. We’re also using it to access the dashboards that allow us to manage and govern agents companywide. We plan to use the new platform to comprehensively manage our agent workload.

Agent 365 will enable Microsoft Digital to help our employees, teams, and organizations to build and deploy agents safely and effectively, according to David Johnson, principal program manager architect for governance for the organization.

“We’re empowering our employees and teams to build agents with guardrails,” says Johnson, who notes that we have more than 100,000 agents on the Microsoft tenant today. “We have governance structures in place to ensure our internal agents are useful, safe, and properly scoped.”

Agent 365 is the control plane for AI agents and will play a key role in accelerating our journey toward becoming an AI-powered Frontier Firm. Whether your agents are created with Microsoft platforms, open-source frameworks, or third-party tools, Agent 365 helps you deploy, organize, and govern them securely.

“Agent 365 delivers unified observability across your entire agent fleet through telemetry, dashboards, and alerts,” says Charles Lamanna, president of Business Apps & Agents for Microsoft. “IT leaders can track every agent being used, built, or brought into the organization, eliminating blind spots and reducing risk.”

Here in Microsoft Digital, we’re planning to use Agent 365 for multiple purposes, including:

  • Filtering our agent inventory on specific criteria, such as the type of agent or how it was built
  • Enhancing governance-specific actions we can take with agents in areas like ownership and quarantining
  • Gaining visibility into trends like agent usage
  • Ingesting agent blueprints and defining policy templates
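To make the first bullet concrete, here is a conceptual sketch of filtering an agent inventory on criteria like agent type and build tool. This is plain Python over a toy inventory list, purely to illustrate the idea; it is not the Agent 365 API, and the field names are invented.

```python
# Toy agent inventory; fields are illustrative, not an Agent 365 schema.
inventory = [
    {"name": "faq-bot", "type": "retrieval", "built_with": "Copilot Studio"},
    {"name": "ticket-closer", "type": "action", "built_with": "Azure AI Foundry"},
    {"name": "report-runner", "type": "automation", "built_with": "Copilot Studio"},
]

def filter_agents(agents, **criteria):
    """Return agents whose fields match every given criterion."""
    return [a for a in agents
            if all(a.get(key) == value for key, value in criteria.items())]

# e.g. all agents built with a particular tool
studio_agents = filter_agents(inventory, built_with="Copilot Studio")
```

The same pattern extends to combined criteria, such as filtering on both agent type and knowledge source.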

If you’re unfamiliar with the term, an agent blueprint is a portable specification for an AI agent’s identity, capabilities, constraints, data access, and lifecycle.
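As a rough illustration of that definition, a blueprint might carry fields like the following. All field names and values here are hypothetical examples chosen to match the five areas named above; they do not reflect the actual Agent 365 blueprint schema.

```python
import json

# Hypothetical agent blueprint covering identity, capabilities,
# constraints, data access, and lifecycle (illustrative fields only).
blueprint = {
    "identity": {"name": "expense-summary-agent", "owner": "finance-team"},
    "capabilities": ["retrieval"],  # e.g. retrieval, action, automation
    "constraints": {"max_sensitivity_label": "Confidential"},
    "data_access": ["SharePoint:FinanceReports"],
    "lifecycle": {"attestation_required": True, "review_interval_days": 180},
}

# "Portable" implies the specification can be serialized and
# exchanged between systems without loss.
serialized = json.dumps(blueprint, indent=2)
restored = json.loads(serialized)
```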

Agent 365 is part of our Frontier Firm organizational blueprint, which we’re using to blend machine intelligence with human judgment to create agents that are AI-operated but human-led.

Boosting governance with Agent 365

Agent 365 maximizes the value of agents while minimizing tenant risk. These are capabilities that play well with the data governance foundation that we’ve already laid here in Microsoft Digital, in which we use data sensitivity labels and data loss prevention controls to govern the data that agents use in our environment.

We incorporated elements of our tenant’s minimum bar for governance into how we secure agents. Those include Microsoft Purview Information Protection, a functional inventory, activity logging, lifecycle management, and the ability to properly isolate agents against crossing data boundaries.

Our intention is always to act as proactively as possible while putting reactive structures in place to catch any issues that arise. After all, this is a new technology, and there are bound to be some surprises. By combining all of these elements, we’ve landed on six core principles for governing agents:

  1. We built a data hygiene foundation: This enables you to trust the data estates with which employees build and use agents.
  2. We empower employees to create and share simple, low-risk agents: We provide a safe space and personal flexibility that allows individual employees to experiment, without implicating company data or content that users don’t own.
  3. We capture and vet sensitive data flows at the enterprise level: More complex or far-reaching agents owned by teams or lines of business need enterprise documentation to account for external audits or security and privacy validation.
  4. We protect data designated confidential or higher: We contain data flows to tenant mandates and only trust suitable storage destinations for content. This depends on the ability to gate which connectors can work with which particular source data and sensitivity labels.
  5. We enable internal teams and organizations with a smooth path to develop agents: This provides them with all of the services and sources they need along a path to release to the company.
  6. We honor the enterprise lifecycle: Both user-based and attestation-based lifecycles come into play. We treat agents that individual users own like any other user app, and delete them when the employee leaves the organization. Agents owned by teams have a lifecycle defined by the tenant and tied to attestation, the software development lifecycle, and accountability confirmations.
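
The gating described in principle 4 boils down to a simple rule: a connector may touch data only if it's trusted for that data's sensitivity label. Here's a minimal sketch of that logic; the label ranks and connector rules are hypothetical illustrations, not Microsoft Purview APIs:

```python
# Hypothetical label hierarchy, ordered from least to most sensitive.
LABEL_RANK = {"public": 0, "general": 1, "confidential": 2, "highly-confidential": 3}

# Each connector declares the highest label it is trusted to handle (illustrative).
CONNECTOR_MAX_LABEL = {
    "internal-sharepoint": "highly-confidential",
    "external-webhook": "general",
}

def connector_allowed(connector: str, data_label: str) -> bool:
    """Allow a connector only if it is trusted for the data's sensitivity label."""
    max_label = CONNECTOR_MAX_LABEL.get(connector)
    if max_label is None:
        return False  # unknown connectors are denied by default
    return LABEL_RANK[data_label] <= LABEL_RANK[max_label]

print(connector_allowed("external-webhook", "confidential"))    # False
print(connector_allowed("internal-sharepoint", "confidential")) # True
```

Denying unknown connectors by default mirrors the proactive posture described above: the safe path is the default, and exceptions are deliberate.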
A photo of Lamanna.

“We want and need feedback from our own IT team. It will help ensure all our customers are able to move quickly to deploy the platform with speed and safety.”

Charles Lamanna, president, Business Apps & Agents

Customer Zero for Agent 365

In our role as Customer Zero for Microsoft, our team in Microsoft Digital shares our insights on Agent 365 and our suite of agentic AI products with Lamanna and the product team. This makes the products more effective for our customers.

“We want and need feedback from our own IT team,” Lamanna says. “It will help ensure all our customers are able to move quickly to deploy the platform with speed and safety.”

While it’s still early days for Agent 365, the potential for transformative impact is significant.

“I meet with many of our top enterprise customers, and some of their primary questions are around how Microsoft manages agents to prevent sprawl, allows agent enablement against company data, and governs those agents,” Johnson says. “Agent 365 gives us a powerful new tool to manage our agentic estate, ensuring that our agents are delivering the transformative impact we expect while also enabling us to manage and secure our environment more effectively. Enabling self-service agent creation at scale necessitates enterprise observability and governance.” 

We’re excited to share more about our Customer Zero journey with Agent 365 on Inside Track soon.

Key takeaways

Here are five ways you can use Agent 365 to unlock agent observability and management at your company:

  • Registry: Get the complete view of all agents in your organization, including agents with agent ID, agents you register yourself, and shadow agents.
  • Access control: Bring agents under management and limit their access to only the resources they need. Prevent agents from being compromised with risk-based conditional access policies.
  • Visualization: Explore connections between agents, people, and data, and monitor agent behavior and performance in real time to assess their impact on your organization.
  • Interoperability: Equip any agent with apps and data to simplify human-agent workflows. Connect them to Work IQ to provide work context and onboard them into business processes.
  • Security: Protect agents from threats and vulnerabilities, and detect, investigate, and remediate attacks that target agents. Protect data that agents create and use from oversharing, leaks, and risky agent behavior.  

The post Deploying Microsoft Agent 365: How we’re extending our infrastructure to manage agents at Microsoft appeared first on Inside Track Blog.

Hardening our digital defenses with Microsoft Baseline Security Mode http://approjects.co.za/?big=insidetrack/blog/hardening-our-digital-defenses-with-microsoft-baseline-security-mode/ Tue, 18 Nov 2025 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20811 Security isn’t just a feature—it’s a foundation. As threats grow more varied, widespread, and sophisticated, enterprises need to rethink how they protect their environments. That’s why we, in Microsoft Digital, the company’s IT organization, took a necessary step forward and deployed Microsoft Baseline Security Mode internally across the company. Engage with our experts! Customers or […]

The post Hardening our digital defenses with Microsoft Baseline Security Mode appeared first on Inside Track Blog.

Security isn’t just a feature—it’s a foundation.

As threats grow more varied, widespread, and sophisticated, enterprises need to rethink how they protect their environments. That’s why we, in Microsoft Digital, the company’s IT organization, took a necessary step forward and deployed Microsoft Baseline Security Mode internally across the company.

Baseline Security Mode is a new approach to endpoint protection that enforces secure-by-default configurations across our enterprise. And it’s not just about locking things down—it’s about doing so in a way that’s scalable, manageable, and respectful of user experience.

This is a story for every organization trying to balance usability with security. Baseline Security Mode is designed to help IT teams enforce protections without breaking productivity. It’s a shift toward proactive defense with standardized secure settings.

Understanding the need for Microsoft Baseline Security Mode

Security must evolve with the environment.

At Microsoft Digital, we’ve built a strong foundation of endpoint protection over the years. But as our ecosystem expanded—more devices, more workloads, more diverse user needs—we saw an opportunity to take our security posture to the next level.

Our existing configurations were effective, but they reflected the natural complexity of a large enterprise. Different teams had different requirements. Some relied on legacy technologies that had served them well. Others needed flexibility to support specialized workflows. Over time, this led to variation in how security policies were applied.

We wanted to unify that approach.

Baseline Security Mode emerged as a way to streamline and strengthen our defenses. It was about building on what worked. We started by identifying areas where legacy protocols and configurations could be modernized. That included technologies like ActiveX controls and older authentication flows, which we carefully evaluated and phased out where appropriate.

We also improved how we gather and use telemetry. Initially, we had limited visibility into how certain features were used. That made it harder to predict the impact of changes. So, we ran pilots, collected feedback, and refined our approach. Baseline Security Mode was a game changer here, providing built-in reports that gave us the visibility we needed to observe the impact of applying settings in our environment. For example, when we reviewed blocking legacy file formats, we discovered that some workflows depended on them. We responded quickly, offering alternatives and guiding users through the transition.

Ease of use was a priority.

We built intuitive controls into the Microsoft 365 admin center, allowing IT admins to manage policies with just a few clicks. No more manual scripts. No more guesswork. We also introduced exception handling to support specialized needs, ensuring that security didn’t come at the cost of productivity.

We worked closely with internal stakeholders, including compliance teams and work councils, to validate every step and build trust. We made sure the experience was smooth, the tools were reliable, and the changes were clearly communicated.

This wasn’t just a technical upgrade—it was a cultural shift.

Baseline Security Mode gave us a way to unify our security posture while honoring the diversity of our environment. It’s a smarter, more scalable way to protect our endpoints, and it reflects everything we’ve learned from years of experience.

Putting consistent security configuration into practice

Baseline Security Mode establishes a new standard, enabling organizations to be secure by default.

It is the result of a collaborative effort by multiple product teams at Microsoft, building on their security and incident-handling expertise. It's designed to simplify and strengthen endpoint protection across Windows and Microsoft 365. The feature lives in the Microsoft 365 admin center, where IT admins can enforce modern security policies with just a few clicks.

“When we blocked certain file formats, users were confused by the error messages and thought they were blocked from saving the file. So, we ran pilots, gathered feedback, and helped the product team build an improved error experience to save blocked formats to safe, newer formats.”

Harshitha Digumarthi, senior product manager, Microsoft Digital

The product teams delivered 22 features across five workloads: Office, OneDrive and SharePoint, Teams, Substrate, and Identity. Each one targets a specific risk—blocking legacy authentication, disabling insecure protocols, restricting ActiveX, and more.

When we deployed Baseline Security Mode as Customer Zero at Microsoft Digital, our job was to validate these features and controls in real-world enterprise conditions.

We pushed for exception handling.

Some users still relied on legacy formats or protocols. Certain teams, for example, needed access to older Office features. So, we worked with the product team to ensure exceptions could be built into the UI.

That flexibility was key. We knew from experience that without it, customers might hesitate to adopt the feature.

“When we blocked certain file formats, users were confused by the error messages and thought they were blocked from saving the file,” says Harshitha Digumarthi, a senior product manager at Microsoft Digital. “So, we ran pilots, gathered feedback, and helped the product team build an improved error experience to save blocked formats to safe, newer formats.”

We also pushed for better telemetry.

A photo of Gonis.

“When we heard about Baseline Security Mode, it was still in ideation. There were no tools in the Microsoft 365 admin center yet. We had to figure out how to enable this internally while the product team built the capabilities in parallel.”

Markus Gonis, senior service engineer, Microsoft Digital

At first, we had only a few days of data. That wasn’t enough to understand how features were used or what impact they would have. So we worked with the product team to expand telemetry, improve error reporting, and reduce false positives, including identifying bugs that skewed metrics and made troubleshooting harder.

We ran the deployment through our Tenant Trust Program and work council reviews to ensure global compliance. That gave us—and our customers—confidence.

Baseline Security Mode isn’t just a feature. It’s a shift in how we think about security, and we’re proud to have helped shape it.

Deploying Baseline Security Mode at Microsoft Digital

Rolling out Baseline Security Mode wasn’t just a technical exercise—it was a cross-team effort that demanded precision, patience, and partnership.

Microsoft Digital took the lead on deployment. We acted as Customer Zero, testing every feature in real-world conditions before it reached customers. That meant working closely with the product team to validate functionality, identify bugs, and shape the user experience.

“When we heard about Baseline Security Mode, it was still in ideation,” Gonis says. “There were no tools in the Microsoft 365 admin center yet. We had to figure out how to enable this internally while the product team built the capabilities in parallel.”

Telemetry was limited. We had only 30 days of data to work with. That made it hard to predict how changes would affect users, so we ran pilots with internal user acceptance testing cohorts and we deployed in phases.

A photo of Philpott.

“It was a great Customer Zero experience. Our security teams stood to benefit from Baseline Security Mode features, and we helped the product team find bugs and the issues that just hadn’t come up in early testing or at a large scale. It was a win-win situation.”

John Philpott, principal product manager at Microsoft Digital

For some legacy protocols, usage was low. In these cases, the features being deployed made removing these protocols seamless. Where usage was higher or unclear, a more detailed approach was required.

First, a few thousand users. Then 50,000. Then 100,000. Eventually, the entire Microsoft tenant. We paused between each wave to monitor help desk tickets, gather feedback, and confirm that our mitigation strategies were working.
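
That wave pattern can be sketched as a simple loop: deploy to an expanding cohort, pause to check health signals, and only then expand. The wave sizes below mirror the ones above; the health check is a hypothetical stand-in for the help desk ticket and feedback review:

```python
# Expanding rollout waves; None means the entire tenant.
WAVES = [5_000, 50_000, 100_000, None]

def run_rollout(total_users: int, healthy) -> list:
    """Deploy in waves, pausing after each to confirm health before expanding."""
    deployed = []
    for wave in WAVES:
        target = total_users if wave is None else min(wave, total_users)
        deployed.append(target)
        if not healthy(target):   # pause: monitor tickets and feedback
            break                 # halt expansion until issues are resolved
    return deployed

# With a healthy signal at every wave, rollout reaches the whole tenant.
print(run_rollout(220_000, healthy=lambda n: True))  # [5000, 50000, 100000, 220000]
```

The key property is the gate between waves: a bad signal stops expansion early, limiting the blast radius of any mitigation that isn't working.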

Communication was critical.

We ran targeted campaigns, sent individual emails, and published technical reports explaining what was changing, why it mattered, and how users could adapt. We even used Viva Engage to notify users directly. It was important to tell users why longstanding functionality was being removed, what we were doing, and how to mitigate any impact.

We did a lot of work with the product team to ensure the user experience and the IT pro experience both exceeded expectations.

“It was a great Customer Zero experience,” says John Philpott, principal product manager within Microsoft Digital. “Our security teams stood to benefit from Baseline Security Mode features, and we helped the product team find bugs and the issues that just hadn’t come up in early testing or at a large scale. It was a win-win situation.”

We flagged inconsistencies in policy syntax, pushed for better error handling, and worked with the product team to align deployment tools across workloads.

But we didn’t stop at deployment. We tracked progress, validated telemetry, and signed off on each feature before it moved into broader rollout. We even helped pave the way for the next iterations, identifying features that needed more design work or deeper telemetry before they could be deployed.

This was a true partnership. The product team built the features. We tested them, validated them, and helped make them better.

Baseline Security Mode is now live across Microsoft. And it’s ready for the world.

Capturing real benefits

Baseline Security Mode is more than a set of policies—it’s a platform for proactive defense.

The product team built it to reduce legacy risks and enforce modern security standards across Microsoft 365 workloads. Microsoft Digital validated it in production, surfacing bugs, shaping telemetry, and confirming that the features worked as intended.

We tested 22 features across Office, OneDrive & SharePoint, Substrate, Identity, and Teams. Each one targeted a specific vulnerability—like blocking ActiveX controls, disabling Exchange Web Services, or enforcing phishing-resistant authentication for admins.

We flagged critical ActiveX dependencies in third-party apps—something the product group hadn’t found—which enabled them to initiate removal. That kind of early detection helped fix issues before the features reached customers.

We found regressions in PowerShell and legacy authentication flows. The OneDrive and SharePoint team caught a high-impact bug and worked with the product team to resolve it.

That validation mattered.

We also helped shape the admin experience.

Exception handling was built into the UI. Admins could create security groups, assign users, and manage exclusions directly in the Microsoft 365 admin center.

“There’s no need to handle everything manually,” Philpott says. “Simply click here and then here to disable. It’s a much simpler process.”

Extending benefits to Microsoft customers

Baseline Security Mode is ready for enterprise.

We’ve tested it. We’ve hardened it. And we’ve made it easier to adopt.

Microsoft Digital’s deployment journey helped shape the product into something customers can trust. We didn’t just validate features—we made sure they worked in real-world environments, across diverse teams, and under the pressure of scale.

The product team designed the features to be enterprise-ready. We ran them through our Tenant Trust Program and work council reviews to ensure compliance across global regions. That gave us confidence—and gave customers confidence too.

The benefits are clear. We’ve reduced our attack surface. We’ve improved compliance. We’ve made it easier for IT teams to enforce security without disrupting workflows. And we’ve laid the groundwork for secure-by-default computing across Microsoft.

Customers can do the same.

Start small. Run pilots. Monitor impact. Use the tools in the Microsoft 365 admin center to deploy policies, manage exceptions, and guide users through the change. And don’t be afraid to ask for help—our journey has shown that collaboration between deployment teams and product teams makes all the difference.

Baseline Security Mode is ready, and we’re ready to help others adopt it.

Looking ahead

The first wave of Baseline Security Mode—BSM 2025—delivered 22 features across five major workloads. Microsoft Digital helped validate and deploy those features across the enterprise. And the next wave of features is already in motion.

And it’s bigger, with 46 features, more than double what we had in the first round. The product team is expanding coverage to include deeper protocol restrictions, broader app controls, and more granular authentication policies.

We’re also preparing for broader industry adoption.  

Governments, regulators, and enterprise customers are asking for secure-by-default configurations. Baseline Security Mode is our answer. And the next version will make it even easier to adopt.

We’ll continue to lead as Customer Zero. We’ll test new features, validate insights surfaced by telemetry, and share feedback with the product team. We’ll run pilots, monitor impact, and guide users through the change. And we’ll keep pushing for simplicity, scalability, and trust.

Because security isn’t a one-time project—it’s a mindset, and it’s Microsoft’s highest priority.

Key takeaways

Ready to adopt Baseline Security Mode? Here are some actions we recommend based on our deployment experience:

  • Start with a pilot: Test Baseline Security Mode with a small group of users to identify legacy dependencies and gather feedback before scaling.
  • Use the Microsoft 365 admin center for deployment: Apply policies and manage exceptions directly through the UI—no scripting required.
  • Identify and plan for exceptions early: Work with business units to understand where legacy formats or protocols are still needed and create security groups for exclusions.
  • Communicate proactively with users: Launch campaigns to explain upcoming changes, their impact, and how users can adapt.
  • Validate telemetry and error reporting: Ensure your environment captures enough data to monitor the impact of new policies and troubleshoot effectively.
  • Engage your compliance and governance stakeholders: Review new policies with internal governance teams to ensure alignment with organizational and regional standards.
  • Treat security as an ongoing journey: Continue to monitor, iterate, and evolve your security posture as new threats and features emerge.
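
The exception-handling recommendation above can be pictured as a small allow/deny check: a baseline policy applies to everyone except members of a designated exclusion group. The group name and membership store here are hypothetical; in practice you would manage these as security groups in the Microsoft 365 admin center:

```python
# Hypothetical exclusion groups; in practice, security groups in the admin center.
EXCLUSION_GROUPS = {"legacy-format-exceptions": {"alice@contoso.com"}}

def policy_applies(user: str, group: str = "legacy-format-exceptions") -> bool:
    """A baseline policy applies unless the user belongs to the exclusion group."""
    return user not in EXCLUSION_GROUPS.get(group, set())

print(policy_applies("alice@contoso.com"))  # False (excluded)
print(policy_applies("bob@contoso.com"))    # True
```

Keeping exclusions in named groups, rather than in scripts, is what makes the exceptions auditable and easy to retire as teams modernize.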

Enterprise AI maturity in five steps: Our guide for IT leaders http://approjects.co.za/?big=insidetrack/blog/enterprise-ai-maturity-in-five-steps-our-guide-for-it-leaders/ Thu, 09 Oct 2025 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20387 Charting a course through today’s digital landscape means navigating the transformative potential of AI—a technology redefining how organizations innovate and adapt. For leaders seeking to turn the promise of AI into action, the journey begins with clarity of purpose and a framework for progress. At Microsoft Digital, the company’s IT organization, we’ve been on the […]

The post Enterprise AI maturity in five steps: Our guide for IT leaders appeared first on Inside Track Blog.

Charting a course through today’s digital landscape means navigating the transformative potential of AI—a technology redefining how organizations innovate and adapt. For leaders seeking to turn the promise of AI into action, the journey begins with clarity of purpose and a framework for progress.

At Microsoft Digital, the company’s IT organization, we’ve been on the front lines of this AI-powered revolution, translating vision into reality and reimagining what’s possible for the enterprise.

A photo of Fielder

“We’ve learned so many lessons over the past few years building AI-powered solutions and driving an AI-forward culture. We’re excited to share them with our customers and partners so they can learn from our journey.”

As generative AI leapt into the mainstream with the arrival of models like OpenAI’s GPT-3.5 and transformative tools such as Microsoft 365 Copilot, the stakes for IT leaders have never been higher.

The challenge isn’t just about deploying the latest AI tools—it’s about architecting a foundation for sustained, responsible, and scalable change across the enterprise.

That’s where this guide comes in. We’re opening a window into our own AI evolution—sharing our hard-won lessons, proven frameworks, and actionable steps that can help you steer your organization from AI exploration to AI acceleration. Whether you’re just beginning your journey or ready to scale enterprise-wide adoption, this guide is built to empower you to make informed decisions, sidestep common pitfalls, and unlock the full promise of AI-driven transformation.

“We’ve learned so many lessons over the past few years building AI-powered solutions and driving an AI-forward culture,” says Brian Fielder, vice president of Microsoft Digital. “We’re excited to share them with our customers and partners so they can learn from our journey.”

Enterprise IT maturity

This article is part of a series on enterprise IT maturity in the era of agents. We recommend reading all four of these guides for a comprehensive view of how your organization can transform with AI to become a Frontier Firm.

  1. Becoming a Frontier Firm: Our IT playbook for the AI era.
  2. Enterprise AI maturity in five steps: Our guide for IT leaders (this story).
  3. The agentic future: How we’re becoming an AI-first Frontier Firm at Microsoft.
  4. AI at scale: How we’re transforming our enterprise IT operations at Microsoft.

Read on to discover how we moved from AI vision to AI reality here in Microsoft Digital. You’ll learn how you can drive measurable business outcomes while building a culture that’s ready for what’s next.

The five stages of AI-powered transformation

We have led Microsoft through five stages of AI maturity—from initial exploration to becoming an AI-driven enterprise. This has been a three-year journey, and you and your digital leaders should be prepared to invest time in each of these stages to truly unlock the potential of AI to transform your enterprise.

What follows is a stage-by-stage summary of how we achieved our transformation, followed by a list of empowering actions you can take to help you on your own journey.

Mapping our journey to AI maturity

Our five stages of AI maturity reflect our increasingly sophisticated enterprise AI capabilities. The icons in each step represent different capabilities as we move from simple foundational AI elements to advanced, interconnected agentic AI representations.

Stage 1: Awareness and foundation

Set a bold vision for your AI journey, anchored in clear business outcomes—avoid implementing “AI for AI’s sake.” Engage your executive sponsors early and form an AI Center of Excellence (CoE) to foster cross-functional collaboration and empower experimentation. Establish Responsible AI principles alongside your organization’s ethics team and assess your data readiness from the start—remember, “no AI without data.” By building these foundations, you’ll position your teams to confidently launch AI initiatives and drive meaningful transformation.

Target outcomes

A foundational strategy, governance principles, and leadership buy-in to kickstart AI projects.

“At the Microsoft Digital AI Center of Excellence, we’ve learned that combining strong governance, data readiness, and a continuous-improvement mindset transforms AI pilots into enterprise-scale solutions,” says Nitul Pancholi, the AI CoE lead in Microsoft Employee Experience. “This guide distills our three-year journey into clear, actionable steps to accelerate responsible AI adoption, mitigate risk, and drive measurable business impact.”

Stage 2: Active pilots and skill building

To accelerate your AI journey, start by launching targeted pilot projects across diverse areas of your organization—think automated support chatbots or network analytics. Encourage experimentation and leverage hackathons to surface a broad range of ideas. Narrow these down to your most promising initiatives by evaluating business value against implementation effort, and focus resources on a select group of high-impact “big bets.”
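
One lightweight way to run that value-versus-effort triage is a simple ratio score. The scoring scale and threshold below are illustrative assumptions, not a prescribed Microsoft method:

```python
# Hypothetical pilot ideas scored 1-10 for business value and implementation effort.
ideas = [
    {"name": "support chatbot", "value": 8, "effort": 3},
    {"name": "network analytics", "value": 7, "effort": 6},
    {"name": "meeting summarizer", "value": 4, "effort": 5},
]

def big_bets(ideas, min_ratio=1.5):
    """Keep ideas whose value clearly outweighs their implementation effort."""
    scored = [(i["value"] / i["effort"], i["name"]) for i in ideas]
    return [name for ratio, name in sorted(scored, reverse=True) if ratio >= min_ratio]

print(big_bets(ideas))  # ['support chatbot']
```

Even a rough model like this forces the conversation the stage requires: which pilots are worth concentrating resources on, and which should wait.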

Empower your teams by investing in upskilling: offer discipline-aligned learning paths, issue digital credentials, and celebrate progress to foster a culture of continuous learning and knowledge-sharing. Establish early-stage governance by requiring all pilots to undergo Responsible AI and architectural reviews. By following these steps, you’ll create early momentum, build internal expertise, and identify the AI solutions most likely to drive meaningful impact at scale.

Target outcomes

The first tangible benefits of AI—efficiency gains, time and cost savings, and quality improvements—along with an emerging internal talent pool, paving the way to scale successful solutions.

Stage 3: Operationalize and govern

To scale and integrate AI solutions across your organization, move beyond pilot projects by deploying AI solutions directly into production and embedding them within core business workflows.

Strengthen your data and AI infrastructure—consider implementing a unified data platform and robust Machine Learning Operations (MLOps) pipelines—to support this transition. Formalize enterprise governance with clearly defined steering teams: empower your AI Center of Excellence to accelerate implementation, establish a Data Council to ensure data quality and “AI-ready” assets, and create a Responsible AI Office to oversee ethical use and compliance. Encourage collaboration among these groups and designate domain leads to ensure your AI initiatives consistently deliver tangible business value.

By putting these practices in place, you can drive successful scaling and operationalization of AI throughout your enterprise.

Target outcomes

Multiple AI use cases running at enterprise scale under robust oversight with cross-functional alignment on AI objectives and the business value they’re delivering.

Stage 4: Enterprise-wide adoption

To consolidate your gains and achieve AI adoption across the enterprise, make AI a core consideration in every new project and process.

Ask where AI-driven intelligence can deliver real impact, whether by boosting efficiency, enhancing user experiences, or unlocking new business value. Align AI initiatives with your organization’s strategic goals by empowering business leads to synchronize efforts and continuously update your AI roadmap. Cultivate a data-driven culture through ongoing, large-scale training and make AI tools a natural part of everyday work. Establish rigorous impact tracking with clear metrics for value delivered—such as time savings, cost reduction, and quality improvements—and review these outcomes regularly at the leadership level to maintain accountability.

By integrating these practices, you can drive AI adoption throughout your organization and ensure sustained, measurable impact.

“What’s unique about our approach is that every agent is engineered for responsible action. We design agents to operate within enterprise workflows, guided by policy-aware controls, telemetry integration, and human oversight,” says Faisal Nasir, the AI CoE and Data Council lead in Microsoft Employee Experience.

Through the AI Center of Excellence and the Data Council, we ensure agents are grounded in AI-ready data and undergo comprehensive architecture and governance reviews.

“This ensures our AI solutions are not only intelligent, but also accountable, governable, and fully production-ready,” Nasir adds.

Target outcomes

AI is a pillar of your operational strategy, backed by a data-driven culture and continuous monitoring of business impact.

Stage 5: Transform your business with agentic AI

To drive a lasting AI-powered business transformation, organizations must embed AI into every aspect of their operations and culture.

Start by leveraging the expertise of your AI CoE to foster innovation, drive continuous improvement, and keep your AI initiatives evolving. Use structured mechanisms like a Kaizen funnel to crowdsource, prioritize, and advance ideas that extend the impact of AI across the enterprise.

Strengthen governance to address the advanced challenges of agentic applications, including responsible scaling of generative AI and effective mitigation of AI hallucinations. Focus on refining human-AI collaboration so your teams are empowered to offload routine tasks to AI agents and concentrate on higher-value work.

Another tactic that’s been highly successful in Microsoft Digital is “Fix, Hack, Learn” weeks, where employees are encouraged to identify opportunities to improve our services. Multi-disciplinary teams are empowered to innovate with AI to improve our organizational effectiveness, yielding multiple AI-powered breakthroughs that are already in production.

“In Microsoft Digital, continuous improvement is a driving force behind our AI transformation,” says Don Campbell, principal product manager within Microsoft Digital and member of our AI Center of Excellence. “By embedding it and AI into every layer of our operations, we’re not only optimizing how we work today, but we are also strategically preparing our processes to become agentic tomorrow. This disciplined approach ensures that when we make a process agentic, it’s not just automated—it’s intelligent, secure, and purpose-built to scale across the enterprise.”

Target outcomes

An organization transformed by AI, achieving significant efficiency gains and innovations, and recognized as a leader in enterprise AI adoption.


What our experts have to say:

A photo of Campbell

“In Microsoft Digital, continuous improvement is a driving force behind our AI transformation. By embedding it and AI into every layer of our operations, we’re not only optimizing how we work today, but we are also strategically preparing our processes to become agentic tomorrow.”

Don Campbell, principal product manager and CoE member, Microsoft Digital

A photo of Pancholi

“At the Microsoft Digital AI Center of Excellence, we’ve learned that combining strong governance, data readiness, and a continuous-improvement mindset transforms AI pilots into enterprise-scale solutions. This guide distills our three-year journey into clear, actionable steps to accelerate responsible AI adoption, mitigate risk, and drive measurable business impact.”

Nitul Pancholi, AI Center of Excellence lead, Microsoft Employee Experience

A photo of Nasir

“What’s unique about our approach is that every agent is engineered for responsible action. We design agents to operate within enterprise workflows, guided by policy-aware controls, telemetry integration, and human oversight.”

Faisal Nasir, AI CoE and Data Council lead, Microsoft Employee Experience


Enabling success—lessons from our journey as the company’s IT organization

Achieving AI maturity is dependent on a combination of technological, organizational, and cultural factors. These enablers support the successful adoption and integration of AI within the organization.

For IT decision-makers charting the course to enterprise-scale AI, the journey is about far more than technical implementation—it’s about activating the right enablers to unlock both rapid and sustainable business impact.

Successfully scaling AI means orchestrating executive vision, robust governance, responsible innovation, resilient data foundations, and a culture of empowered talent—all working in harmony. Each of these levers is crucial not only for accelerating the path from pilot to production, but also for ensuring that every AI initiative delivers measurable outcomes, mitigates risk, and creates lasting organizational value.

By prioritizing these foundational pillars, IT leaders can fast-track value realization, embed accountability, and transform AI from a promising experiment into a strategic engine for competitive advantage. The following items explore the essential enablers that drive AI maturity at pace and why they matter now more than ever for organizations determined to lead in the age of intelligent transformation.

Seven enablers of enterprise AI transformation

Executive sponsorship and governance

To accelerate AI maturity within your organization, start by securing strong executive sponsorship and establishing clear governance structures. Appoint dedicated AI leaders and form cross-functional teams such as an AI Center of Excellence and supporting councils with well-defined roles and responsibilities. Maintain alignment with your business strategy through regular steering meetings and roadmap reviews. This approach will ensure your AI initiatives remain focused, impactful, and strategically integrated across the enterprise.

Responsible AI by design

To embed ethics and effectively manage risk in every AI project, integrate Responsible AI principles from the outset. Establish a Responsible AI Council or similar oversight group to ensure all solutions are rigorously reviewed for ethical standards before launch. By instituting mandatory Responsible AI assessments, you’ll foster trust, safeguard your organization, and address potential issues proactively—setting a strong foundation for sustainable AI adoption. This not only reduces reputational and regulatory risk but also enables faster adoption, strengthens stakeholder confidence, and ensures AI initiatives deliver lasting value aligned with your business goals.

Data foundation, architecture reviews, and technical readiness

Treat data as a strategic asset by establishing a unified data strategy—start with a Data Council to catalog key sources, improve data quality, and implement robust governance and access controls. Build AI readiness across your enterprise by embedding architecture reviews and design validation into your engineering lifecycle, ensuring every solution is scalable, composable, and compliant by design. Leverage architecture forums to crowdsource feedback, align on technical standards, and promote reusable patterns that accelerate delivery. With secure cloud environments, MLOps pipelines, and standardized AI platforms in place, your teams will be equipped to develop and scale AI solutions quickly, safely, and consistently.

Talent, skills, and culture

To build an AI-ready workforce and foster a culture of innovation, prioritize company-wide training and upskilling programs that elevate AI literacy at every level. Establish a Center of Excellence and empower “AI champions” within teams to drive adoption and celebrate meaningful impact. Encourage open collaboration—share code, best practices, and project outcomes across your organization—to accelerate learning and scale success. By breaking down silos and enabling employees to experiment with intelligent solutions, you’ll create the environment needed for sustained growth and enterprise-wide transformation. In Microsoft Digital, we are not just training our employees to use AI; we are empowering them to co-create the future of their roles. When employees are empowered to build and govern their own agents, that is when transformation truly scales.

Impact tracking and accountability

To drive meaningful business impact with AI, start by defining clear, measurable success metrics—think hours saved, cost efficiencies, and quality improvements—that can be rolled up into an organizational AI scorecard. Review these outcomes regularly at the leadership level to keep the focus on what matters. For every major AI initiative, assign an accountable owner who champions the solution, communicates the business story, and manages performance reporting.

Foster transparency by consistently comparing targets to actual results and openly sharing lessons learned when goals are missed. By embedding accountability into your rhythm of business, you’ll enable agile decision-making, concentrate your efforts where AI delivers the most value, and nurture a culture of continuous improvement. In Microsoft Digital, we’ve defined an AI value measurement framework with six dimensions of value that you can use as benchmarks to determine the impact of your own investments.
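The rollup described above—individual metrics aggregated into an organizational scorecard, with targets compared to actuals—can be sketched in a few lines of code. This is a minimal, hypothetical illustration; the metric names and the simple averaged rollup are assumptions for the example, not Microsoft Digital’s actual AI value measurement framework or its six dimensions.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """One success metric on the scorecard, with its target and actual values."""
    name: str
    target: float
    actual: float

    @property
    def attainment(self) -> float:
        """Actual expressed as a fraction of target (0.0 when no target is set)."""
        return self.actual / self.target if self.target else 0.0

def scorecard(metrics: list[Metric]) -> dict:
    """Roll individual metrics up into a simple org-level AI scorecard."""
    return {
        # Per-metric attainment, for transparency at the metric level.
        "metrics": {m.name: round(m.attainment, 2) for m in metrics},
        # A naive overall score: the unweighted mean of all attainments.
        "overall": round(sum(m.attainment for m in metrics) / len(metrics), 2),
        # Metrics that missed target, to surface lessons-learned discussions.
        "missed": [m.name for m in metrics if m.actual < m.target],
    }

results = scorecard([
    Metric("hours_saved", target=10_000, actual=12_500),
    Metric("cost_efficiencies_usd", target=500_000, actual=400_000),
    Metric("quality_defect_reduction_pct", target=15, actual=15),
])
```

A real scorecard would likely weight dimensions differently and track trends over time; the point here is simply that targets and actuals live side by side, so missed goals are visible by construction.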

Change management and communication

To drive successful AI adoption, treat it as a people-first transformation—not just a technology deployment. Start by developing robust deployment and adoption plans for your key solutions: invest in training, craft clear communications, and establish dedicated support channels such as FAQs and help desks. Maintain a steady pulse of communication with your stakeholders—consider newsletters, interactive town halls, and a centralized library of AI success stories to celebrate impact and progress. By prioritizing transparency and providing ongoing support, you’ll smooth the path to change, encourage enthusiastic adoption, and sustain momentum throughout your organization.

Continuous improvement, innovation, and partnerships

To drive continuous improvement and innovation with AI, keep a dynamic backlog of opportunities, support each with a clear value case, and refresh your pipeline regularly. Adopt structured forums such as continuous improvement and Kaizen events to identify, evaluate, and prioritize new AI use cases that deliver tangible business outcomes. Use a robust prioritization framework to ensure focus on initiatives with the greatest impact.
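One common shape for such a prioritization framework is weighted scoring: rate each backlog item against a few criteria, then rank by the weighted sum. The criteria, weights, and example use cases below are hypothetical placeholders, not a framework the post itself prescribes.

```python
# Illustrative criteria and weights (assumptions for this sketch); weights sum to 1.0.
WEIGHTS = {
    "business_impact": 0.4,   # value delivered if the use case succeeds
    "feasibility": 0.3,       # technical and organizational readiness to build it
    "data_readiness": 0.2,    # availability and quality of the required data
    "risk_fit": 0.1,          # alignment with Responsible AI and compliance needs
}

def priority_score(use_case: dict) -> float:
    """Weighted sum of 1-5 ratings across the criteria above."""
    return round(sum(use_case[criterion] * weight
                     for criterion, weight in WEIGHTS.items()), 2)

backlog = [
    {"name": "Support ticket triage agent",
     "business_impact": 5, "feasibility": 4, "data_readiness": 4, "risk_fit": 5},
    {"name": "Contract clause summarizer",
     "business_impact": 4, "feasibility": 3, "data_readiness": 2, "risk_fit": 4},
]

# Highest-scoring use cases surface to the top of the pipeline.
ranked = sorted(backlog, key=priority_score, reverse=True)
```

The scores themselves matter less than the discipline: making the criteria explicit turns prioritization debates into a review of ratings and weights rather than competing opinions.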

Identify partner teams who can serve as early adopters and provide feedback to inform your continuing journey. By building a disciplined innovation pipeline and fostering a collaborative ecosystem, you create a foundation for ongoing experimentation, accelerated learning, and sustainable AI innovation across your organization.

Advancing your organization into the frontier of AI

To embrace the next era of AI, it’s time to look beyond traditional automation and prepare your organization for agentic AI frameworks and autonomous, interoperable agents. These advanced systems aren’t just digital assistants—they’re designed to plan, act, and collaborate across workflows with minimal intervention, offering the potential to fundamentally transform how work gets done.

Start by identifying areas where agentic AI can drive real business value. Empower domain experts within your teams to become Agent Leaders—individuals who can design, oversee, and govern agent ecosystems at scale. Align your AI strategy with forward-looking industry insights and best practices—sources like the 2025 Annual Work Trend Index: The Frontier Firm Is Born offer invaluable guidance for responsible AI adoption and organizational transformation.

Recognize that the impact will be significant. Industry analysts such as Gartner predict that by 2028, about a third of enterprise applications will feature agentic AI capabilities and over 15% of daily work decisions will be handled by AI agents.

Evolving from large language models to agents

Illustration showing how AI's task complexity capability increases as you move from single LLMs, to single agents (LLMs plus tools), to multiple agents working together.
Fully autonomous workflows powered by multiple agents are the future of work.

To get ahead, foster a culture of experimentation. Host hackathons, pilot agentic AI prototypes, and develop governance frameworks that ensure responsible management of these emerging technologies. Treat your AI journey as a continuous process—a growth mindset and incremental progress are key. As AI evolves, so should your practices: be ready to adapt your governance, refine human-AI collaboration, and embrace new paradigms like fully autonomous agents.

Each stage of this journey unlocks new possibilities. Ensure your organization remains at the forefront of AI maturity by committing to continuous improvement and innovation. The future of work isn’t a destination—it’s a dynamic path. Evolve your strategy, cultivate expertise, and enable your teams to thrive in the rapidly advancing digital landscape, powered by AI innovation and continuous improvement.

Key takeaways

To help your organization progress on its AI journey, consider the following strategies:

  • Invest in data infrastructure and AI platforms. Building robust data infrastructure ensures your organization is prepared to leverage AI, supporting scalable, innovative, and secure AI-driven solutions.
  • Foster a culture of innovation and collaboration. Champion an AI-forward culture where innovation and collaboration drive the adoption of agentic AI.
  • Develop AI expertise through training and development. Upskilling your teams empowers them to navigate the rapid advances of AI, drive innovation, and ensure your organization stays competitive as agentic AI transforms workflows and business outcomes across every industry.
  • Align AI initiatives with strategic business goals. Ensuring AI initiatives align with business goals maximizes impact and positions your organization to succeed in the rapidly evolving world of agentic AI.
  • Implement ethical AI practices based on Microsoft’s Responsible AI Principles. Adopting ethical AI practices builds trust, ensures responsible innovation, and prepares your organization to navigate the evolving landscape as AI becomes central to business operations and decision-making.

The post Enterprise AI maturity in five steps: Our guide for IT leaders appeared first on Inside Track Blog.
