Becoming a Frontier Firm: A guide for deploying AI agents based on our experience at Microsoft
http://approjects.co.za/?big=insidetrack/blog/becoming-a-frontier-firm-a-guide-for-deploying-ai-agents-based-on-our-experience-at-microsoft/
Inside Track Blog: How Microsoft does IT | Thu, 16 Apr 2026


A how-to guide for governing, implementing, adopting, supporting, and measuring the impact of AI agents from Microsoft Digital, the company’s IT organization.

The agentic future: Our journey to becoming a Frontier Firm at Microsoft

A new way of working, a modern way to achieve more

The rate of change for AI tools and technology continues to accelerate, and new opportunities to reimagine business processes and employees’ day-to-day workflows are emerging. Agents are the driving force behind this next leap forward.

As a result of this technological shift, a new organizational blueprint is emerging. It blends machine intelligence with human judgment to create systems that are AI-operated but human-led.

We have a name for an organization that enacts this model: The Frontier Firm.

As organizations progress toward this goal, they move from foundational AI assistance through escalating levels of agentic maturity and complexity. First, humans operate with help from an AI assistant like Microsoft 365 Copilot. Then, human-agent teams work together. But the future lies in humans leading teams of agents: AI agents that perform core labor with relative autonomy.

Pattern 1: Human with assistant—every employee has an AI assistant that helps them work better and faster.
Pattern 2: Human-agent teams—agents join teams as “digital colleagues,” taking on specific tasks at human direction.
Pattern 3: Human-led, agent-operated—humans set direction, and agents execute business processes and workflows, checking in as needed.

This has been a three-year process for us at Microsoft, and throughout our journey, we’ve had to allow adequate time for deliberate planning and careful execution. Just as importantly, we invested early in clear, consistent internal communications to help employees understand what agents are, why they matter, and how they could safely participate in building them. That shared understanding created the confidence and momentum required to scale agent creation across a global workforce.

“It’s a truly transformative time,” says Brian Fielder, vice president of Microsoft Digital. “What we’ve learned from embracing the agentic future at Microsoft is only making us more eager to see organizations empower their employees to take the lead in a world where human judgment and machine intelligence work in harmony.”

Our Frontier Firm journey so far

Within Microsoft Digital, the company’s IT organization, we’re taking a leadership role in reimagining core processes and workflows. These efforts rest on four pillars of practice:

  • We envision and implement the AI-first workplace of the future.
  • We empower our employees to build their own agents that help supercharge their productivity by providing the training, resources, and inspiration they need.
  • We define guardrails and safeguard our environment so our employees can maximize the power of AI while keeping our enterprise safe and secure.
  • We’re the voice of the company’s internal AI transformation, and we provide the blueprint for our customers to accelerate their own AI journeys.

To guide our steps, we’ve established a cross-disciplinary initiative we call Agents at Microsoft. We’re looking at agentic transformation from an end-to-end perspective that reaches into every aspect of building, publishing, governing, managing, and getting the most value out of agents.

Six pillars of the workstreams involved with the Agents at Microsoft initiative: Strategy and value realization, analytics, accelerators, change management, governance, and publish and lifecycle.
Our Agents at Microsoft initiative represents part of a 360-degree approach to agentic maturity. Each of these six pillars represents a distinct workstream with its own accountable team.

As we’ve incorporated agents into more and more aspects of our organization, key questions have surfaced:

  • How do we balance freedom for employees to create agents against the need to manage sprawl?
  • How do we put guardrails around agentic capabilities so they can be useful, without introducing undue risks?
  • How do we differentiate between agents of different complexity and capability, and how do we adjust our strategies around them accordingly?
  • Where can we use agents to fill enterprise functions, and who should be responsible for creating those crucial tools?
  • How can we adapt existing software development standards to AI tools?
  • How can we minimize the risk of data over-exposure through AI?

It’s possible you’re also considering where agents fit into your organization. If so, it’s likely that you’re wrestling with many of the same questions. We’re here to help.

This guide shares our experience as Customer Zero for agents at Microsoft. As you read, you’ll be able to follow our journey to defining what it means to govern agents safely, implement them effectively, guide their adoption by employees, build a foundation for support, and track their impact through effective measurement.

We’ll share some of the most important lessons we’ve learned so far, along with readiness checklists and resources that can help you advance agentic maturity at your organization. With this guide in your toolkit, you’ll have a framework for building a strategy that incorporates agents into your business goals safely, responsibly, empathetically, and impactfully.

“As we harness the transformative power of AI agents, it’s our responsibility in IT to ensure that technology not only enhances decision-making but also fosters a culture of innovation and collaboration across the organization,” says Stephan Kerametlian, a business program management senior director in Microsoft Digital.

The agentic future is here. We’ve explored the path forward, and we’ve seen the exciting places it leads. This guide can help you take your first steps and start realizing those possibilities today.


Expert insights


“It’s a truly transformative time. What we’ve learned from embracing the agentic future at Microsoft is only making us more eager to see organizations empower their employees to take the lead in a world where human judgment and machine intelligence work in harmony.”

Brian Fielder, vice president, Microsoft Digital


“As we harness the transformative power of AI agents, it’s our responsibility in IT to ensure that technology not only enhances decision-making but also fosters a culture of innovation and collaboration across the organization.”

Stephan Kerametlian, business program management senior director, Microsoft Digital


Chapter 1: Advancing good governance to meet the agentic moment

Maintaining privacy, security, and compliance while respecting regulatory frameworks

Agents offer powerful opportunities to enhance employee productivity, but they also introduce concerns. For example, how do we keep privileged information where it belongs? And how do we keep employees from building agents that violate company policies?

In answering these questions, Microsoft Digital’s governance team focused on the value the company is trying to derive from agents.

We wanted to give employees and teams the freedom to build without risk to the business or introducing agent duplication and sprawl. We wanted to weave robust, reliable agentic experiences into enterprise workflows. We also needed to secure and protect confidential data while respecting responsible AI principles.

“Our principles haven’t changed, but they’ve evolved,” says David Johnson, a tenant and compliance architect at Microsoft Digital. “With AI, the need for proactive governance is far greater than ever before, so we’re putting structures in place that take some of the labor around managing agents off of IT.”

There are some cornerstone constructs that underpin our agent governance strategy: a tenant structure that holds employees accountable, a reasonably clean data estate, and a user-based lifecycle for agents, so that they disappear when their owner leaves the company.

We’ve developed six core principles to guide our approach to governing agents:

  1. We ensure a strong data hygiene foundation so we can trust our data estate as employees build and use agents.
  2. We empower employees to build personal agents that can access services and data sources those users can already access to help automate and accelerate their tasks.
  3. We empower teams and lines of business to build agents with known lower risk patterns to accelerate impact.
  4. We provide a smooth release path for engineering teams to develop agents designed for enterprise functions so they can access all of the services and sources they need.
  5. We accelerate innovation through agent and automation templates while maintaining an AI Center of Excellence (CoE) to help teams think through their opportunities.
  6. We reimagine employee experiences and task execution to simplify and optimize productivity.

As a result of our experience establishing strong governance for Microsoft 365 Copilot, we’d already laid a firm foundation for an agent-ready data estate. In some ways, governance is tool-agnostic, rooted in basic principles. With appropriate data labeling, data hygiene, and well-managed permissions in place alongside tools that respect labels by default, we can confidently give every employee the ability to build basic agents and trust in our governance guardrails.

A matrixed approach to agent governance

The sheer diversity of agents and their use cases means we need a multifaceted approach to governance. A matrix of different parameters applies to any agent, and each of those elements requires its own approach to policy.

In practice, agent governance structures echo our overall maturity approach. Simple, personal, lower-risk agents with built-in guardrails act as a starting point for employee experimentation and require very little oversight. Because of our robust data hygiene foundation, if an employee has access to the grounding content, these agents are low-risk accelerators for things they can already do on their own. Meanwhile, higher-impact agents demand greater attention that echoes our security development lifecycle (SDLC) for internal apps, which includes more extensive, cross-disciplinary reviews.

SharePoint, Agent Builder in Microsoft 365, Copilot Studio, and Copilot Studio + Microsoft 365 Agents Toolkit and the level of agent governance required for each.
Our matrixed model for agent governance spans low-complexity, low-risk agents as well as more advanced tools created by professional developers.

To accommodate agent-creation experiences across this spectrum, we’ve enabled several different building platforms and processes employees and teams can use to create the AI tools they need.

  1. We opened up Agent Builder in Microsoft 365 Copilot for all employees to create read-only declarative agents.
  2. We created an environment strategy and governance in Power Platform to manage personal environments featuring data connectors with lower risk but high value.
  3. We enabled a process to flow the data that teams need into production Power Platform environments featuring data connectors. These agents initially come with sharing limits until the agent receives risk approval.

This structure provides the ability to safely create agents of increasing complexity while ensuring they remain secure and contained until they get the necessary reviews for wider sharing and data exposure.

Our governance guardrails, review policies, and publishing scope vary based on the tool used to create an agent, the level of technical proficiency it requires, its grounding in knowledge sources, its capabilities, the actions it can take, the plug-ins it requires, and whether it includes a custom engine or a bring-your-own model.
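To make that matrix concrete, here's a minimal Python sketch of how an agent's parameters might map to the review gates it must pass. The `AgentProfile` fields, tool names, and thresholds are illustrative assumptions for this guide, not our actual policy engine:

```python
from dataclasses import dataclass


@dataclass
class AgentProfile:
    """Hypothetical descriptor for an agent entering governance review."""
    builder_tool: str                   # e.g. "agent_builder", "copilot_studio", "pro_code"
    can_take_actions: bool              # can the agent write or transform data?
    uses_custom_model: bool             # custom engine or bring-your-own model
    knowledge_beyond_user_scope: bool   # grounded in data the builder can't already access


def required_reviews(agent: AgentProfile) -> list[str]:
    """Map an agent's parameters to the review gates it must pass.

    Read-only, user-scoped agents get no proactive gating; agents that
    can act on data outside their place of origin trigger the full review set.
    """
    reviews: list[str] = []
    if agent.knowledge_beyond_user_scope:
        reviews.append("data-exposure review")
    if agent.can_take_actions or agent.builder_tool == "pro_code":
        reviews += ["security", "privacy", "accessibility", "responsible AI"]
    if agent.uses_custom_model:
        reviews.append("model review")
    return reviews


# A knowledge-only Agent Builder agent needs no proactive review:
personal = AgentProfile("agent_builder", False, False, False)
assert required_reviews(personal) == []
```

The useful property of this shape is that the defaults are permissive for low-risk agents and the gates accumulate as capability grows, which mirrors how our matrixed model right-sizes oversight.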

The following examples illustrate two different agent scenarios:

An employee builds a knowledge-only agent using Agent Builder in Microsoft 365 Copilot.

This agent features graph connectors from a pre-approved catalog for exposing additional data, easily created using no-code tools. Its knowledge sources are limited to SharePoint and OneDrive sites accessible to the employee, along with external websites, custom instructions, and additional internal sources through graph connectors. As a result, the risk of data overexposure is limited. These agents can’t take action, they don’t rely on plug-ins, and they’re tied to our data hygiene foundation. The employee can only use the agent personally or share it through a link.

No review necessary: Our team in Microsoft Digital honors reactive take-down requests like any other self-service construct, but does not provide proactive gating.

Professional developers build an agent to manage enterprise workflows.

Agents created using pro-code tools can include custom connectors and orchestration logic to handle more complex scenarios, and their builders typically intend them to become Microsoft Teams apps or part of our agent catalog for wide organizational use. Their knowledge sources can be almost anything, from internal SharePoint sites to third-party apps, so they’ll often need to make use of APIs. For these apps, knowledgeable builders can create custom Azure OpenAI large language models (LLMs).

Reviews: These agents require reviews for security, privacy, accessibility, responsible AI, and an environment-specific maker stack review. This review stage is essential because these agents can potentially transform or write data outside their places of origin. These capabilities represent both the power of agents and the risk we need to evaluate.

As you consider your own governance structures and policies, think about where agents and the ability to create them fit your needs and risk tolerance. Then learn from the different parameters of our governance matrix to access a working model for your own agentic transformation.


Expert insights


“Our principles haven’t changed, but they’ve evolved. With AI, the need for proactive governance is far greater than ever before, so we’re putting structures in place that take some of the labor around managing agents off of IT.”

David Johnson, tenant and compliance architect, Microsoft Digital


“As you consider your own governance structures and policies, think about where agents and the ability to create them fit your needs and risk tolerance. Then learn from the different parameters of our governance matrix to access a working model for your own agentic transformation.”

Aisha Hasan, Power Platform and Copilot Studio product manager, Microsoft Digital


Balancing utility and manageability in our agent ecosystem

Empowering employees and teams to simply and securely create agents has been a top priority as we move toward AI maturity at Microsoft, but we also want to eliminate agent sprawl.

Aside from complicating agent management, sprawl has several user-side disadvantages. For example, if more than one team were to create an agent that points to HR information, the employee experience would suffer, because our users wouldn’t be sure which agent serves as the authoritative source of truth.

Our team in Microsoft Digital partners with other internal organizations to ensure we’re prioritizing the right agent development projects and avoiding agent sprawl. Ideally, these engagements take place before teams start building their agents so we can avoid wasted effort or duplicate work.

If a pre-existing agent fits the target scenario, we encourage a team to use that agent instead of creating a redundant solution. For employees who want to create their own agents, we recommend that they first search for an existing tool in our agent catalog to avoid duplication.

User-based lifecycles and periodic attestation are also key pieces of the puzzle. Requiring attestation helps ensure that agents cease to exist once they’re no longer useful or their owner leaves the company.
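A user-based lifecycle check like the one described above can be sketched in a few lines. The 180-day attestation window and the function shape are hypothetical, for illustration only; the real cadence is a policy choice for your organization:

```python
from datetime import date, timedelta

# Illustrative renewal window; the real attestation cadence is a policy choice.
ATTESTATION_WINDOW = timedelta(days=180)


def should_retire(owner_active: bool, last_attested: date, today: date) -> bool:
    """User-based lifecycle check: retire an agent when its owner has
    left the company or hasn't re-attested within the window."""
    if not owner_active:
        return True
    return today - last_attested > ATTESTATION_WINDOW


# An agent whose owner left is retired immediately:
assert should_retire(False, date(2025, 1, 1), date(2025, 1, 2)) is True
```

Running a check like this on a schedule is what keeps abandoned agents from accumulating as sprawl.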

The release of Microsoft Agent 365, now in early access, represents the next step forward in agent observability and management, two key aspects of agent governance and sprawl mitigation. This control plane for agents incorporates many of Microsoft Digital’s learnings as we’ve bridged governance gaps through IT intervention.

  • The registry provides a complete view of agents. The enterprise agent store makes it easy to find the right agents for each role and business process within familiar workflows in Microsoft 365 Copilot and Teams.
  • Visualization provides the observability layer, including role-specific oversight, compliance and audit features, and performance measurement that can help organizations track their agents’ impact and see where they contribute value.
  • Interoperability ensures Agent 365 is open to any Microsoft-built or partner ecosystem, while also delivering work intelligence through access to data and Microsoft 365 apps.
  • Security features provide crucial confidence through visibility into security posture, detection and response capabilities, and intelligent runtime defense.

“The next step in our governance journey will be using AI to help us govern AI,” says Aisha Hasan, Power Platform and Copilot Studio product manager at Microsoft Digital. “We’re looking at ways AI can help us manage this new space, and we believe Agent 365 will be the foundation for our deterministic approach to governance.”

As you strategize to deepen AI maturity at your organization, our experience will help you operationalize many of the aspects of governance we’ve pioneered as Customer Zero for agentic AI, especially with the wide release of Agent 365. By adopting the principles we’ve illustrated in this chapter, you can accelerate your transformation and advance your maturity rapidly and securely.

Learning from our experience with agent governance

A strong data foundation is crucial

We’ve built respect for labeling and data governance policies into the tooling for AI assistants and agents, but it’s dependent on a well-governed data estate. Invest time and effort in establishing that foundation.

Decide on your comfort level with risk

Bring cross-disciplinary experts together from across your organization to determine what level of risk is acceptable for different agents and their use cases. Put guardrails in place for low-risk scenarios and establish processes for supporting more complex or sensitive use cases. Evaluate what data sources agents can extract information from. Do you have confidence that users haven’t over-shared data access?

Agents aren’t always like applications—adjust your processes accordingly

We quickly learned that reasonable processes, approvals, and workflows for internal application development didn’t scale well with agents. Consider a risk-based assessment model.

Change is constant

Plan to reassess and revise your governance structure regularly. This technology is evolving rapidly, as is the tooling surrounding it, so maintaining good governance will be an ongoing practice.

Governance is a value driver for employees

Governance isn’t just about protecting your organization. It also provides the right patterns to make sure your employees are getting value from agentic technology. Establish strong measures of value and a robust plane for management and assessment. Observability and telemetry will be foundational, so ensure you build them into your governance efforts.

Continue non-agentic workstreams

Enterprise technology environments are additive and incremental. Don’t cease your efforts to create and govern other internal technologies. Instead, maintain a holistic ecosystem.

Key takeaways

Use these tips based on what we learned here at Microsoft to tackle agent governance at your company:

  • Establish a cross-disciplinary agent center of excellence: Bring together stakeholders across the organization to define priorities, goals, and shared practices for agent adoption.
  • Put strong data and information protection policies in place: Establish clear governance for your data estate, including labeling and information protection, to support responsible agent use.
  • Right-size oversight based on risk: Determine your organization’s risk tolerance and define which agents require more or less involvement from IT, security, and compliance teams.
  • Define a clear agent building tool strategy: Decide which tools employees and teams can use to create agents, balancing empowerment with governance.
  • Operationalize agent oversight and management: Establish an oversight model and implement tools like Agent 365 that help manage agents at scale.
  • Create a centralized governance and information hub: Provide employees and agent builders with a single place to find guidance, standards, and governance information.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 2: The Microsoft roadmap for implementing agents

Developing a plan to advance AI maturity while unlocking agentic value at every level of our organization

Implementing agents across your organization is intertwined with your larger AI transformation efforts. At Microsoft, we’ve adopted an escalating maturity model that unfolds across five stages.

Graphic showing the five stages of the Microsoft AI maturity model: awareness and foundation, active pilots and skill building, operationalize and govern, enterprise-wide adoption, and transformation with agentic AI.
AI maturity starts with simple awareness and foundational usage, then progresses to more complex patterns of interaction between humans and agents.

Putting the Microsoft AI maturity model into practice

Whatever stage you’re at in your AI journey, you’ll likely experience many of the same challenges and opportunities we do at Microsoft.

Stage 1: Awareness and foundation

Building a foundation means setting a bold vision for your AI journey, anchored in clear business outcomes. At this stage, it’s important to engage your executive sponsors early to foster cross-functional collaboration and empower experimentation.

At Microsoft, we established our AI Center of Excellence (CoE) to help guide and drive adoption of Microsoft 365 Copilot, as well as a Data Council that powers our AI-ready data strategy. As we’ve moved into the agentic future, these teams have been instrumental in maintaining forward momentum.

The company also established the Office of Responsible AI (ORA) to advance AI development, deployment, and secure and trustworthy innovation through governance, legal expertise, internal practice, public policy, and guidance on sensitive uses and emerging technology. ORA partners closely with product and engineering teams alongside other trust domains like privacy, digital safety, security, and accessibility to align our work with Microsoft’s six responsible AI principles:

  • Fairness
  • Reliability and safety
  • Privacy and security
  • Transparency
  • Accountability
  • Inclusiveness

Target outcomes include

A foundational strategy, governance principles, and leadership buy-in to kickstart AI projects.

Stage 2: Active pilot programs and skill building

We started by launching targeted pilot projects across different areas of the company. This process encouraged experimentation and used hackathons to surface a broad range of ideas. From there, we selected the most promising initiatives by evaluating business value against implementation effort and focused resources on a select group of high-impact projects.
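The value-versus-effort screen we applied to hackathon ideas can be sketched like this; the 1-5 scoring scale, field names, and sample ideas are assumptions for illustration, not our actual evaluation rubric:

```python
def prioritize(pilots: list[dict], top_n: int = 3) -> list[str]:
    """Rank pilot ideas by business value relative to implementation
    effort and keep the top candidates. Scores are hypothetical 1-5
    ratings from cross-functional reviewers."""
    ranked = sorted(pilots, key=lambda p: p["value"] / p["effort"], reverse=True)
    return [p["name"] for p in ranked[:top_n]]


# Hypothetical pilot ideas scored for value and effort:
ideas = [
    {"name": "expense triage agent", "value": 5, "effort": 2},
    {"name": "wiki chatbot", "value": 3, "effort": 3},
    {"name": "ticket summarizer", "value": 4, "effort": 1},
]
```

A simple ratio like this is enough to focus resources on a select group of high-impact projects before heavier business cases are written.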

To establish early-stage governance, we required all pilots to undergo responsible AI and architectural reviews.

Target outcomes include

The first tangible benefits of AI, including efficiency gains, time and cost savings, quality improvements, and an emerging internal talent pool that paves the way to scale successful solutions.

Stage 3: Operationalize and govern

At this point, we worked to scale and integrate AI solutions across the company. We strengthened our data and AI infrastructure to support this transition by formalizing enterprise governance with clearly defined steering teams. Our AI CoE, Data Council, and Office of Responsible AI helped accelerate implementation, ensure the ongoing quality of structured data, and oversee ethical AI use and compliance. Collaboration among these groups was crucial for ensuring our AI initiatives remained within acceptable bounds while delivering tangible business impacts.

Target outcomes include

Multiple AI use cases running at enterprise scale under robust oversight, with cross-functional alignment on AI objectives and the business value they’re delivering.

Stage 4: Enterprise-wide adoption

To consolidate our gains and achieve AI adoption across the enterprise, we prioritized making AI a core consideration in every new project and process by asking where AI-driven intelligence could deliver real impact, whether by boosting efficiency, enhancing user experiences, or unlocking new business value. From there, we aligned our AI initiatives with our organization’s strategic goals by empowering business leads to synchronize efforts and continuously update our AI roadmap.

We also cultivated a data-driven culture through ongoing, large-scale training while making AI tools a natural part of everyday work. To accomplish that, we established rigorous impact tracking with clear measurement of the amount of value delivered. Key metrics include time savings, cost reduction, and quality improvements. We reviewed these outcomes regularly at the leadership level to maintain accountability.
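A rollup of those impact metrics might look like the following sketch; the record fields and aggregation shape are illustrative assumptions, not our actual reporting pipeline:

```python
def impact_summary(records: list[dict]) -> dict:
    """Aggregate per-initiative results into the metrics reviewed with
    leadership: time saved, cost reduced, and quality improvement."""
    totals = {"hours_saved": 0.0, "cost_reduction": 0.0}
    quality = []
    for r in records:
        totals["hours_saved"] += r["hours_saved"]
        totals["cost_reduction"] += r["cost_reduction"]
        quality.append(r["quality_delta_pct"])
    totals["avg_quality_delta_pct"] = sum(quality) / len(quality) if quality else 0.0
    return totals
```

However you shape the rollup, the point is a consistent, leadership-reviewable view of value delivered so accountability holds over time.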

Our Continuous Improvement CoE has been instrumental in the process of aligning AI initiatives with our organizational goals and providing a framework for progress. It operates according to four principles:

  1. A clear definition of winning, based on expectations
  2. Disciplined execution
  3. Constrained problem-solving with urgency
  4. Sustained replication and acceleration

Target outcomes include

Measurable, data-driven monitoring of AI for your business that’s powered by a continuous improvement mindset.

Stage 5: Transforming your business with agentic AI

At stage five, we’ve been working to embed AI into every aspect of our operations and culture. We started by leveraging the expertise of our AI CoE to foster innovation, drive continuous improvement, and keep our AI initiatives evolving, using structured mechanisms like a Kaizen funnel to crowdsource, prioritize, and advance ideas that extend the impact of AI across the enterprise.

We also further strengthened governance to address the advanced challenges of agentic applications, including responsible scaling of generative AI and effective mitigation of AI hallucinations. Finally, we focused on refining human-AI collaboration so our teams can offload routine tasks to AI agents and concentrate on higher-value work.

One tactic that’s been highly successful here at Microsoft Digital is conducting “Fix, Hack, Learn” weeks, where we encourage employees to identify opportunities for improving our services. So far, these initiatives have yielded multiple AI-powered breakthroughs that are already in production.

Target outcomes include

Significant efficiency gains and innovations from AI, including recognition as a leader in enterprise AI adoption.

As you advance along the AI maturity curve at your organization, keep these essential ingredients in mind:

  1. Executive sponsorship and governance
  2. Responsible AI by design
  3. Data foundations, architecture reviews, and technical readiness
  4. Talent, skills, and culture
  5. Impact tracking and accountability
  6. Change management and communication
  7. Continuous improvement, innovation, and partnerships

It’s important to remember that these elements aren’t static, but iterative. You’ll need to continue to evolve them over time as your enterprise AI transformation continues. But the five stages of enterprise AI maturity we’ve outlined in this chapter form an overarching framework to keep you moving forward.

Learning from our agent implementation experience

Invest in data infrastructure and AI platforms

Building robust data infrastructure ensures your organization is prepared to leverage AI, supporting scalable, innovative, and secure AI-driven solutions.

Foster a culture of innovation and collaboration

Champion an AI-forward culture where innovation and collaboration drive the adoption of agentic AI.

Align AI initiatives with strategic business goals

Ensuring AI initiatives align with business goals maximizes impact and positions your organization to succeed in the rapidly evolving world of agentic AI.

Implement ethical practices based on our responsible AI principles

Adopting ethical AI practices builds trust, ensures responsible innovation, and prepares your organization to navigate the evolving landscape as AI becomes central to business operations and decision-making.

Position IT to facilitate the transition to a Frontier Firm

At a minimum, your IT leaders and practitioners need to prepare your data estate for agentic workloads, partner to identify and enable prioritized business scenarios, and then actively participate in enterprise transformation through skilling, change management, and measurement activities.

Evolve your enterprise IT infrastructure to embrace dynamic and adaptive agent-based systems

Moving from traditional deterministic systems to agentic systems that introduce probabilistic behaviors, autonomous decision-making, and continuous learning requires new architectural thinking, audit capabilities, and governance models.

Key takeaways

Here are some key tips for implementing agents at your organization, based on what we’ve learned through our own experience here at Microsoft:

  • Align agent efforts with business priorities: Partner with leadership to establish clear business priorities that guide agent adoption and investment.
  • Define success and how you’ll measure it: Determine business goals and metrics of success that allow you to track impact and value over time.
  • Put the right governance structures in place: Establish steering committees across implementation, data, responsible AI, and continuous improvement to guide decision-making.
  • Start with early adopters and focused pilots: Identify enthusiastic users and promising pilot programs to validate value and refine your approach.
  • Scale what works across the enterprise: Determine which initiatives deliver the greatest value and are ready for broader, enterprise-wide adoption.
  • Support change through targeted skilling and enablement: Develop skilling and change management strategies that address the needs of both technical and nontechnical employees.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 3: Driving adoption to capture value across the organization

Readying our workforce for the agentic future through targeted enablement, skilling, and cross-company collaboration

Change management is an important part of our AI maturity journey. All the technical readiness in the world means nothing if we don’t build a transformative culture. The spectrum of agents, use cases, and creation methods is wide, but enabling them all requires one thing: an AI-first mindset.

“An important part of agentic adoption is telling stories to help people understand where AI’s value comes alive or why they should build agents. Examples from peers and real-world use cases are two of our most effective methods for getting people into the AI-first mindset.”

Driving adoption for agents represents a fundamental shift from an AI assistant like Microsoft 365 Copilot, which delivers a comparable experience for every employee. With the agentic mindset, the point is for individuals to be selective about the agents they choose to use—and more significantly, the agents they choose to create.

We also structure our enablement efforts to channel employees into different behaviors based on what’s available and what they might need to build:

  • First, we enable employees to discover and use agents that are already published and available.
  • If an agent that serves their use case doesn’t exist, employees can build their own, starting with simple no-code agents.
  • For complex agents, we channel employees, teams, and lines of business into using Copilot Studio and other, more full-featured pro-code tools.

Regardless of the behavior we’re trying to enable, we follow a four-phase strategy that takes inspiration from Prosci’s ADKAR model, which progresses through awareness, desire, knowledge, ability, and reinforcement. Our adoption efforts align with the Microsoft Engagement Framework, which we’ve developed specially for driving adoption of our products. You can learn more about our overarching approach in our Microsoft 365 Copilot readiness guide.

“An important part of agentic adoption is telling stories to help people understand where AI’s value comes alive or why they should build agents,” says Amy Rosenkranz, a principal product manager on the Copilot Extensibility team within Microsoft Digital. “Examples from peers and real-world use cases are two of our most effective methods for getting people into the AI-first mindset.”

We’re applying several tried-and-tested change management techniques to our organization-wide adoption efforts. These are relevant to both non-developer employees who want to create simple agents and professional developers working on tools for their teams, lines of business, and the entire enterprise.

Cohort-based coordination

We divide our adoption campaigns along two pivots: Internal organizations like legal or sales and marketing, and regions like North America or Europe. Different cohorts have different focuses, but the strategy is similar. Our company-wide adoption leads spearhead our efforts, and we identify members of target cohorts who can support the adoption, including change managers, leadership sponsors, and employee champions.

Adoption communications

We treat internal communications as a primary driver of agent adoption and creation, not just a distribution channel for training. Our initial communications focused on building confidence, reducing fear, and reinforcing clear norms for responsible agent building. We used consistent messaging across leadership communications, learning content, and employee channels to normalize experimentation and help employees understand when to create an agent, when to reuse one, and where to go for guidance.

AI Agent Launchpad

During our deployment of Microsoft 365 Copilot, we experimented with event-driven skilling in the form of Camp Copilot and Copilot Expo. Now, we’ve adapted these kinds of skilling events to agents as well. AI Agent Launchpad takes employees on a learning path through five modules to help them discover, use, and build agents confidently:

  1. AI mindset in motion: Employees learn about the concept of the Frontier Firm.
  2. Introduction to agents: This module covers the basic principles and definitions of AI agents to establish a foundation of understanding for agent creation and usage.
  3. Explore existing agents: Participants build the new habit of discovering available agents to see if any existing tools meet their needs.
  4. Build agents with ease: Employees polish their agent building skills in Copilot Chat and SharePoint with an expert in a hands-on lab environment.
  5. Build with Copilot Studio: This module goes deeper into designing, connecting, testing, and publishing more powerful agents.

Each module features self-learning readiness, live sessions, gamification, and Credly badges. Instead of a global, centralized event, we’ve modularized the experience so local or organization-level leaders can adapt it to their particular cohort’s needs, while still providing support from centralized adoption leads. We’ve also created a freely available resource organizations can use to plan and run their own virtual skilling events around AI adoption.

Copilot builder champs

Our initial AI rollout showed us first-hand the power of peer leadership in driving adoption, so we adapted the strategy behind our highly successful Copilot Champs Community into our Copilot builder champs program. This initiative makes use of peer connections, success stories, and a Viva Engage community, and we refocused it on enabling employees to create the agentic solutions they need.

These champions represent some of our strongest adoption evangelists on their respective teams. We also created a Microsoft SharePoint hub with resources, best practices, agent publishing information, and more.

Integration and incentivization

We collaborate with managers to integrate AI into their teams’ routines. Often, we’ll use mini-challenges or gamification strategies to encourage agent usage, and we recognize top contributors with shout-outs or small awards. We’ve also found that blending work tasks with personal interests makes these efforts more engaging.

Formalizing change management for professional developers

We apply more focused adoption initiatives for the professional developers who create team, line-of-business, and enterprise agents. Because their efforts are reimagining how work gets done across the organization, we need to ensure these agents are aligned with business goals, built securely and responsibly, and drive the impact the company needs. The process unfolds across five steps.

1. Driving product adoption

This step echoes our broader adoption initiatives. We cultivate leadership alignment and sponsorship, comprehensive communication plans, training and upskilling programs, champion-led peer support, and integration into daily work with incentives.

2. Agent ideation and development

Here, we capture high-value use cases by mapping out processes and pain points we could improve with agents. Then we prioritize and select pilots and empower small interdisciplinary teams to build, test, and refine those agents.

3. Agent discovery and advocacy

Once we’ve completed our pilot programs, we identify the agents with the most potential impact, broaden their development, establish a catalog for observability and discoverability, and showcase success stories.

4. Workforce transformation

At this point, we’re ready to map workflows for human-AI optimization, capture scenarios that are especially useful for key roles, commit to wider AI skills training, develop our workforce into “agent bosses,” and work to measure and communicate impact.

5. Feedback and listening

Tracking the impact of your efforts is crucial. We established a feedback loop to drive further success through telemetry and analytics, employee feedback, and insights from our support channels and FAQs. Then we analyze and triage those insights and close the loop with users by communicating how their feedback drives change.
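The analyze-and-triage step can be as simple as grouping incoming feedback by theme and counting recurrences. Here’s a minimal sketch; the rows and theme labels are hypothetical, not our actual feedback taxonomy:

```python
# Sketch of the triage step in a feedback loop: group feedback by theme and
# surface recurring pain points. The data and theme labels are hypothetical.

from collections import Counter

feedback = [
    {"theme": "security-review-delay", "text": "Reviews take too long"},
    {"theme": "discovery", "text": "Can't find existing agents"},
    {"theme": "security-review-delay", "text": "Waiting weeks for sign-off"},
]

# Recurring themes become candidates for new enablement content or for a
# dedicated support agent, and each gets communicated back to users.
top_themes = Counter(item["theme"] for item in feedback).most_common()
```

However you implement it, the point is that the loop closes: themes ranked this way feed directly into the next round of enablement and tooling decisions.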

Whatever your goals and whichever segment of your workforce you target, it’s important to understand that adoption doesn’t happen by accident. True workforce transformation won’t take place without appropriate adoption activities.

As you launch your own adoption initiatives, consider who your audience is, what they need to build confidence and competence, and how you can unlock agentic value for them across your organization.

Learning from our agent adoption experience

Be thoughtful about your audience

Vary your efforts between non-developer and developer audiences, different geographies and internal organizations, and specific goals. Put together a methodology for thinking about what agents you want and what benefits they’ll provide, then determine who the best builder is.

Don’t just enable agents—empower the enterprise

Your goal isn’t just to activate agents for agents’ sake. Think carefully about what workflows and value you’re trying to unlock, and how agents can get you there. Break down aspects of roles and workflows, and see how agents fit in.

Establish multiple vectors for skilling

Different modalities work for different employees. Use every tool at your disposal, from live events to peer leadership to self-guided learning, and communicate them across all available channels.

In many ways, this is a reset

Your employees may have just become comfortable with Copilot, and agents might feel like a whole new horizon. That’s true. Have patience and understand that this is an entirely separate adoption path.

Showcase and celebrate success

People need to see value and possibilities for agents in their own work. When pilots or personal agents create results, socialize them widely and encourage employees to try them out. Nothing encourages experimentation with agents like successful usage.

Leadership sponsorship is absolutely crucial

Leaders both set expectations and bear the standard of your organization’s culture. They can be the figureheads of transformation by setting priorities, participating in communications, and leading by example.

Key takeaways

Here are some important steps to keep in mind as you embark on your own adoption and change management efforts for agents:

  • Establish strong adoption leadership early: Assign a dedicated adoption lead, form a cross-functional adoption team, and align change managers, executive sponsors, and employee champions around clear ownership and cadence.
  • Design adoption around real work and real people: Identify priority cohorts, personas, and usage scenarios, then tailor messaging, enablement, and communications to how each group works and learns.
  • Define success before you deploy: Set clear KPIs and success criteria like feature usage, scenario adoption, and employee sentiment, and put a measurement and feedback plan in place from day one.
  • Enable employees through structured onboarding and learning: Combine readiness communications, live learning, self-service resources, and a centralized enablement asset library to help employees build confidence and momentum.
  • Activate champions and leadership to amplify adoption: Launch champion communities, empower leaders to model usage, and use internal channels to reinforce behaviors and share progress.
  • Continuously listen, learn, and iterate: Gather feedback through surveys and listening sessions, surface success stories, and apply insights to refine adoption, reinforcement, and resistance management plans.
  • Extend and optimize for professional developer teams: Support advanced agent ideation, development, discovery, and advocacy while using ongoing feedback to drive workforce transformation at scale.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 4: Providing support at the agentic frontier

Bolstering agentic transformation through solid groundwork, human oversight, and AI-driven support

With many forms of technology, support is fairly simple. You identify pain points and common issues with a relatively static technology, create self-service tools to help users with those challenges, and make subject matter experts available in the form of a dedicated support team.

But AI is evolving too quickly for that model, and agents are too diverse and individualized for a static approach. As a result, our support apparatus for agents needs to be much more flexible. Within Microsoft Digital, our goal is to make it easy for employees to engage with agentic tools freely and adaptably while maintaining safety and responsibility.

The path to this objective relies on a three-pronged approach to governance:

  • Embedded governance functionality: The ideal state is that our agent creation and publishing tools should incorporate good guidance, governance, and guardrails out of the box so the agents people create are essentially self-governing.
  • IT oversight: This is a new space and a new way of working, so it isn’t feasible for all agents to self-govern at this point. As an IT organization, Microsoft Digital fills gaps in governance through reviews and oversight. We do this by establishing risk-based policies around types of agents, exposure and sharing, and other pivots we addressed in our governance chapter.
  • User education: It’s almost impossible to predict every governance gap and need, so educating our users helps them avoid accidentally stepping out of bounds. Our Agents at Microsoft team and change managers are the linchpins of these efforts, and employees can lean on resources like Microsoft Learn courses and the Agent Builders SharePoint hub.

Of course, we do have a support team of AI subject matter experts available to employees for any questions they can’t answer themselves. Our HelpDesk support team operates independently from other enablement vehicles, but human support representatives can only accomplish so much. It’s important not to create bottlenecks by relying on conventional support. After all, the promise of AI is to reduce the burden on humans, and that’s no different for our support teams.


“On our journey to Frontier Firm, we’re working really hard to accelerate processes and remove roadblocks so people can get to value much faster. This is crucial for agentic scenarios because we’re using these iterations to polish and improve the tools we create.”

AI itself is becoming a cornerstone solution for this challenge. An AI-driven approach aligns with the idea of the Frontier Firm, where humans lead and agents operate, in this case by supporting other humans as they explore AI more deeply.

This is a relatively new approach, but we’re already using agents to provide support in several ways:

  • We operate an agent called Ask MICA (Microsoft Intelligent Compliance Agent). This tool provides information and support for compliance issues.
  • Agents help us evaluate the risk profiles of other agents. Automating risk assessment accelerates publishing by minimizing human reviews or questions to support specialists.
  • We use an agent to perform checks against standards for responsible AI, security, privacy, and access to sensitive information.
  • We’re also partnering with our product groups to develop automated agent-building enablers and accelerators that can support ideation and evaluation for new ideas instead of relying on groups like the AI CoE to step in for that kind of support.
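To illustrate the kind of automated risk assessment described above, here’s a minimal sketch of a pre-publish check. The fields, weights, and tiers are our own assumptions for the example, not Microsoft’s actual review policy:

```python
# Hypothetical sketch of an automated pre-publish risk check for an agent.
# Fields, weights, and tiers are illustrative, not an actual policy.

def assess_agent_risk(agent: dict) -> dict:
    """Score an agent descriptor and decide whether a human review is needed."""
    score = 0
    if agent.get("accesses_sensitive_data"):
        score += 3
    if agent.get("sharing_scope") == "org-wide":
        score += 2
    elif agent.get("sharing_scope") == "team":
        score += 1
    if agent.get("autonomous_actions"):
        score += 2

    tier = "low" if score <= 1 else "medium" if score <= 3 else "high"
    return {
        "tier": tier,
        # Low-risk agents can publish automatically; others route to reviewers.
        "needs_human_review": tier != "low",
    }

# A personal, low-exposure agent sails through; a broadly shared agent
# touching sensitive data gets flagged for human review.
auto_ok = assess_agent_risk({"sharing_scope": "personal"})
flagged = assess_agent_risk({"accesses_sensitive_data": True,
                             "sharing_scope": "org-wide"})
```

Even a simple gate like this reserves human reviewers for the high-risk minority of agents, which is the efficiency gain automated risk assessment is after.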

In reimagining the support experience this way, we’re focused on maximizing efficiency so that humans remain in the loop, but only for edge cases where AI can’t help. That’s the best use of their time and unique human talent. Meanwhile, we’re continuing to develop and implement agents that support employees across a growing share of routine cases.

Continuous improvement practices help propel this work forward. Much of that work comes from targeted conversations around pain points. For example, an agent builder might share that it’s taking too long to get security reviews for their projects. To us, that signifies that a security review agent may be useful.

“On our journey to Frontier Firm, we’re working really hard to accelerate processes and remove roadblocks so people can get to value much faster,” says Mykhailo Sydorchuk, a Customer Zero lead for Microsoft 365 integrated experiences at Microsoft Digital. “This is crucial for agentic scenarios because we’re using these iterations to polish and improve the tools we create.”

It’s important to remember that humans will always need to be involved in supporting other humans. But the more assistance agents can provide your support specialists, the more they can focus on tasks that absolutely require human attention. As you consider where AI might fit into your support efforts, our journey can shed some light on the possibilities agents represent.

Learning from our experience with providing support around agents

Emphasize proven agents to minimize the need for support

If you’ve built dedicated first-party agents within your organization, encourage employees to favor those through internal communications. They’re less likely to require support in the first place.

Identify opportunities for AI-driven support

Listen to employees’ pain points and concerns. Recurring themes and issues probably mean there’s an opportunity for agentic support.

Meld adoption and support

Education and skilling initiatives build employee competency to minimize their need for support. If people understand standard use cases thoroughly or know where they can find the right information, they’re more likely to reach out to support specialists only on real edge cases.

Backstop support as much as possible

Microsoft is working to make our tools as self-service as possible. Where gaps appear for your organization’s specific use cases, fill them with IT backstops and employee enablement resources. Ideally, your support team becomes the last resort.

Key takeaways

Here are some key things to remember as you develop your support plan for agents at your company:

  • Build agent expertise within support teams early: Provide targeted training, skilling, and early access so support teams can become trusted agent subject matter experts.
  • Reduce support demand through proactive enablement: Identify IT backstops and employee enablement opportunities that prevent common issues before they require support intervention.
  • Operationalize agentic support at scale: Identify recurring issues across non-developers and professional developers, select high-value opportunities for agentic support, build and test support agents, and actively promote them to drive adoption.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 5: Tracking the impact of your agents

Building the apparatus for effective measurement to ensure our agentic ecosystem drives business value

Effective governance, implementation, adoption, and support don’t mean anything if your agents aren’t driving the impact your organization wants. But how do you understand that impact if you can’t track and measure it? And what should your measurement criteria be?

Within Microsoft Digital and the company’s leadership team, we’re currently thinking through these ideas to ensure we’re capturing all the value agents have to offer. We’re still developing our approach, but the questions we’ve asked and our measurement parameters will be helpful to consider as you track your own agents’ impact.

First, there’s a difference between tracking agent volume, agent usage, and agent value. Employees can create massive numbers of agents, but agents that never get used don’t drive impact. Agent usage is closer to the mark, and it can be a good indicator of which tools are meaningful to employees or deserve promotion throughout your organization. Still, usage doesn’t necessarily correlate to business value.

To really articulate value, you need to dive into the specifics of what you intend your agents to do. There are several dimensions to consider:

  • Types of agents: First-party enterprise agents, third-party agents, line-of-business or team-based tools and individually created agents all have different purposes and capabilities. They need different measurement strategies.
  • Personas: Who is creating the agent, and what are their maturity and needs? What value does a user get compared with a developer or administrator? There’s also team versus individual value. For teams, we tend to measure impact in terms of workflows automated or pain points relieved. For individual users, it’s all about satisfaction, productivity, quality, and efficiency gains.
  • Data: Different agents access varying degrees of data. How do you assess the ways they provide access and deliver insights?
  • Creation versus discovery and usage: We want to encourage both agent creation when it meets a unique need and agent discovery when a useful agent already exists. Each requires its own measurement parameters.
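To make the volume–usage–value distinction concrete, here’s a small sketch against a hypothetical telemetry schema. The field names, the “minutes saved” value proxy, and the numbers are all illustrative:

```python
# Illustrative separation of agent volume, usage, and value signals.
# The telemetry rows and "minutes_saved" proxy are hypothetical.

from collections import Counter

events = [  # one row per agent session
    {"agent": "expense-helper", "user": "a", "minutes_saved": 10},
    {"agent": "expense-helper", "user": "b", "minutes_saved": 15},
    {"agent": "status-bot", "user": "a", "minutes_saved": 0},
]
all_agents = {"expense-helper", "status-bot", "unused-draft"}

volume = len(all_agents)                         # agents created
usage = Counter(e["agent"] for e in events)      # sessions per agent
value = sum(e["minutes_saved"] for e in events)  # business-value proxy
```

In this toy dataset, volume is three, only two agents see any use, and all of the measured value comes from one of them, which is exactly the gap between the three metrics that your measurement strategy needs to surface.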

Our roadmap to agentic impact tracking

We aren’t starting from scratch when it comes to tracking agentic impact. Our Continuous Improvement CoE has already done extensive work aligning targeted and sanctioned AI initiatives with greater business value and tracking them over time. The concept is based on defining top-level value, cascading that value into operational drivers that deliver results, creating action plans and delivering AI solutions to achieve those goals, and then tracking them over time.

We’re currently progressing along a roadmap to a more holistic impact tracking methodology we can use to identify, consolidate, and build agent analytics for all makers, developers, administrators, and Microsoft Digital teams. As time goes on, this approach will accelerate product improvements, improve the builder experience, and cater to reporting and analysis requirements.

Our journey has three main goals:

  1. Authoritative, clean, deduplicated data
  2. A baseline for creation and usage, and well-defined key performance indicator (KPI) targets
  3. Advanced insights to accelerate the agentic ecosystem at Microsoft

In service of these goals, we’re progressing through a five-phase process:

Our five steps for setting up our agent analytics: Set requirements, partner with product teams, establish methodologies, set KPIs, and report and analyze findings.
We’re currently in phases three and four of our five-phase plan for holistic agentic analytics methodology.

As this methodological structure for tracking agentic impact has come together, we’ve used various tools to help us gain visibility. These include Viva Insights, Microsoft 365 admin center, and an internally built declarative agent tracker, with visibility typically provided by Microsoft Power BI. With the release of Microsoft Agent 365, now available through the Frontier program, we’ve gained a more streamlined vehicle for observability and telemetry.

Three feature sets will be especially useful for tracking value:

  • Registry provides a complete view of agents to give us maximum visibility and trackability across our entire agentic ecosystem.
  • Visualization includes measurement features to track agent performance, speed, and quality so we can assess ROI and make informed deployment decisions.
  • Interoperability ensures we can connect to an open ecosystem of both Microsoft and partner tools.

As Customer Zero for Agent 365, we’re excited to have a platform for observability and telemetry that encompasses everything from agentic creation through usage.

We plan to use the following capabilities to improve the overall ecosystem:

  • Filtering our agent inventory on specific criteria like the type of agent or how it was built
  • Enhancing governance-specific actions we can take with agents in areas like ownership and quarantining
  • Gaining visibility into trends like agent usage
  • Ingesting agent blueprints and defining policy templates
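As a rough illustration of the inventory-filtering capability above, here’s a sketch against a hypothetical agent registry. The field names and values are assumptions for the example, not the actual Agent 365 schema:

```python
# Sketch of filtering an agent inventory on criteria such as agent type or
# build method. Field names are illustrative, not a real registry schema.

agents = [
    {"name": "hr-faq", "type": "declarative", "built_with": "Copilot Studio"},
    {"name": "code-review", "type": "autonomous", "built_with": "pro-code"},
    {"name": "team-notes", "type": "declarative", "built_with": "SharePoint"},
]

def filter_inventory(inventory, **criteria):
    """Keep agents whose fields match every supplied criterion."""
    return [agent for agent in inventory
            if all(agent.get(field) == value
                   for field, value in criteria.items())]

declarative_agents = filter_inventory(agents, type="declarative")
```

Criteria compose naturally here, so the same helper answers narrower questions too, such as which declarative agents were built in SharePoint.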

We’re still in the midst of our agentic measurement journey at Microsoft, but the blueprint for tracking already exists. Your organization may be in the early stages of agent readiness and deployment. If that’s the case, it will be helpful for you to internalize the lessons we’ve learned as Customer Zero and apply them as early as possible in your own journey to AI maturity.

Learning from our approach to tracking agentic impact

Think proactively, not retroactively

If you put effort into tracking agentic impact early in your AI maturity journey, you’ll be poised to start capturing insights immediately instead of applying your methodology after the fact.

Involve a wide array of stakeholders

This workstream needs oversight from different kinds of stakeholders, including your leadership team, IT, Microsoft 365 administrators, agent developers and builders, and employee champions. That will provide the sponsorship, expertise, and perspective you need for success.

Establish a continuum of value

Agents need to tie into real business goals, so it’s important to establish metrics that actually speak to those objectives. Cascade business goals to concrete KPIs with well-defined timelines and track those diligently.

Embrace the red

Try to think of underperformance not as failure, but as data. Performance data over time helps you course correct or pivot, making sure you invest where it matters.

Key takeaways

Here are some tips as you develop a strategy for measuring the impact of agents at your organization:

  • Assemble a cross-functional analytics and adoption team: Bring leadership, IT, Microsoft 365 administrators, agent builders, and employee champions together to ensure shared ownership and accountability.
  • Clarify analytics and insight requirements up front: Identify, source, and clearly articulate the data and insights needed to measure agent adoption and impact.
  • Build an analytics foundation and iterate over time: Consolidate data sources, establish baselines, and develop initial analytics that can evolve as usage grows.
  • Define and standardize agent KPIs: Finalize a clear, consistent set of metrics aligned to business outcomes and adoption goals.
  • Turn insights into action through reporting: Apply analytics and reporting to inform decisions, optimize adoption efforts, and drive continuous improvement.

Learn more

How we did it at Microsoft

Further guidance for you

Applying lessons from our agent deployment at your organization

You’ve learned from our AI maturity journey. It’s time to get started on yours.

Becoming a Frontier Firm might seem daunting. But the agent-building and agent-adoption practices we’ve articulated in this guide can help you gradually and thoughtfully progress toward a new organizational blueprint, one that blends machine intelligence with human judgment. It can help you build systems that are AI-operated but human-led.

By capitalizing on the lessons we’ve learned during our internal deployment, you can both speed up the process of building and deploying agents at your company and avoid frustrating pitfalls. If you anchor your work in careful planning and use the steps and resources we’ve provided here, you’ll be on the path toward true business transformation through agentic workflows.


“Embracing AI transformation is an opportunity for IT leaders to take part in defining the future of their organizations. Our role as technical professionals has never been more revolutionary, and our team can support yours as you reimagine workflows to make AI part of your everyday reality.”

You’re not in this alone. If you’re looking for support or knowledge on any aspect of your deployment, reach out to our customer success team.

“Embracing AI transformation is an opportunity for IT leaders to take part in defining the future of their organizations,” says Vijaya Alaparthi, a principal group product manager at Microsoft Digital. “Our role as technical professionals has never been more revolutionary, and our team can support yours as you reimagine workflows to make AI part of your everyday reality.”

Frontier opportunities are present across every aspect of your organization today. Partner with us and take your first steps toward this exciting agentic future.

Key takeaways

This guide captures what we’ve learned as we’ve deployed agents across our entire global organization. Here are the key things to remember as your company moves from early AI adoption to a large and thriving agentic ecosystem:

  • Advance governance early: Establish a strong and trusted data foundation that includes labeling, protections, and a risk-based governance model before enabling broad agent creation. Establishing your governance foundations for Microsoft 365 provides the confidence to open up Copilot without hiding data. Clear guardrails, differentiated oversight, and lifecycle management help ensure safe innovation without sprawl.
  • Follow a maturity roadmap: Use an escalating AI maturity model that progresses from awareness to enterprise-wide adoption and agentic transformation to sequence your rollout. This staged approach aligns AI investments with business goals while building the culture, skills, and infrastructure you need to scale.
  • Drive targeted adoption: Treat agent adoption as its own transformation journey, distinct from assistant-based tools like Microsoft 365 Copilot. Cohort-driven skilling, champion communities, localized learning, and leader-led communications accelerate confidence and empower both makers and users.
  • Empower builders at all levels: Support no-code creators and professional developers with tailored enablement, clear publishing workflows, and accessible resources. This ensures individuals can create personal agents while teams can safely build enterprise-grade tools that unlock high-value scenarios.
  • Reimagine support with AI: Blend embedded governance, flexible IT backstops, and AI-driven support agents to reduce friction and scale help resources. As employees experiment with agents, automated checks, accelerators, and intelligent support tools keep humans focused on true edge cases.
  • Track impact holistically: Distinguish between agent creation, usage, and value by establishing KPIs that map directly to real business outcomes. A unified telemetry and observability layer powered by tools like Microsoft Agent 365 enables clear measurement, optimization, and proof of return on investment.
  • Continuously evolve toward becoming a Frontier Firm: Advance your culture, architecture, governance, and workforce practices iteratively as agentic capabilities grow. By combining human judgment with autonomous agentic operations, your organization can unlock transformational efficiency, innovation, and scale.

Learn more

How we did it at Microsoft

Further guidance for you

Try it out

Get started with Microsoft Agent 365 at your company.

We’d like to hear from you

Want more information? Email us and include a link to this story and we’ll get back to you.

The post Becoming a Frontier Firm: A guide for deploying AI agents based on our experience at Microsoft appeared first on Inside Track Blog.

]]>
22868
Reclaiming engineering time with AI in Azure DevOps at Microsoft http://approjects.co.za/?big=insidetrack/blog/reclaiming-engineering-time-with-ai-in-azure-devops-at-microsoft/ Thu, 16 Apr 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=23161 At Microsoft Digital, the company’s IT organization, we’re reimagining how engineers, product managers, and program managers work. Microsoft Azure DevOps (ADO) is our company’s end-to-end software development lifecycle (SDLC) solution for planning, coding, testing, and delivery. It combines tools for work tracking, source control, pipelines, and artifacts so teams can manage the entire SDLC in […]

The post Reclaiming engineering time with AI in Azure DevOps at Microsoft appeared first on Inside Track Blog.

]]>
At Microsoft Digital, the company’s IT organization, we’re reimagining how engineers, product managers, and program managers work.

Microsoft Azure DevOps (ADO) is our company’s end-to-end software development lifecycle (SDLC) solution for planning, coding, testing, and delivery. It combines tools for work tracking, source control, pipelines, and artifacts so teams can manage the entire SDLC in one environment.

Although ADO excels at streamlining the development process, we found that users were still spending significant time performing repetitive administrative tasks, like creating and breaking down work items, writing and managing queries for reporting, and reclaiming lost permissions.

Our Engineering Systems Platform team successfully embedded AI into ADO, resulting in ADO experiences that replace manual workflows and free up our IT professionals to concentrate on work that makes a real impact.

Identifying the opportunity

The Engineering Systems Platform team supports 15,000 active users across one of the largest ADO platforms at Microsoft.

A photo of Panigrahy.

“We saw the toll these processes took on users, whether they were compiling information or performing manual tasks. Even with automation, there was still an opportunity to give time back to engineers.”

Gopal Panigrahy, principal product manager, Microsoft Digital

Three years ago, the team began exploring opportunities to automate repetitive ADO tasks like creating and updating work items, navigating project data, gathering statuses, and breaking large initiatives into sprint-ready work.

While they found ways to automate some of these tasks, they discovered that decision-making and information synthesis still consumed valuable time and occasionally introduced human error.

“We saw the toll these processes took on users, whether they were compiling information or performing manual tasks,” says Gopal Panigrahy, a principal product manager in Microsoft Digital. “Even with automation, there was still an opportunity to give time back to engineers.”

Adding AI to ADO workflows

ADO has a broad footprint at Microsoft, serving a wide range of enterprise use cases and personas. What these workers have in common is heavy workloads. With this in mind, different categories of ADO users expressed the desire for AI-powered experiences that could help streamline workflows and speed up day-to-day development tasks.

As generative AI matured, our team explored whether they could embed AI technology inside ADO to act as a real-time assistant, handling administrative work and answering contextual questions using natural language.

A photo of Sahoo.

“We saw it as a win-win experiment. If we could give engineers time back in ADO, they could spend it building, not managing artifacts.”

Debashis Sahoo, principal group engineering manager, Microsoft Digital

The guiding principles of the experiment were simple: Stay in context and preserve user control while aligning with existing ADO permissions and processes.

That vision led to the creation of two complementary Microsoft Copilot agents: The DevOps Assistant and the AI Work Item Assistant.

“We saw it as a win-win experiment,” says Debashis Sahoo, a principal group engineering manager in Microsoft Digital. “If we could give engineers time back in ADO, they could spend it building, not managing artifacts.”

What makes this initiative distinctive is that it brings AI closer to the core ADO product and its users. It allows secure, confidential, and context-rich ADO data to be used safely for meaningful AI-powered experiences.

DevOps Assistant offers conversational, in-context support

DevOps Assistant is a chat‑based experience present in the ADO user interface (UI). It’s activated in a side panel where users can ask natural language questions to retrieve information, check project statuses, and run common DevOps actions without navigating away from their main ADO display.

The DevOps Assistant enables cross-source discovery, reducing context switching and discovery time and helping lower the cognitive load for engineers and product managers. With less time spent switching contexts and searching for information, ADO users can move faster and stay focused on product delivery.

Under the hood, the DevOps Assistant is a constellation of specialized agents, each of which is focused on a different segment of the DevOps lifecycle:

  • Work Item Agent creates, refines, and scopes work into sprint-ready backlogs
  • Knowledge Board Agent surfaces the right DevOps knowledge at the right moment
  • Permission Agent handles access and permission requests
  • Bulk Complete Agent runs repetitive, large-scale updates
  • Sprint Board Agent summarizes sprint status and provides instant, prompt‑driven insights
A photo of Gupta.

“We didn’t just build a chatbot. We built a distributed system of agents that understands the intent of the DevOps user and acts on it securely and in context.”

Apoorv Gupta, principal software engineer, Microsoft Digital

Agents are built in Copilot Studio and coordinated by the Orchestrator Agent, Copilot Studio’s front door.

For example, if a user asks to create or refine work items, the Orchestrator Agent routes the request to the Work Item Agent to handle. If the question is about permissions, then it delegates the work to the Permission Agent. It does this for each task.
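The routing pattern described above can be sketched in a few lines. This is an illustrative sketch only: the real assistant is built in Copilot Studio, not hand-rolled code, and every name below (the route table, the classifier, the agent handlers) is a hypothetical stand-in for how an orchestrator might dispatch requests by intent.

```python
# Hypothetical sketch of intent-based routing in the spirit of the
# Orchestrator Agent. Names and logic are illustrative assumptions;
# this is not Copilot Studio's implementation.

from typing import Callable

# Map detected intents to the specialized agent that handles them.
AGENT_ROUTES: dict[str, Callable[[str], str]] = {
    "work_item": lambda req: f"WorkItemAgent handling: {req}",
    "permission": lambda req: f"PermissionAgent handling: {req}",
    "sprint_status": lambda req: f"SprintBoardAgent handling: {req}",
}

def classify_intent(request: str) -> str:
    """Toy keyword classifier; a real orchestrator would use an LLM."""
    text = request.lower()
    if "permission" in text or "access" in text:
        return "permission"
    if "sprint" in text or "status" in text:
        return "sprint_status"
    return "work_item"

def orchestrate(request: str) -> str:
    """Route a natural-language request to the matching agent."""
    return AGENT_ROUTES[classify_intent(request)](request)
```

The value of the pattern is that each specialized agent stays small and testable, while the orchestrator owns only classification and dispatch.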

“We didn’t just build a chatbot,” says Apoorv Gupta, a principal software engineer in Microsoft Digital. “We built a distributed system of agents that understands the intent of the DevOps user and acts on it securely and in context.”

At present, the DevOps Assistant is available across all our internal ADO environments at Microsoft. The plan is to make it available to external customers soon.

AI Work Item Assistant provides inline assistance

The AI Work Item Assistant is a real-time embedded experience within ADO work items. Powered by Microsoft Foundry, it helps users create and refine work items using context and business requirements.

The assistant works immersively, keeping users focused and within ADO as they structure work items or generate child items from the parent.

For product and program managers who start with high‑level ideas, the assistant understands intent. It can automatically suggest logical, sprint‑ready breakdowns, helping to dramatically reduce the time spent on planning, sorting, and prioritizing work items.

Screenshot showing the “Use AI to edit this item” button in the Azure DevOps UI.
The AI Work Item Assistant is just a click away in Azure DevOps work items.

Turning newfound time into innovation

The key to reclaiming time for your workforce isn’t just the introduction of new AI-driven features. It’s using the technology to enforce structure and quality at the beginning, so that everything downstream moves faster.

Panigrahy describes the practice as three reinforcing feedback loops.

The first loop is upstream quality amplification. AI agents help consistently structure work items with clear acceptance criteria and templates. The structure then feeds other tools (such as GitHub Copilot), allowing them to generate higher-quality code and more predictable outcomes—shortening the overall software development lifecycle.

The second feedback loop is acceleration of execution. In a typical sprint planning session, a team of eight engineers might:

  • Take an hour (or more) to manually break user stories into more than 100 tasks
  • Create different tasks in their own style, introducing inconsistency and ambiguity
  • Generate uneven details, then spend time clarifying data later

With DevOps Assistant and AI Work Item Assistant, that same task breakdown turns into a prompt-driven action that no longer requires hours of work.

“It burns a lot of time for everyone to manually create each item in their own way, making sure they’re using the correct inputs from the product manager and confirming they aren’t missing anything,” Panigrahy says. “Now, with AI magic, it takes less than three minutes.”

The third feedback loop is capacity reinvestment. Instead of spending hours on tactical DevOps mechanics, teams can now spend more time on engineering judgment, resulting in better estimation, technical decisions, and design. They can use these reclaimed hours to learn new tools, experiment with new agents, and innovate on the SDLC.

“Capacity saving keeps giving back, in a loop,” Gupta says. “You get more capacity back. You innovate. You learn. You do better.”

What’s next on the AI-in-ADO journey

The DevOps Assistant and the AI Work Item Assistant can help change user behavior, shifting from time spent doing tactical DevOps tasks to performing higher‑value, judgment-based work. These tools can help teams increase work quality and reduce wasted time.

“Our next chapter is about making AI smarter, more action-oriented, and truly agentic,” Sahoo says. “The goal is to reduce cognitive load and allow the experience to live wherever users are—from Azure DevOps to Microsoft Teams and Microsoft 365—so the agent works seamlessly across their workflow.”

AI-driven productivity gains are arguably the biggest opportunity in the industry, and they’re fundamentally redefining the engineering experience at an unprecedented pace.

“While we’ve made huge strides embedding AI into the everyday Azure DevOps experience, it still feels like we’re just getting started,” Sahoo says. “Staying relevant means continuously evolving to deliver ever-greater value and efficiency to engineers.”

Key takeaways

Keep these tips in mind as you get started on your own journey with AI and Microsoft ADO:

  • Treat AI as a strategic accelerator, not as an add-on. Identify where your engineering process can use AI to move from simple assistance to transforming your workflows.
  • Target high-effort, high-volume tasks first. Analyze where your teams are spending significant manual time, even if AI tools are already in place in those workflows.
  • Validate productivity with measurable data, not intuition. Track time reclaimed, workflow efficiency, reduction in manual steps, and user satisfaction. Tangible data can help your initiative earn trust and justify the expansion of AI tool use on your team.

The post Reclaiming engineering time with AI in Azure DevOps at Microsoft appeared first on Inside Track Blog.

]]>
23161
Powering the technical veracity of AI at Microsoft with a Center of Excellence http://approjects.co.za/?big=insidetrack/blog/powering-the-technical-veracity-of-ai-at-microsoft-with-a-center-of-excellence/ Thu, 16 Apr 2026 14:15:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=23147 When we launched our AI Center of Excellence (CoE) in 2023, we had a straightforward goal: Help our organization experiment with AI, learn quickly, and do it responsibly. Our teams across Microsoft Digital—the company’s internal IT organization—leaned in. We built tools, workflows, and AI enabled solutions at speed. Momentum followed, along with real enthusiasm and […]

The post Powering the technical veracity of AI at Microsoft with a Center of Excellence appeared first on Inside Track Blog.

]]>
When we launched our AI Center of Excellence (CoE) in 2023, we had a straightforward goal: Help our organization experiment with AI, learn quickly, and do it responsibly.

Our teams across Microsoft Digital—the company’s internal IT organization—leaned in. We built tools, workflows, and AI-enabled solutions at speed. Momentum followed, along with real enthusiasm and growth.

A photo of Wu.

“We did a lot of good work building community and excitement. But at some point, we needed to evolve and put more structure around what we’d built.”

Qingsu Wu, principal group product manager, Microsoft Digital

But increasing scale required us to evolve our approach.

As adoption accelerated, we began to see duplication, uneven governance, and growing gaps between strategy and delivery. What helped us move fast early on wasn’t enough to sustain impact over time.

“We did a lot of good work building community and excitement,” says Qingsu Wu, a principal group product manager who leads the AI CoE at Microsoft Digital. “But at some point, we needed to evolve and put more structure around what we’d built.”

AI agents and solutions began appearing across Microsoft Digital. Different teams solved similar problems. Standards were interpreted differently. Reporting was inconsistent, and in many cases manual.

The question was no longer, “How do we help teams try AI?” It became, “How do we turn AI into consistent, measurable outcomes at scale?”

Answering that question required a change in how our CoE operated.

Rather than acting primarily as an advisory group, the AI CoE evolved into an execution‑focused function. Its role expanded from guidance to coordination, helping set priorities, define guardrails, and connect AI work directly to business outcomes.

The goal wasn’t to slow AI innovation down, but to help it move in the right direction with more agility and better scalability.

Evaluating AI for Microsoft

The AI CoE connects AI strategy to execution across Microsoft Digital. It operates as a cross‑functional coordination layer that sets direction and creates shared accountability for how AI work gets done.

A photo of Khetan.

“We can see patterns that a single team can’t. We’re translating AI CoE strategy and enterprise priorities into clear execution plans that work in each organization’s context. That helps us align priorities and make sure the biggest bets are actually landing.”

Ria Khetan, senior program manager, Microsoft Digital

The CoE brings our leaders and practitioners together from AI, data, responsible AI, and operations to answer questions collectively. We use that cross‑disciplinary view to operate above individual projects without losing touch with day‑to‑day reality.

The CoE looks across the organization and answers questions individual teams can’t answer on their own.

  • What AI initiatives are already in flight?
  • Which ones matter most to the business?
  • Where are teams duplicating effort?
  • Where do we need clearer standards or stronger governance?

“We can see patterns that a single team can’t,” says Ria Khetan, a senior program manager in Microsoft Digital who helps lead program management for the AI CoE. “We’re translating AI CoE strategy and enterprise priorities into clear execution plans that work in each organization’s context. That helps us align priorities and make sure the biggest bets are actually landing.”

We’ve designed the AI CoE to act as the connective tissue between leadership intent and execution on the ground. It helps ensure that AI work across Microsoft Digital moves forward with purpose, consistency, and measurable impact.

Building transformation on core pillars

The AI CoE establishes a common structure that helps our teams work toward the same outcomes, even when they are building different solutions.

A photo of Campbell.

“We use the CoE to bring consistency to how AI work gets done. It gives us a way to step back and ask whether we’re solving the right problems and whether we’re set up to scale.”

Don Campbell, principal group technical program manager, Microsoft Digital

The operating model is intentionally simple.

AI initiatives are reviewed against shared pillars that help teams think beyond individual projects. These lenses ensure the work aligns to business priorities, can scale safely, has a clear delivery path, and supports responsible adoption.

“We use the CoE to bring consistency to how AI work gets done,” says Don Campbell, a principal group technical program manager who leads AI strategy here in Microsoft Digital. “It gives us a way to step back and ask whether we’re solving the right problems and whether we’re set up to scale.”

Our CoE uses these four pillars to guide our work:

  • Strategy. We work with product and feature teams to determine what we want to achieve with AI. They define business goals and prioritize the most important implementations and investments.
  • Architecture. We enable infrastructure, data, services, security, privacy, scalability, accessibility, and interoperability for all our AI use cases.
  • Roadmap. We build and manage implementation plans for all our AI projects, including tools, technologies, responsibilities, targets, and performance measurement.
  • Culture. We foster collaboration, innovation, education, and responsible AI among our stakeholders.

These pillars are the common language that helps us connect strategy to execution and make decisions across all teams and scenarios at Microsoft Digital.

Strategy

Our CoE strategy team’s role is to step back and create clarity.

Our strategy is driven from the organization’s top level, and executive sponsorship is crucial to executing our implementation well. When our transformation mandate comes from the organization’s leader, it resonates in every corner of the organization, every piece of work, and every task. We also encourage and welcome ideas from every level of the organization, empowering individuals to contribute their AI insights.

We maintain a centralized view of AI initiatives across Microsoft Digital, including agents, workflows, and AI‑enabled solutions. That visibility allows our CoE team to identify duplication, surface opportunities to scale successful ideas, and align investments to enterprise priorities. This creates a shared intake and prioritization model.

One of our CoE strategy team’s most significant responsibilities is prioritizing the idea pipeline for AI solutions. All employees can feed ideas into the pipeline through a form that records important details. The strategy team then evaluates each idea, analyzing two primary metrics:

  • Business value. How important is the solution to our business? Potential cost reduction, market opportunity, and user impact all factor into business value. As our business value increases, so does the idea’s position in the pipeline priority queue.
  • Implementation effort. We focus on clearly defining the problem statement—what the problem is, why it matters, who the customer is, the baseline metrics, and the plan to attribute value pre‑production. This ensures we prioritize AI for the most critical business problems and can measure impact before and after deployment.
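The prioritization described above can be made concrete with a simple model. This is a hypothetical sketch, not the CoE’s actual formula: the field names, score ranges, and value-over-effort ratio are all illustrative assumptions about how ideas scored on business value and implementation effort might be ordered in a pipeline queue.

```python
# Hypothetical idea-pipeline ranking: high-value, low-effort ideas
# surface first. The scoring scheme is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    business_value: int       # 1 (low) .. 5 (high)
    implementation_effort: int  # 1 (low) .. 5 (high)

def priority(idea: Idea) -> float:
    """Higher value and lower effort push an idea up the queue."""
    return idea.business_value / idea.implementation_effort

def prioritize(pipeline: list[Idea]) -> list[Idea]:
    """Return the pipeline ordered from highest to lowest priority."""
    return sorted(pipeline, key=priority, reverse=True)
```

Even a toy model like this forces the useful conversation: an idea only rises in the queue when someone can articulate both its value and its cost.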

By anchoring AI work in business outcomes from the start, the strategy pillar helps ensure the organization’s energy is spent on the work that matters most.

Architecture

Our architecture pillar defines how we help teams scale AI solutions without creating security gaps, compliance issues, or technical debt they’ll have to unwind later.

“The CoE introduces a framework to enable design reviews in the early development phase. We help make sure teams are choosing the right platforms and thinking about security and compliance from the beginning.”

Qingsu Wu, principal group product manager, Microsoft Digital

Before solutions move into broader use, our architecture team helps think through data readiness, platform alignment, and governance requirements. The goal isn’t to prescribe a single architecture, but to make sure foundational decisions won’t limit scale or create risk down the line. Often this work happens before development begins; other times it means making improvements after the initial development is done and the product or scenario is launched and in use. We also track our efforts with measurable metrics like usage.

One common pitfall is that teams may gravitate toward the most flexible platforms with full control, without fully understanding the associated security and compliance implications. To address this, we publish clear guidance to help teams choose the right platform—one that strikes the appropriate balance between flexibility and the security and compliance effort required.

Our architecture pillar helps prevent that by reinforcing a set of common expectations. Teams still build locally and move fast, but they do so within a framework that supports reuse, interoperability, and responsible operation, enabling teams and employees to experiment within guardrails that keep our production systems safe.

“The CoE introduces a framework to enable design reviews in the early development phase,” Wu says. “We help make sure teams are choosing the right platforms and thinking about security and compliance from the beginning.”

Teams are encouraged to build on recommended platforms and services that support enterprise‑grade security, observability, and lifecycle management. This helps ensure solutions can be monitored, governed, and supported over time.

Security and compliance are never treated as downstream checkpoints. Architectural guidance reinforces the need to design with identity, access controls, auditability, and responsible AI principles from the start.

When solutions prove valuable, we look for opportunities to reuse architectural patterns, components, or services rather than rebuilding them in isolation. This reduces duplication and accelerates future work.

Roadmap

Our CoE roadmap team examines the employee experience in the context of our AI solutions and governs how we achieve the best experience throughout AI projects. It focuses on how our employees will interact with AI. Getting the roadmap right ensures user experiences are cohesive and align with our broader employee experience goals.

We’ve recognized AI’s potential to impact how our employees get their work done.

Their experiences and satisfaction levels with AI services and tools are critical. Our roadmap pillar is designed to encourage experiences across all these services and tools that are complementary and cohesive.

We’re focusing on the open nature of AI interaction.

“We’re surfacing AI capabilities and information when the user needs them, according to their context,” Campbell says. “It makes the user experience and user interface for an AI service less important than how the service allows other applications or user interfaces to interact with it and harness its power.”

A key part of this approach is disciplined experimentation.

Rather than treating every idea as a long‑term commitment, the roadmap pillar helps teams validate value early. Our teams know when they’re in an experimental phase and when they’re expected to operationalize. This gives our leaders a more consistent view of progress and risk. The net result is that dependencies between teams surface earlier, when they’re easier to resolve.

Culture

Our culture pillar ensures that AI adoption across Microsoft Digital is intentional, responsible, and sustainable.

Culture underpins everything we do in the AI space. Ensuring our employees can increase their AI skillsets and access guidance for using AI responsibly is critical to AI at Microsoft.

“We’re driving a shift from ad‑hoc AI usage to intentional, outcome‑driven adoption,” Khetan says. “That requires clarity, education, and shared expectations.”

In practice, that means the culture pillar defines how our teams are expected to adopt AI and integrate it into their work, not just what tools they can use.

Our culture team works with AI champions across the organization to translate enterprise AI priorities into local execution. Those champions act as two‑way conduits, bringing real‑world feedback and blockers back to the CoE and carrying guidance, standards, and learnings back to their teams.

Without this structure, AI adoption tends to fragment as teams experiment in isolation.

Our culture team has published training, recommended practices, and our shared learnings on next-generation AI capabilities. We work with individual business groups at Microsoft to determine the needs of all the disciplines across the organization. That work extends to groups as diverse as engineering, facilities and real estate, human resources, legal, sales, and marketing, among others. 

Responsible AI is embedded throughout that work.

The CoE reinforces responsible AI practices as part of everyday decision‑making—during design, experimentation, and scale. Teams are expected to understand not just what they’re building, but the implications of how they build it.

In the AI CoE, culture isn’t abstract. It shows up in how teams propose ideas, how they design solutions, and how they measure success.

Fostering agent innovation

The true value of the AI CoE is evident when strategy, architecture, roadmap, and culture come together around real work.

A clear example of that is how we addressed the rapid growth of AI agents across the organization.

A photo of Tiwari.

“That’s the core problem we’re trying to solve. In the past, admins had to go to multiple portals just to understand how many agents exist, and they all give different answers.”

Garima Tiwari, principal product manager, Microsoft Digital

Our teams were building agents in different platforms, for different scenarios, and at very different levels of maturity. That flexibility accelerated innovation, but it also made it difficult to answer basic questions.

  • How many agents exist today?
  • Which ones are in production?
  • Which ones touch sensitive data?

The strategy lens helped clarify what mattered most. Our goal wasn’t to inventory every experiment. It was to gain visibility into agents that were active, scaling, or depended on by others, and to ensure those agents aligned to business priorities and responsible AI expectations.

Architecture quickly followed.

As the CoE looked at how agents were built, we quickly discovered that information about agents was fragmented across tools. Different platforms showed different numbers. Ownership wasn’t always clear. And governance signals were hard to reconcile.

“That’s the core problem we’re trying to solve,” says Garima Tiwari, a principal product manager in Microsoft Digital leading our internal strategy and adoption of Agent 365. “In the past, admins had to go to multiple portals just to understand how many agents exist, and they all give different answers.”

This is where Agent 365—which we use to govern agents here at Microsoft—became a critical enabler.

Agent 365 brings together signals from multiple agent‑building platforms into a single, consolidated view. That visibility allows the CoE and administrators to understand agent inventory, ownership, lifecycle state, and governance posture in one place.

“Agent 365 is really about accurate inventory and observability,” Tiwari says. “It provides one number we can trust and a way to see how agents are behaving, who they’re interacting with, and whether they’re compliant.”
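The consolidation idea described here, merging per-platform agent records so administrators see one trusted number, can be sketched briefly. This is an illustrative assumption about the pattern, not the Agent 365 API: the record fields (`id`, `state`, `last_seen`) and the merge rule are hypothetical.

```python
# Hypothetical sketch of consolidating agent records from several
# platforms into one deduplicated inventory. Not the Agent 365 API;
# all field names and merge rules are illustrative assumptions.

def consolidate(sources: list[list[dict]]) -> dict[str, dict]:
    """Merge per-platform agent records keyed by agent id, preferring
    the record with the most recent last_seen timestamp."""
    inventory: dict[str, dict] = {}
    for platform_records in sources:
        for record in platform_records:
            existing = inventory.get(record["id"])
            if existing is None or record["last_seen"] > existing["last_seen"]:
                inventory[record["id"]] = record
    return inventory
```

Keying on a single stable agent identity is what turns several conflicting per-portal counts into one answer administrators can trust.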

That architectural clarity changed how decisions were made.

Instead of guessing what was safe to scale, the CoE could see which agents were production‑ready, which needed remediation, and which should remain in experimentation. Security, privacy, and compliance considerations moved to earlier in the lifecycle.

“We can’t scale what we don’t understand,” Wu says. “Agent 365 helps us see what’s actually running so we’re not scaling something blindly.”

The roadmap lens then brought structure to execution.

“What changed was the mindset. Teams started thinking about manageability, security, and scale much earlier, not after an agent was already deployed.”

Don Campbell, principal group technical program manager, Microsoft Digital

Rather than standardizing everything at once, the CoE helped teams sequence work. Some agents stayed in pilot. Others moved toward broader rollout, informed by architectural and governance signals surfaced through Agent 365.

Culture and enablement ran alongside that work.

Teams began factoring operational readiness into design decisions instead of treating governance as a final checkpoint. Agent 365 isn’t positioned as a control tool at the end of the process, but as part of building agents the right way from the start.

“What changed was the mindset,” Campbell says. “Teams started thinking about manageability, security, and scale much earlier, not after an agent was already deployed.”

The outcome wasn’t a single standardized solution.

It was a repeatable approach within a shared CoE framework, supported by platforms like Agent 365, that made scaling AI more visible, more manageable, and more intentional.

That’s what the AI CoE enables at Microsoft Digital.

Key takeaways

If you’re just starting to consider AI usage at your organization, or if you’re already creating a standardized approach to AI, consider the following:

  • Start with outcomes, not tools. AI work scales faster when teams align on the business problem first and select technology second.
  • Design for scale from day one. Early architectural decisions around data, security, and platforms determine whether solutions can grow—or need to be rebuilt.
  • Make experimentation disciplined. Clear paths from prototype to production help teams move fast without committing to ideas that haven’t proven value.
  • Treat governance as an enabler, not a gate. Visibility and manageability, supported by platforms like Agent 365, make it easier to scale AI responsibly.
  • Create shared accountability. Standard metrics and automated reporting turn AI activity into measurable progress.

The post Powering the technical veracity of AI at Microsoft with a Center of Excellence appeared first on Inside Track Blog.

]]>
23147
Olutunde Makinde: From Lagos to Redmond, a Microsoft IT engineer’s journey http://approjects.co.za/?big=insidetrack/blog/olutunde-makinde-from-lagos-to-redmond-a-microsoft-it-engineers-journey/ Thu, 02 Apr 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22855 A career in Microsoft Digital, the company’s internal IT organization, puts employees at the center of one of the world’s most complex and forward‑leaning enterprise environments. This is the team that runs Microsoft on Microsoft technology and services—maintaining more than a million computing devices, enabling global collaboration, and shaping the employee experience for more than […]

The post Olutunde Makinde: From Lagos to Redmond, a Microsoft IT engineer’s journey appeared first on Inside Track Blog.

]]>
A career in Microsoft Digital, the company’s internal IT organization, puts employees at the center of one of the world’s most complex and forward‑leaning enterprise environments. This is the team that runs Microsoft on Microsoft technology and services—maintaining more than a million computing devices, enabling global collaboration, and shaping the employee experience for more than 200,000 people.

To accomplish these huge tasks, it’s essential to cultivate a range of perspectives, expertise, and lived experiences.

Olutunde Makinde is an example of this.

A photo of Makinde.

“A friend once laughed at me back in college when I said I wanted to work at Microsoft, like it was impossible. But I knew I could achieve the impossible if I could just be focused. I never gave up.”

Olutunde Makinde, senior service engineer, Microsoft Digital

Makinde, a senior service engineer in Microsoft Digital, came to the company the long way around—roughly 7,000 miles away from the Redmond, Washington, headquarters, in fact. He’s originally from Lagos, Nigeria.

As a global organization, Microsoft builds teams where people with different experiences and life journeys actively influence how products, services, and internal platforms are designed. Makinde, commonly known around the office as “Tunde” (“rhymes with Sunday,” he notes), embodies that diverse approach, bringing his unique insights and experiences to critical work at the company.

“A friend once laughed at me back in college when I said I wanted to work at Microsoft, like it was impossible,” Makinde says. “But I knew I could achieve the impossible if I could just be focused. I never gave up.”

Launching an IT career in Nigeria

Makinde’s journey to Microsoft began with earning a degree in computer engineering in Lagos, after which he found work as a network engineer. He spent the next several years developing his skills through certifications and other learning opportunities.

“I did a lot of self-paced training, learning how to configure Cisco routers. Eventually I became a Cisco-certified network professional (CCNP),” Makinde says. “Around that time, I had a friend who was preparing for Windows Server 2008 certifications, and through his study materials I started learning more about Microsoft and its products.”

Makinde’s first direct encounter with Microsoft came in 2014, when the company he worked for received a contract to deploy the first Microsoft Azure cloud installation in Nigeria.  

“I spent the last day of 2014 and the first day of 2015 at the customer site, figuring out how to connect their on-premises network to Azure,” Makinde says. “It had never been done before in Nigeria, and taking up that challenge really propelled me into the world of Microsoft-specific technology.”

From there, Makinde set his sights on a career at Microsoft. He parlayed his initial exposure to cloud architecture into a focus on Azure, as well as Amazon Web Services. After spending some time in the United Kingdom, he achieved his goal when he was hired by the Microsoft Digital team in 2022. He moved to the United States in 2025.

He credits support from his family, especially his wife, with helping him achieve his dreams.

“My wife was a pillar of support through every career transition, from Nigeria to the UK to the United States,” Makinde says. “She believed in me when I faced rejections, celebrated with me when I finally got the offer, and now keeps me grounded whenever work gets intense. I couldn’t have made this journey without her.”

Making an impact from day one

Kathren Korsky, a principal technical program manager in Microsoft Digital and Makinde’s hiring manager, remembers the impression he made right away. It was clear that Makinde’s experience and technical background were major assets.

“What caught my attention was how well-prepared he was for the conversation and how well he communicated,” Korsky says. “The stories he shared about his work with Azure deployment in Nigeria really drew my interest. But I was also intrigued by how he was able to bridge technology with the business world, working with different banks across the continent to gather requirements, understand them, and build solutions.”

Upon being hired at Microsoft, he initially worked remotely from the UK on a Redmond-based device and application management team. The team was looking to deploy Cloud PC internally and needed a system in which employees could request access and get approvals to use Cloud PCs.

“He was able to stand up a full Power Automate workflow within a short period, and with a very high degree of quality,” Korsky says. “Rarely did anyone find any defects or bugs in his system.”

Makinde’s designs drove value moving forward as well, as the team made updates to his initial workflows.

A photo of Korsky

“His design was so strong that we were basically able to follow exactly what he had created in Power Platform and build that exact same design in ServiceNow. It really expedited that whole process.”

Kathren Korsky, principal technical program manager, Microsoft Digital

ServiceNow was more commonly used for systems that involved access requests and approvals, so when the team migrated the workflow from Power Automate, they found Makinde’s original design was durable enough to weather the shift.

“His design was so strong that we were basically able to follow exactly what he had created in Power Platform and build that exact same design in ServiceNow,” Korsky says. “It really expedited that whole process.”

Driving efficiency and managing change

Since moving to the United States to work at company headquarters, Makinde has continued to push important projects forward—working with different stakeholders to deploy policy changes across Microsoft, managing the Change Advisory Board (CAB) intake process, and driving configuration updates for security and first-party product deployments.

“There’s a lot of diligence required to see the edge cases happening, to pay attention to them, and to watch out for potential problems. Tunde stops rollouts regularly to flag potential defects or risks, which prevents issues from interrupting our work and reducing productivity.”

Jeff Duncan, principal service engineering manager, Microsoft Digital

Makinde learned how to assess change requests and understand risk profiles, as well as enforce best practices for managing change within the security environment. Within about a year, he was able to take the lead in the space and own the deployment process.

A single misconfigured policy can cause major disruption. Makinde’s role puts him in position to be the checkpoint that prevents incidents before they happen.

“There’s a lot of diligence required to see the edge cases happening, to pay attention to them, and to watch out for potential problems,” says Jeff Duncan, principal service engineering manager in Microsoft Digital and Makinde’s manager. “Tunde stops rollouts regularly to flag potential defects or risks, which prevents issues from interrupting our work and reducing productivity.”

Softer skills like transparency, collaboration, and clear communication across levels and teams are key aspects of Makinde’s work as well.

“Tunde is thoughtful and detail-oriented, and he’s very good at explaining the decision-making process when he provides overviews for leadership,” Duncan says. “There’s rational, logical reasoning behind the decisions he makes.”

Makinde has implemented new efficiencies in how he manages the CAB and deployment service using AI. This includes CABBIE—an AI-powered agent that automates CAB communications. For Intune deployments, he uses AI to streamline deployment coordination and package reviews. These innovations reflect our Customer Zero approach to AI adoption here in Microsoft Digital.

“We run weekly CAB meetings to review change requests. That comes with a lot of communication work — status updates, follow-ups, coordination with stakeholders. It was all manual,” Makinde says. “CABBIE pulls the data from Azure DevOps, generates the emails, updates requests, and logs approvals automatically. It saves time and reduces errors.”
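Based on Makinde’s description, the core of a CABBIE-style summarizer might look like the sketch below. This is a hypothetical shape, not CABBIE’s actual implementation: the function name, record fields, and statuses are illustrative, and the real agent pulls its records from Azure DevOps rather than from in-memory data.

```python
# Hypothetical sketch of a CABBIE-style step: given change requests already
# pulled from a work-item tracker, draft the weekly CAB status summary.
# All field names ("id", "title", "status", "owner") are illustrative.

def draft_cab_email(change_requests):
    """Group change requests by status and render a plain-text summary."""
    by_status = {}
    for cr in change_requests:
        by_status.setdefault(cr["status"], []).append(cr)

    lines = ["Weekly CAB summary", ""]
    for status in sorted(by_status):
        lines.append(f"{status} ({len(by_status[status])}):")
        for cr in sorted(by_status[status], key=lambda c: c["id"]):
            lines.append(f"  #{cr['id']} {cr['title']} (owner: {cr['owner']})")
        lines.append("")  # blank line between status groups
    return "\n".join(lines).rstrip()
```

In a real workflow, the input records would come from an Azure DevOps query and the rendered text would feed an email or Teams message, but the grouping-and-rendering core would stay roughly the same.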

Success at Microsoft Digital: Aptitude and curiosity

As the organization at the center of the company’s own digital transformation, we in Microsoft Digital function as a living showcase of what’s possible with Microsoft technology. Our team tests new capabilities at enterprise scale as Customer Zero for Microsoft, identifying gaps and providing insights to ensure our customers benefit from what we’ve learned.

Because the impact of Microsoft Digital extends far beyond internal systems, team members have to set the standard for digital excellence. They must demonstrate what enterprise transformation looks like in practice and empower customers with the confidence to pursue their own modernization journeys.

Hiring talented people like Makinde is essential to this mission.

“There are three core traits I look for when hiring—aptitude, attitude, and curiosity,” Korsky says. “Aptitude is not only what you currently know, but your propensity and desire to learn and grow those skills. Attitude goes hand in hand with that—are you willing to demonstrate grit and perseverance? And then curiosity, because so much of what we do from an innovation perspective requires a willingness to challenge assumptions and think of completely new ways of doing things.”

Makinde’s journey here at Microsoft Digital embodies and illustrates the company’s larger story: how technical expertise, innovative thinking, and a commitment to continuous learning combine to deliver world-class results.

“I’m now up to 25 certifications, and I continue to learn how to do more at Microsoft to positively impact the organization and protect our employees’ experience across applications and devices.”

Olutunde Makinde, senior service engineer, Microsoft Digital

That attitude of persistent curiosity and the willingness to keep learning continue to fuel Makinde’s experience at Microsoft. 

“Self-improvement is a way of life for me that has driven my career forward,” Makinde says. “At an early stage in my career, I did a lot of self-training—from learning how to configure Cisco routers and switches, to migrating on-premises workloads to Azure and managing cloud resources. I’m now up to 25 certifications, and I continue to learn how to do more at Microsoft to positively impact the organization and protect our employees’ experience across applications and devices.”

Key takeaways

Olutunde Makinde’s career experience here in Microsoft Digital offers some important insights that you can apply to your own organizational development:

  • AI adoption starts with practical problems. Makinde’s use of AI to streamline CAB communications and deployment coordination shows how Customer Zero teams find real-world applications for emerging technology.
  • Different experiences and perspectives contribute to business success. Achieving ambitious goals as an organization is dependent upon attracting talented people like Makinde from a range of backgrounds, disciplines, and lived experiences.
  • Strong technical skills paired with innovative thinking drive value. Makinde’s contributions to flexible cloud deployment workflows are an example of how this combination pays dividends.
  • Proactive risk management and attention to detail can prevent large-scale disruptions. By being willing to stop rollouts and flag risks before they become problems, Makinde’s approach to his work exemplifies how thoughtful decision-making safeguards productivity and security.
  • Persistence, curiosity, and continuous learning are critical career accelerators. Having a long and successful career at a company like Microsoft goes beyond just technical aptitude; it also requires perseverance and a passion for learning. Makinde’s self-driven training efforts and his refusal to give up have enabled him to achieve what once seemed impossible.

The post Olutunde Makinde: From Lagos to Redmond, a Microsoft IT engineer’s journey appeared first on Inside Track Blog.

]]>
22855
Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft http://approjects.co.za/?big=insidetrack/blog/responsible-ai-why-it-matters-and-how-were-infusing-it-into-our-internal-ai-projects-at-microsoft/ Thu, 26 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=19289 Like the computer itself and electricity before it, AI is a transformational technology. It’s providing never-before-seen opportunities to reimagine productivity, address major social challenges, and democratize access to technology and knowledge. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic […]

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

]]>
Like the computer itself and electricity before it, AI is a transformational technology. It’s providing never-before-seen opportunities to reimagine productivity, address major social challenges, and democratize access to technology and knowledge.

As AI reshapes how we work and live, it brings with it both transformative potential and complex challenges. Across the industry, concerns about bias, safety, and transparency are growing.

At Microsoft, we believe that realizing AI’s benefits requires a shared commitment to responsibility—one we take seriously. As a result, we aren’t just creating AI solutions. We’re taking the lead on infusing responsible AI principles into our technology and organizational practices.

Prioritizing responsible AI across Microsoft

The most impressive AI-powered capabilities in the world mean nothing if people don’t trust the technology. Microsoft and many of our customers across all industries are working to strike the right balance between innovation and responsibility.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust. Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

Mike Jackson, head of AI Governance, Enablement, and Legal, Microsoft Office of Responsible AI

IT leaders and CXOs aren’t just deploying AI tools. They’re also thinking of the right guardrails to implement around those tools as their organizations mature. Meanwhile, developers and deployers want to be sure they’re building and implementing AI solutions within the bounds of responsibility.

As an organization that’s mapping the frontier of AI while creating business-ready tools for our customers, Microsoft is shaping the global conversation on responsible AI. We don’t only accomplish that through policy and governance, but also by embedding responsibility into the ways we build, deploy, and scale AI.

Laying the foundation for this work is the duty of our Office of Responsible AI (ORA). This team brings policy and governance expertise to the responsible AI ecosystem at Microsoft.

“We’re on a multi-year journey born out of the need to support innovation—and do it in a way that builds trust,” says Mike Jackson, head of AI Governance, Enablement, and Legal for the Office of Responsible AI. “Along the way, we’ve continued to iterate and evolve the program through a series of building blocks.”

ORA advances AI development, deployment, and secure and trustworthy innovation through governance, legal expertise, internal practice, public policy, and guidance on sensitive uses and emerging technology. The team focuses on empowering innovation while ensuring it falls within Microsoft’s governance, compliance, and policy guardrails.

ORA also partners closely with product and engineering teams as well as other trust domains like privacy, digital safety, security, and accessibility. The team created our Microsoft Responsible AI Standard, the cornerstone of our governance framework, and ensures internal AI initiatives align with it.

The Responsible AI Standard translates our six principles into actionable requirements for every AI project across Microsoft:

Fairness

AI systems should treat all people equitably. They should allocate opportunities, resources, and information in ways that are fair to the humans who use them.

Privacy and security

AI systems should be secure and respect privacy by design.

Reliability and safety

AI systems should perform reliably and safely, functioning well for people across different use conditions and contexts, including ones they weren’t originally intended for.

Inclusiveness

AI systems should empower and engage everyone, regardless of their background, striving to be inclusive of people of all abilities.

Transparency

AI systems should ensure people correctly understand their capabilities.

Accountability

People should be accountable for AI systems with oversight in place so humans can maintain accountability and remain in control.

ORA reports into the Microsoft Board of Directors and collaborates with stakeholders and teams across the company to operationalize these principles, implementing policies and practices that apply to AI applications. They determined that every AI initiative should undergo an impact assessment to ensure it aligns with the standard.

If ORA is our compass for responsible AI, our companywide Responsible AI Council has its hands on the steering wheel.

The council, led by Chief Technology Officer Kevin Scott and Vice Chair and President Brad Smith, was formed at the senior leadership level as a forum and source of representation across research, policy, and engineering. It provides leadership, strategic guidance, and executive support and sponsorship to advance strategic objectives around innovation and responsible AI.

A photo of Tripathi.

“ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI team

Under the council’s guidance, responsible AI CVPs, division leaders, and a network of responsible AI champions across the company operationalize the implementation of our Responsible AI Standard and compliance with our policies.

The structure of these teams is straightforward.

Every division has a designated CVP and division lead to steer the work and connect their team to the overarching Responsible AI Council. Within those divisions, each organization has a lead responsible AI champion or a set of co-leads to steer their team of champions. Those champions act as subject matter experts, reviewers for the impact assessment process, and points of contact for the teams developing AI initiatives.

Implementing AI governance within Microsoft IT

As members of the company’s IT organization, Microsoft Digital’s responsible AI division lead and champion team have a special role to play. They helped develop a critical internal workflow tool, which has now become a mandatory part of our responsible AI assessment process.

“The key is to ensure full alignment of responsible AI practices with ORA,” says Naval Tripathi, principal engineering manager and co-lead for Microsoft Digital’s Responsible AI Team. “ORA has established clear principles and a step-by-step assessment framework and tool. Our responsibility is to rigorously follow this process and ensure compliance across our products and initiatives.”

This tool logs every project, guides AI developers through initial impact assessments all the way to final reviews, and facilitates those workflows for champions.

A photo of Po.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process. This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users.”

Thomas Po, senior product manager, Microsoft Digital

By streamlining the process through a unified portal, the tool increases efficiency and minimizes errors that can arise from manual processes. It also encourages teams to make responsible AI part of the software development lifecycle (SDL) itself, not a hurdle or an afterthought.

“As organizations develop a diverse ecosystem of AI agents, often created by multiple engineering teams, it becomes essential to establish a standardized evaluation process,” says Thomas Po, a senior product manager working on Campus Services agents. “This ensures every agent adheres to enterprise-level standards before we deploy and distribute it to end users. That makes it more manageable in the long term, and having it all in one tool gives us more transparency.”

Our unified internal workflow looks like this:

  • Project initiation and system registration: During the design phase for an AI initiative, the engineering team accesses the portal and registers a new AI system. From there, they fill out fields with crucial information, including a title, description, the developer team’s division, whether the project will include internal or external resources, the relevant champion who should review their initiative, and other details. Within this initial form, different scenarios will trigger different review parameters and requirements, for example, when a team intends to publish a tool externally or engage with sensitive use cases.
  • Release assessment: After the system registration is complete, the team initiates the release assessment, a much more thorough review designed to ensure the AI-powered solution is ready to go live. At this point, the engineering team needs to provide detailed documentation. That includes the volume and kinds of data the system will use, potential harms and mitigations, and more. A release assessment includes experts in our Office of Responsible AI, Security, Privacy, and other teams, who review sensitive use cases or initiatives that include generative AI.

If the project clears all the requirements and reviews, it’s ready to go live. Crucially, we don’t think of these stages as a set of hurdles teams need to clear to complete their projects. Instead, the process guides engineering teams through the design elements they need to consider and provides opportunities for feedback from subject matter experts.
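The gating described above — registration fields triggering different review parameters, then a release assessment before go-live — can be sketched as a small rule set. The field names and review lists here are assumptions for illustration, not the internal tool’s actual schema:

```python
# Illustrative model of the two-stage assessment flow. The registration
# fields ("external_release", "sensitive_use", "generative_ai") and review
# names are hypothetical, not the internal tool's real schema.

def required_reviews(registration):
    """Derive which reviews a system registration triggers."""
    reviews = {"responsible_ai_champion"}  # every project gets a champion review
    if registration.get("external_release"):
        reviews.add("external_release_review")
    if registration.get("sensitive_use") or registration.get("generative_ai"):
        reviews.add("office_of_responsible_ai")
    return reviews

def ready_to_go_live(registration, completed_reviews):
    """A project ships only when every triggered review has signed off."""
    return required_reviews(registration) <= set(completed_reviews)
```

The point of the sketch is the shape of the process: different scenarios widen the set of required sign-offs, and go-live is blocked until that set is satisfied.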

“The tool captures all the requirements from ORA and incorporates them into a developer-friendly workflow,” says Padmanabha Reddy Madhu, principal software engineer and responsible AI champion for Employee Productivity Engineering in Microsoft Digital. “It’s also a great way to pull AI champions into the design phase so we can support our colleagues’ work.”

With more than 80 AI projects currently underway across Microsoft Digital, logging and streamlining are essential. Teams are working on all kinds of ways to boost enterprise processes and employee experiences, like the following examples from Campus Services that users can access through our Employee Self-Service Agent:

  • A facilities agent helps employees take action when they discover an issue at one of our buildings, like a burnt-out light, a spill, or physical damage. The agent creates a ticket to alert a Facilities team so they can resolve it and allows the submitter to follow up on progress.
  • A campus event agent makes onsite gatherings like talks and Microsoft Garage build-a-thons more discoverable through simple queries. Using this agent, employees can more easily discover and plan around events that interest them, adding value to the in-person experience and incentivizing community.
  • A dining agent addresses the challenges of multiple on-campus restaurants featuring menu options that shift daily. Employees can use natural language queries like “Where can I get teriyaki today?” The agent does the rest. This kind of agent can be especially helpful for employees with allergies or dietary restrictions, providing a boost to accessibility for the on-campus dining experience.

A photo of Wu.

“AI is rapidly becoming a standard part of how we build and operate. As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale.”

Qingsu Wu, principal group product manager, Microsoft Digital

Our policies and practices have embedded a culture of responsibility and trust into our internal AI development processes. With that trust comes the confidence to experiment.

“AI is rapidly becoming a standard part of how we build and operate,” says Qingsu Wu, principal group product manager in Microsoft Digital. “As adoption accelerates, Responsible AI becomes imperative and enables teams to innovate at speed while maintaining safety and accountability at scale. By embedding Responsible AI into our engineering practices, teams have the clarity and confidence they need to manage risk proactively and deliver value without compromising safety or trust.”

Far from thinking of responsible AI assessments as an administrative or policy burden that creates additional work, teams now recognize their benefits. They look at the process as an extra set of eyes from a trusted partner. By minimizing legal and compliance risks through our Responsible AI Council’s expertise, our teams save time and stress, and we avoid problems like delayed releases or rollbacks.

A photo of Smith.

“What we’re doing is entirely novel in the tech world. Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

Jamian Smith, principal product manager and co-lead, Microsoft Digital Responsible AI team, Microsoft Digital

Lessons learned: Embedding responsible AI into our development efforts

Throughout this process, we’ve learned lessons that will be helpful for other organizations just beginning their AI journeys:

  • We empowered early adopters and enthusiasts as responsible AI champions. They act as anchors and resources for developers who use AI, so we made sure they had the knowledge and training they needed to unlock downstream value.
  • Culture has been crucial to our success, especially our growth mindset and our focus on trust. Emphasizing these aspects of our company culture helped us embed responsible AI into core SDL processes and make it second nature for our engineering teams.
  • Processes are one thing, and tooling is another. Simply building a review portal won’t help if the underlying assessment workflow isn’t attuned to your needs. First, we thought about the process we needed to put in place to solidify responsible AI practices and support our teams’ work. Then we built a tool that supports those workflows as easily and seamlessly as possible.
  • Accuracy is reliant on data, and data has a tendency to reflect the biases of the humans who organize it. It’s necessary to correct bias actively through introspection and testing.

“What we’re doing is entirely novel in the tech world,” says Jamian Smith, principal product manager and co-lead for Microsoft Digital’s Responsible AI team. “Microsoft is really the lead learner here, and we have a passion for corporate citizenship that we’re embedding in our tools.”

As your organization begins to experiment with its own AI projects, take these concrete steps to infuse responsibility into the solutions you create:

  1. Establish a strong foundation based on core principles and standards that align with your organizational culture. The Microsoft Responsible AI Standard is a great place to start because it reflects our experience and the expertise we’ve built as AI technology leaders and providers.
  2. Seek out the activators across your organization: people with a passion for AI, security, transparency, and other challenge areas, along with a willingness to learn and the ability to lead. Think about how to place them in both centralized and distributed positions.
  3. Stay current with the rapidly evolving regulatory climate around AI; a broad understanding of compliance and its ongoing developments is crucial. Involve dedicated regulatory, compliance, and legal professionals in researching and monitoring global standards, and communicate that information to your organization, particularly through training and updates that help teams incorporate new regulations into their core processes.
  4. Create a process for responsible AI assessment. Consider ways to break it into stages that propel projects forward rather than hindering them. Enlist the right people to assess projects, and consider tooling that streamlines actions for both creators and assessors. Our AI Impact Assessment Guide can help you get started.
  5. Benefit from pioneers in the space, including our experts at Microsoft. Our journey has produced ready-to-use resources that can accelerate your progress. Examples include our Responsible AI Toolbox for GitHub, hands-on tools for building effective human-AI experiences, and our AI Impact Assessment Template.

“It’s not about how fast you can move, but how prepared you are. Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

Naval Tripathi, principal engineering manager and co-lead, Microsoft Digital Responsible AI Team

Building your capacity to create AI tools responsibly won’t happen without careful planning and strategy. As part of that process, embed responsible AI into your development workflows by emulating the practices we’ve pioneered at Microsoft.

“It’s not about how fast you can move, but how prepared you are,” Tripathi says. “Responsible AI processes might seem like speed bumps, but ultimately they’re accelerators.”

By prioritizing responsible AI, businesses of all kinds, all over the world, can ensure that the AI revolution is a truly human movement.

Key takeaways

These insights can help you as you begin your own journey through responsible AI:

  • Realize that this isn’t just a technical transition. It’s also a gradual evolution and an ongoing journey.
  • Work with people across your organization to establish goals and standards, because different disciplines bring different expertise and insights to the table. This will also align your responsible AI standards with your organizational values.
  • Start with the basics and build from there. Establish principles, create processes, and construct tooling around those structures.
  • A wide array of tooling is readily available in the world of AI. Seek out providers that model responsible values.
  • Lean on your existing experts across privacy, security, accountability, and compliance. Their skills will be crucial in this new technological landscape.
  • Conducting your own responsible AI groundwork is crucial, but you can also partner with Microsoft. We run on trust, and we’ve thought about these issues to pave the way for your success. Follow our lead, consider the best ways to adapt our lessons to your organization, and come to us with questions.

The post Responsible AI: Why it matters and how we’re infusing it into our internal AI projects at Microsoft appeared first on Inside Track Blog.

]]>
19289
Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI http://approjects.co.za/?big=insidetrack/blog/accelerating-transformation-how-were-reshaping-microsoft-with-continuous-improvement-and-ai/ Thu, 26 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20297 Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers. Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, […]

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
Technology companies are really people companies. In an age of rapidly advancing AI, losing sight of this reality leads to an overemphasis on new tools while neglecting opportunities for the transformational change that AI offers.

Moving forward, the winners will be the companies that prioritize technological and operational excellence. Microsoft Digital, our company’s IT organization, is seizing this moment by reinventing processes for agentic workflows powered by continuous improvement (CI).

We believe that AI-powered agents, Microsoft 365 Copilot, and human ambition are the key ingredients for unlocking opportunity across every industry.

A photo of Laves.

“Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

David Laves, director of business programs, Microsoft Digital

By combining our AI capabilities with continuous improvement, we’re executing initiatives that increase our productivity and improve our performance. We’re forging a new path for how companies operate in the era of AI.

Welcome to the age of AI-empowered continuous improvement.

Our vision for continuous improvement, turbo-charged by AI

At Microsoft Digital, we’re embracing continuous improvement to unlock greater operational excellence and better employee experiences.

“One of the main tenets of our culture at Microsoft is a growth mindset, and that involves experimentation and curiosity,” says David Laves, director of business programs within Microsoft Digital. “Continuous improvement is a natural, formal extension of our culture that applies rigor, structure, and methodology to enacting a growth mindset through understanding waste and opportunities for optimization.”

Our capacity to drive process improvements has been crucial to our AI transformation as a company. We’ve adopted a “CI before AI” approach to ensure that we don’t end up automating inefficient processes. By engaging in activities that focus on continuous improvement, our teams can better identify which problems to address with AI and prioritize meeting customer needs.

“Continuous improvement is really about understanding your business, its needs, and where you can find value,” says Matt Hansen, a director of continuous improvement at Microsoft. “It gives us the language to scale our efforts out across everything we do.”

This process isn’t just another way to enable AI. In fact, AI is essential to enabling continuous improvement itself.

A photo of Campbell.

“When leaders stay actively engaged and partner through these Centers of Excellence, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

Don Campbell, senior director, Microsoft Digital

Operationalizing continuous improvement and AI

Operationalizing continuous improvement and AI enablement is a leadership imperative at Microsoft, and one that doesn’t just happen organically. As an organization, we are deliberate about turning business strategy into measurable outcomes through clear sponsorship, disciplined prioritization, the right resourcing, and sustained investment in change management and employee skilling.

“The difference between strategy and real business impact is execution,” says Don Campbell, a senior director in Microsoft Digital. “That execution requires strong leadership sponsorship and clearly designed continuous improvement efforts and AI Centers of Excellence (CoEs), which translate business strategy into operational reality. When leaders stay actively engaged and partner through these CoEs, we can create alignment, accelerate decisions, and ensure both CI and AI help to deliver measurable business outcomes.”

To support leadership’s vision, we’ve put organizational resources in place to manage our continuous improvement investments, guide practices, and support teams. There’s an overarching continuous improvement CoE within Microsoft Digital, which works in close partnership with the AI CoEs, forming an integrated model which connects enterprise priorities with frontline execution.

Together, these CoEs establish shared standards, provide clarity on where to invest, and help us move faster with confidence, turning ambition into sustained business impact.

A photo of West.

“Continuous improvement is about process, but it’s also about people.”

Becky West, lead, Continuous Improvement Center of Excellence, Microsoft Digital

Continuous improvement and people

As we build out the organizational structures that underpin our investment in continuous improvement, we’re approaching the people side of change with intention.

Currently, we’re undertaking skilling efforts and showing every employee how their role connects to core continuous improvement tools, including bowler cards, Gemba walks, Kaizen events, and monthly business reviews. We’re also demonstrating how “CI + AI” is a powerful combination.

The roadmap is there, the structure is in place, and we’re already seeing progress.

“Continuous improvement is about process, but it’s also about people,” says Becky West, lead for the Continuous Improvement CoE within Microsoft Digital. “A guiding hand like the Continuous Improvement CoE is how you make sure those two components align.”

Three Microsoft Digital continuous improvement initiatives

As we navigate the early days of the company’s continuous improvement journey, Microsoft Digital is becoming a proving ground for the larger CI framework we want to deploy across the company. Our teams are spearheading projects to bring this framework to diverse functions like asset management, incident response (with a designated responsible individual), and third-party software licensing.

Enterprise IT asset management

Microsoft Digital’s Enterprise IT Asset Management team oversees the 1.6 million devices that power the company, from servers and IoT devices to labs, networks, and 800,000 employee endpoints. Safeguarding this vast landscape is critical to enterprise cybersecurity.

Three security pillars form the foundation of our security efforts: protect, detect, and respond. All of these depend on a complete, accurate device inventory.

Unified visibility enables proactive protection through enforced security controls, improves detection by spotting anomalies and misconfigurations, and accelerates responses by reducing investigation and remediation time. Without this foundation, security teams lack the precision to execute effectively.

To reach the goal of a unified inventory, the team initiated a continuous improvement initiative to build a consolidated source of truth for Microsoft Digital IT assets. Grounded in the principle of “progress over perfection,” the team initially narrowed its focus to Microsoft Lab Services (MLS) and IoT devices, with a vision to eventually expand to networks, employee devices, conference rooms, and printers. The ultimate goal is to move toward a truly comprehensive inventory.

This foundation will not only enhance security but also deliver enterprise-wide value through consistent policy enforcement, more resilient infrastructure, and comprehensive lifecycle management. By applying continuous improvement processes to help prioritize high-impact opportunities and using AI to accelerate outcomes, the program is enhancing Microsoft’s operational excellence and security posture.

“It’s better to do step A than wait until you’re ready to do steps A, B, C, and D,” says Aniruddha Das, a principal PM in Microsoft Digital.

As the team progressed from Gemba walks to Kaizen events under the guidance of the Continuous Improvement CoE, they dug deeper into areas of waste. Then they identified potential actions, breaking them down into “value-add,” “non-value-add-but-essential,” and “non-value-add.”

A photo of Ashwin Kaul.

“For every action item, we were always asking ourselves how we could make these things better through AI. We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Ashwin Kaul, senior product manager, Microsoft Digital

This exercise helped them prioritize their activities and land on a starting point: A device security index that would provide an overview of our hardware environment’s security posture. Essentially, it would represent a list of device security statuses.
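
As a sketch of what such an index might look like, here’s a minimal example that scores each device against a handful of security checks and averages the results across the fleet. The check names and scoring scheme are illustrative assumptions, not the team’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical security checks; real signals would come from
# endpoint-management and monitoring systems.
CHECKS = ("registered_in_inventory", "os_patched", "monitoring_enabled")

@dataclass
class Device:
    device_id: str
    registered_in_inventory: bool
    os_patched: bool
    monitoring_enabled: bool

    def status(self) -> float:
        """Fraction of security checks this device passes (0.0-1.0)."""
        return sum(getattr(self, check) for check in CHECKS) / len(CHECKS)

def security_index(devices: list[Device]) -> float:
    """Fleet-wide index: the average per-device status."""
    if not devices:
        return 1.0
    return sum(d.status() for d in devices) / len(devices)

fleet = [
    Device("iot-001", True, True, True),
    Device("lab-042", True, False, True),
]
print(round(security_index(fleet), 2))  # → 0.83
```

A real index would weight checks by severity and pull its signals from live telemetry, but the shape is the same: per-device statuses rolled up into a single fleet-level number.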

The team identified distinct improvement areas for IoT and MLS devices. For IoT devices, they needed to build the inventory from the ground up. MLS already had a fairly complete inventory of devices, so the team set a goal to improve data quality. Although each of these challenges is different, they’re excellent opportunities for AI-empowered continuous improvement.

Now that the project is underway, the team plans to use an AI agent to automate device registration for IoT devices, which currently relies on manually uploaded spreadsheets. It’s a prime example of how streamlining a process with continuous improvement enables AI to automate and accelerate our work.

On the MLS side, the team is creating an AI-driven normalization tool to automate the de-duplication and correction of inaccuracies in device data. The goal is to get from less than 50% data quality to 100%, dramatically improving our security posture through greater accuracy.
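
To illustrate the kind of work such a normalization tool automates, here’s a minimal sketch that canonicalizes device records and keeps one entry per serial number. The field names and the “most recently seen wins” rule are illustrative assumptions, not the team’s actual logic:

```python
def normalize(record: dict) -> dict:
    """Canonicalize the fields used to identify a device."""
    return {
        "serial": record["serial"].strip().upper(),
        "hostname": record["hostname"].strip().lower(),
        "last_seen": record["last_seen"],
    }

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep one record per serial number, preferring the most recently
    seen entry (ISO date strings compare correctly as text)."""
    best: dict[str, dict] = {}
    for rec in map(normalize, records):
        current = best.get(rec["serial"])
        if current is None or rec["last_seen"] > current["last_seen"]:
            best[rec["serial"]] = rec
    return list(best.values())

raw = [
    {"serial": " abc123 ", "hostname": "Lab-PC-01", "last_seen": "2025-01-10"},
    {"serial": "ABC123", "hostname": "lab-pc-01", "last_seen": "2025-03-02"},
    {"serial": "XYZ789", "hostname": "lab-pc-02", "last_seen": "2025-02-15"},
]
clean = deduplicate(raw)
print(len(clean))  # → 2
```

The AI-driven version goes further, using a model to resolve ambiguous matches that simple rules can’t, but rule-based canonicalization like this is the foundation it builds on.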

“For every action item, we’re always asking ourselves how we can make these things better through AI,” says Ashwin Kaul, a senior product manager within Microsoft Digital. “We’re looking for ways to expedite our core outcomes with minimal human involvement.”

Continuously improving the designated responsible individual experience

On the Digital Workspace team, designated responsible individuals (DRIs) are in charge of maintaining the health of our production systems. When technical emergencies arise, they’re the rapid-response point people who take the lead.

A photo of Ajeya Kumar.

“We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Ajeya Kumar, principal software engineer, Microsoft Digital

That process itself can be incredibly stressful, and time is of the essence. When every moment counts, efficiency is key. Meanwhile, a big part of a DRI’s work is just finding out what’s gone wrong so they can fix the incident.

But their job isn’t just about crisis management. When there are no active incidents, they work on engineering enhancements to improve the efficiency of production systems and clear backlog projects.

There’s also a handover process that takes place when one DRI finishes their rotation and another goes on-call. That involves a report about any incidents that have occurred, active issues, actions taken, key metrics, and other important information.

With these two priorities in mind, our Digital Workspace team initiated a continuous improvement process review. Their Gemba walk provided a crucial starting point.

“The planning stage is all about figuring out what the process is, what it should be, and what we can do to improve it,” says Ajeya Kumar, a principal software engineer on the Digital Workspace team within Microsoft Digital. “We asked ourselves, ‘How can AI elevate the designated responsible individual (DRI) experience to the next level?’”

Collectively, the team decided to tackle these challenges with a multifunctional AI agent they call the Smart DRI Agent. This agent’s primary role would be synthesizing and presenting information to its human counterparts to help them save time in context-heavy situations.

The AI elements that the team has planned can be broken out into the following capabilities:

  • Text summarization: Going through logs and identifying key insights.
  • Data correlation: Tracking and collating error logs.
  • Automation: Updating the status of issues, keeping abreast of communications, and providing point-in-time, daily, and weekly summaries of system health.
  • Identifying patterns: Building troubleshooting guides based on frequency patterns.
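
The pattern-identification capability can be sketched with a simple frequency analysis: collapse the variable parts of error messages into signatures, then count them to surface candidates for a troubleshooting guide. This is an illustrative stdlib sketch, not the agent’s actual implementation:

```python
from collections import Counter
import re

def error_signature(line: str) -> str:
    """Collapse variable parts (numbers, hex IDs) so similar errors group together."""
    return re.sub(r"\b(0x[0-9a-f]+|\d+)\b", "<n>", line.lower())

def top_patterns(log_lines: list[str], k: int = 3) -> list[tuple[str, int]]:
    """The k most frequent error signatures -- candidates for a troubleshooting guide."""
    counts = Counter(error_signature(l) for l in log_lines if "error" in l.lower())
    return counts.most_common(k)

logs = [
    "ERROR timeout connecting to node 12",
    "ERROR timeout connecting to node 47",
    "INFO heartbeat ok",
    "ERROR disk full on volume 0x1f",
]
print(top_patterns(logs))  # timeout signature first, with a count of 2
```

In practice, an agent layers language-model summarization on top of counts like these, but frequency patterns are what tell it which recurring failures deserve a guide.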

The Smart DRI Agent is already in its pilot phase and producing results. It conducts four main activities:

  • AI-generated summaries of DRI actions.
  • Proactive notifications with AI-generated insights.
  • Chat support to assist with all kinds of DRI queries.
  • AI-generated handover reports.

“The continuous improvement framework that enables these pieces is the key to unlocking value,” says Aizaz Mohammad, principal software engineering manager on the Digital Workspace team. “It may seem process-heavy, but once you work through it, you’ll see the value.”

That value is apparent in their results.

In the first 30 days of the Smart DRI Agent’s pilot, there were 301 incidents, and the agent provided insights on 101 of them. That led to approximately 100 hours of time savings for DRIs and a 40% improvement in our key network performance metric.

Third-party software license audits

Within Microsoft Digital, the Tenant Integration and Management team is responsible for a range of services, including third-party software licensing. This space is all about managing liability from both a security operations and an auditing perspective.

A photo of Hovhannisyan.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need. The goal for this project is to reduce that time to increase operational efficiencies.”

Anahit Hovhannisyan, principal group product manager, Microsoft Digital

Without the proper security insights, the company could find itself with risks associated with third-party software vulnerabilities. And without thorough auditing, we might experience license overuse and contractual issues that can lead to waste or expensive license reconciliations.

“It takes a tremendous amount of data and traversals through multiple sources to get us to the actionable data we need,” says Anahit Hovhannisyan, a principal group product manager within Microsoft Digital. “The goal for this project is to reduce that time to increase operational efficiencies.”

A photo of Kathren Korsky.

“It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Kathren Korsky, team lead, Software Licensing, Microsoft Digital

The team decided to target the auditing process first. Currently, the software licensing team performs audits manually by looking at entitlements, contracts, purchase orders, and more while liaising with suppliers and our Compliance and Legal teams. That’s incredibly time-consuming.

During the software licensing team’s planning phase, they developed an ambitious goal of reducing the time to insights on third-party software license data from 154 days down to 15 minutes. During their continuous improvement Kaizen event, the team uncovered opportunities for AI-powered process improvements that eliminate waste.
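
To make the audit concrete, here’s a minimal sketch of the entitlement-versus-deployment comparison at the heart of a license audit. The product names and numbers are invented for illustration, and the real process spans many more data sources:

```python
def reconcile(entitlements: dict[str, int], installs: dict[str, int]) -> dict[str, dict]:
    """Compare purchased seats with observed installs per product.

    Flags overuse (compliance risk) and unused seats (waste)."""
    report = {}
    for product in entitlements.keys() | installs.keys():
        owned = entitlements.get(product, 0)
        used = installs.get(product, 0)
        report[product] = {
            "owned": owned,
            "used": used,
            "overuse": max(0, used - owned),
            "unused": max(0, owned - used),
        }
    return report

report = reconcile(
    entitlements={"diagram-tool": 500, "pdf-editor": 200},
    installs={"diagram-tool": 560, "pdf-editor": 140},
)
print(report["diagram-tool"]["overuse"])  # → 60
```

The hard part isn’t this comparison; it’s assembling trustworthy `entitlements` and `installs` data from contracts, purchase orders, and telemetry, which is exactly where the team’s AI and data platform comes in.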

“It required a lot of courage as we were identifying waste,” says Kathren Korsky, Software Licensing team lead within Microsoft Digital. “People are very invested. It’s tough to be honest about what isn’t working, because it ties into people’s personal value and worth, but it’s essential to the process.”

Now, they’re building and implementing solutions, including an AI and data platform that provides business intelligence with custom reporting abilities, an AI agent that provides audit support and ticket creation, and another that automatically generates audit reports. The team has been using Azure AI Foundry and Azure AI services to create their agents because these tools have the flexibility to switch between different models and fine-tune their parameters.

As these agents emerge, they’ll take the most tedious and error-prone aspects of the process out of human auditors’ hands, freeing them up to focus on solving problems, not endlessly searching for them.

Realizing continuous improvement at scale

These are just a small selection of the many continuous improvement initiatives underway within Microsoft Digital and the company as a whole.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals.”

Kirkland Barrett, senior principal PM manager, Microsoft Digital

At Microsoft, most of our continuous improvement initiatives are in their initial stages. As they progress through the measurement and adjustment phases, two benefits will emerge.

First, we’ll iterate and improve the value that each individual initiative provides. Second, we’ll continue to build our discipline and cultural maturity around a growth mindset we’re operationalizing through continuous improvement.

“What continuous improvement gives us is the macro vision and the micro actions we can do to accomplish our goals,” says Kirkland Barrett, senior principal PM manager for Employee Experience in Microsoft Digital. “It’s about knowing our objectives, identifying upstream root causes, and rippling them throughout a mechanism of progress.”

Key takeaways

These tips for implementing a continuous improvement framework come from our own experiences at Microsoft Digital:

  • Be inclusive: Have the right subject matter experts at the table from the start. Sponsors need to be present as well.
  • Cultivate maturity and transparency: Objective analysis about how things are going requires honesty.
  • Sponsorship matters: Make sure you have sponsorship at the highest levels. This is a cultural change, and leadership is the core of culture.
  • No half-measures: If you’re going to identify opportunities for continuous improvement, commit to having budget and resources in place.
  • Process, then technology: Focus on what you need to simplify processes first, then apply AI. This will keep you from automating waste and inefficiency into your operations.

The post Accelerating transformation: How we’re reshaping Microsoft with continuous improvement and AI appeared first on Inside Track Blog.

]]>
Mapping the Microsoft approach to accessibility in the world of AI http://approjects.co.za/?big=insidetrack/blog/mapping-the-microsoft-approach-to-accessibility-in-the-world-of-ai/ Thu, 19 Mar 2026 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22756 More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age. As AI transforms how we build and experience technology, accessibility has to be built in from the start. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are […]

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
More than 1 billion people worldwide have a disability, and 83 percent of people will experience a disability during their working age.

As AI transforms how we build and experience technology, accessibility has to be built in from the start.

Designing with and for people with disabilities isn’t optional—it’s fundamental to building technology that works for everyone and to building trust at scale. And yet today, about 96% of websites are still inaccessible.

At Microsoft, we’re committed to creating accessible products and services—designed with and for the disability community—that benefit everyone.

Our “shift left” approach to software production—which involves moving quality-assurance, testing, and accessibility checks to earlier in the development lifecycle—means that implementing assistive features and tools is a high priority for Microsoft, rather than a late-stage addition.

And with the rise in importance of AI tools and products, paying close attention to accessibility standards and building these key capabilities into game-changing tech like Microsoft 365 Copilot is a crucial part of our mission here in Microsoft Digital, the company’s IT organization.

A photo of Allen.

“After my accident, I became immediately reliant on accessible technology. Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me.”

Laurie Allen, accessibility technology evangelist, Microsoft

Evangelizing for accessibility

Laurie Allen is one person who knows first-hand the importance of accessibility in enterprise software. A little more than a decade ago, she experienced a spinal cord injury and became a quadriplegic.

Today, Allen works as an accessibility technology evangelist at Microsoft. Every day, she relies on assistive digital technologies to help her be successful in her role—which involves ensuring that our software products are accessible to everyone.

“After my accident, I became immediately reliant on accessible technology,” Allen says. “Because I worked in tech, I could leverage accessibility features and assistive technologies to continue doing my job. It was literally a lifeline for me during that transitionary phase, because my job was the one thing about my life that didn’t dramatically change as a result of the accident.”

The following graphic shows how widespread disability is around the globe: 

Shifting left for inclusivity

At Microsoft, our accessibility strategy includes such disability categories as mobility, vision, hearing, cognition, and learning—because accessibility empowers everyone.

A photo of Garg.

“We view accessibility as a quality of our software, not simply a feature. Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Ankur Garg, accessibility program manager, Microsoft Digital

We begin with the concept of “shift left,” which in this context means incorporating accessibility principles from the project’s outset, instead of waiting until a product is already built.

This strategy mirrors our approach in other key trust domains, such as security and privacy.

“We view accessibility as a quality of our software, not simply a feature,” says Ankur Garg, an accessibility program manager in Microsoft Digital. “Like with security and privacy, we prioritize accessibility to ensure that people can effectively perceive and operate our products and services, delivering an inclusive experience for everyone.”

Here in Microsoft Digital, that manifests as treating accessibility as a core requirement validated through rigorous internal testing of AI agents and embedding standards and inclusive design early in every tool’s development life cycle. We also use internal AI tools to streamline guidance and testing before expanding those practices across the company.  

Accessibility challenges in the age of AI

Technology is moving fast, especially with the advent of AI-powered tools. It’s easier than ever for companies and individuals to quickly generate and publish an app, website, or other digital product.

That means it’s also easier than ever before to create inaccessible software. It’s important to remember that much of the data that generative AI models have been trained on includes websites and apps that were built without considering accessibility guidelines.

A photo of Hirt.

“We want people with disabilities to be represented and see themselves in the technology we’re producing. We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

Alli Hirt, director of accessibility engineering, Microsoft

As a result, we’ve found that many AI code-generation tools and models produce code that, by default, fails to meet Microsoft’s high standards for accessibility.

“We want people with disabilities to be represented and see themselves in the technology we’re producing,” says Alli Hirt, a director of accessibility engineering at Microsoft. “We work with our AI models to make sure they have disability data in their training sets, so that the final product will reflect these values.”

When we’re developing AI-driven products like Microsoft 365 Copilot, the tool must have comprehensive knowledge of different disabilities and be able to give appropriate, contextual help.

“Let’s say I tell Copilot, ‘I have a mobility disability; what software tools can I use?’” Allen says. “Copilot must recognize what a mobility disability is and identify which tools will support me. That’s the data representation we need in our AI models.”

Allen noted that sensitivity and bias are also big factors when creating these kinds of tools.

“Copilot should not respond with, ‘I’m sorry you have a disability,’” she says. “That’s the type of bias we’re working to train out of the models.”

Accessibility as a core commitment

When Satya Nadella became Microsoft CEO in 2014, he redirected the core mission of the company. The new vision was simple: To empower every person and every organization on the planet to achieve more. And accessibility is a core part of that mission.

“At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Laurie Allen, accessibility technology evangelist, Microsoft

Meeting global accessibility standards is our starting point. Beyond that, the hub-and-spoke business model of the Accessibility Team helps ensure that accessibility is everyone’s responsibility.

The Microsoft Corporate, External, and Legal Affairs (CELA) group oversees accessibility across the company, helping products align with internationally recognized accessibility standards, such as Web Content Accessibility Guidelines (WCAG) and EN 301 549. These standards ensure that digital content, websites, and apps produced today are designed with accessibility in mind.

Understanding how products and services align to key accessibility standards and requirements is an important step in providing inclusive and accessible experiences.

“An organization’s accessibility program succeeds when it’s a priority at every level of the organization, starting with senior leadership,” Allen says. “At Microsoft, accessibility is in our DNA. It’s who we are as a company.”

Presenting content in a multimodal way

Here in Microsoft Digital, we embrace software products that provide our employees with a multimodal approach to presenting content. This means using more than one sense at the same time, like seeing, listening, reading, and speaking. This makes our products accessible to a diverse array of users, including people who learn and work in different ways. It lets our employees customize the way that works best for them.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed that I could never follow—showed me exactly why accessibility is needed. It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Eman Shaheen, principal PM lead, Microsoft Digital

For example, someone may not have a diagnosed disability, but they might be a better auditory learner than a visual learner.

This reflects what Eman Shaheen, a principal PM lead in Microsoft Digital, learned from a team member when observing how he used assistive technologies.

“Seeing a visually impaired colleague demonstrate how he works—listening to a wiki being read at a speed I couldn’t even follow—showed exactly why accessibility is needed,” Shaheen says. “It’s not just about being inclusive or compassionate; it’s a requirement for people to do their jobs.”

Here are some examples of multimodal accessibility capabilities offered by Microsoft 365 Copilot that are designed to support diverse user requirements:

Vision

  • Works with screen readers
  • Generates alt text for images
  • Suggests accessible layouts, textual contrast, and consistent structure in documents and slides

Hearing

  • Provides real-time meeting Q&A
  • Produces meeting recaps across multiple languages
  • Summarizes lengthy or fast-moving chats to aid comprehension

Cognitive and neurodivergent (ADHD, dyslexia, autism, executive function)

  • Simplifies complex language
  • Supplies task breakdowns and next-steps guidance
  • Offers tone assistance to help with understanding communication nuances

Mobility

  • Provides voice-driven productivity tools, such as speech-to-text creation
  • Reduces fine‑motor effort by automating lists, tables, and drafts
  • Supports meeting recordings to help compile notes and action items

Speech and communication

  • Drafts and rewrites content for users needing expressive support
  • Refines tone for clarity and empathy in written communication

Learning

  • Summarizes long content to reduce reading burden
  • Organizes notes into structured content

Mental health and fatigue

  • Assists with communication when cognitive energy is low
  • Provides adaptive communication assistance to help users express themselves confidently
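
The contrast suggestions under Vision above follow WCAG’s defined contrast-ratio formula, which is straightforward to compute directly. This sketch implements the WCAG 2.x relative-luminance and contrast-ratio math; WCAG AA requires at least 4.5:1 for normal body text:

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG formula."""
    cs = c / 255
    return cs / 12.92 if cs <= 0.03928 else ((cs + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """(L1 + 0.05) / (L2 + 0.05), with the lighter luminance on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: the maximum possible ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

A check like this can run in an automated pipeline against a product’s color tokens, which is one way contrast suggestions can be surfaced programmatically rather than caught in late-stage review.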

How we demonstrate our accessibility vision

Here at Microsoft, we’ve developed a strategic partnership with ServiceNow over the last five years. The two companies work together to accelerate digital transformation for our enterprise and government customers.

Through this partnership, we use the ServiceNow platform for internal helpdesk and ServiceDesk process automation, IT asset management, and integrated risk management.

A photo of Mazhar.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt. That’s when they began fixing accessibility issues proactively, which changed everything.”

Sherif Mazhar, principal product manager, Microsoft Digital

As part of this process, we uncovered 1,800 accessibility bugs (including 1,200 that were rated as high severity) in the platform—in our first assessment. By contrast, our most recent review found just 24 accessibility-related issues.

“The biggest shift happened once ServiceNow started feeling the same operational pain we felt,” says Sherif Mazhar, a principal product manager in Microsoft Digital, who oversees the company’s relationship with ServiceNow. “That’s when they began fixing accessibility issues proactively, which changed everything.”

The next major step for us is ensuring that our ServiceNow platform updates align with WCAG 2.2 accessibility standards, which will require reworking older versions of our products. However, doing this work helps us maintain momentum toward a world of more inclusive enterprise software in all lines of business and for all Microsoft customers.

What’s next in accessibility

Digital accessibility work is never done.

As new software and hardware are introduced, user needs and accessibility standards change and grow. At Microsoft, we are committed to making accessibility easier for everyone.

“Right now, we’re making sure every AI agent across Microsoft is tested with assistive technologies—like screen readers and keyboard navigation—to guarantee that the outputs are accessible and compliant,” Garg says.

This “shift left” mentality at Microsoft is ultimately about putting people first. It means that no one should have to wait for a late fix to be able to do their work, or simply to belong.

By embedding accessibility standards into product planning, instead of tacking it on as an afterthought just before (or even after) product launch, we’re helping ensure that these digital experiences will include everyone from day one.

“We may compete on products, especially in AI, but accessibility is a shared mission,” Allen says. “When the industry collaborates on inclusive technology, everyone wins.”

Key takeaways

Here are some tips to keep in mind as you consider your own accessibility strategy in a world of increasingly AI-driven technology:

  • Start with leadership. Championing accessibility from the C-suite signals that this is a top organizational priority.
  • Raise awareness with training. Set up employee learning opportunities regarding accessibility in AI tools and encourage everyone to take part.
  • Design with inclusivity in mind from day one (“shift left”). Incorporate accessibility from the beginning of the software creation process to make sure it isn’t lost in the shuffle of trying to ship a product on time.
  • Think inclusively. Run usability tests with people with lived experience.
  • Treat accessibility as an ongoing practice. Digital accessibility work is never finished; document strategies and share your team’s learnings to keep improving iteratively as an organization.

The post Mapping the Microsoft approach to accessibility in the world of AI appeared first on Inside Track Blog.

]]>
Powering the new age of AI-led engineering in IT at Microsoft http://approjects.co.za/?big=insidetrack/blog/powering-the-new-age-of-ai-led-engineering-in-it-at-microsoft/ Thu, 05 Mar 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22539 When generative AI burst into the mainstream, it landed in our IT engineering organization like a shockwave. There was excitement, curiosity, skepticism, and no shortage of questions about what this technology meant for the future of IT. At Microsoft Digital—the company’s IT organization—we didn’t start with a grand transformation plan. Instead, we started with a […]

The post Powering the new age of AI-led engineering in IT at Microsoft appeared first on Inside Track Blog.

]]>
When generative AI burst into the mainstream, it landed in our IT engineering organization like a shockwave.

There was excitement, curiosity, skepticism, and no shortage of questions about what this technology meant for the future of IT.

At Microsoft Digital—the company’s IT organization—we didn’t start with a grand transformation plan. Instead, we started with a realization: AI wasn’t just another tool to roll out. It was a fundamental shift in how engineering work could happen.

For years, our IT teams have been focused on scale, reliability, and operational excellence. Those priorities didn’t change. What changed were the possibilities.

Suddenly, engineers could draft code in seconds, summarize complex systems instantly, or automate work that had once consumed hours or days. It was an opportunity to take the skills and capabilities of our people and amplify them with AI.

That realization forced us to step back and ask harder questions.

How do you help thousands of engineers understand what AI can actually do to impact their day-to-day work? How do you move from experimentation to trust? And how do you adopt AI in a way that strengthens engineering fundamentals instead of eroding them?

The answer came in the form of a phased journey grounded in people, culture, and continuous learning.

Phase 1: Awareness and access

It might sound surprising when speaking about engineering processes, but our first challenge wasn’t technology; it was understanding.

When generative AI entered the conversation, most engineers saw the headlines and dabbled in various tools, but few understood fully what it meant for their work. Some were excited, others were wary. Many simply didn’t know where to start. That gap between awareness and practical value was the first barrier we had to address.

We realized early that top-down mandates wouldn’t work. Telling engineers to “use AI” without context or relevance would only deepen skepticism. Instead, we focused on something both simpler and more difficult: Exposure.

We started by making AI visible and accessible in the tools engineers already used. GitHub Copilot. Microsoft 365 Copilot. Early copilots embedded directly into engineering workflows. The goal wasn’t immediate productivity gains. It was familiarity. Letting engineers see, firsthand, what AI could and couldn’t do.

A photo of Singhal.

“We encouraged tool usage and adoption so people would at least play around with AI. And once they did, they started seeing the value. That’s when the mindset shifted from ‘AI might replace me’ to ‘AI can be my companion.’”

Mukul Singhal, partner group engineering manager, Microsoft Digital

Just as important, we talked openly about limitations.

AI wasn’t perfect. It hallucinated. It made confident mistakes. And that honesty mattered. By framing AI as an assistant, we reinforced the role of engineering judgment. Engineers didn’t need to fear losing control. They needed to understand how to stay in control.

We also made experimentation safe.

No quotas. No forced adoption metrics. Engineers were encouraged to try AI on low‑risk tasks: summarizing documentation, generating test cases, or exploring unfamiliar codebases. Small wins built confidence, confidence built curiosity, and curiosity drove organic adoption.

As that experimentation took hold, the mindset began to shift.

“We encouraged tool usage and adoption so people would at least play around with AI,” says Mukul Singhal, a partner group engineering manager in Microsoft Digital. “And once they did, they started seeing the value. That’s when the mindset shifted from ‘AI might replace me’ to ‘AI can be my companion.’”

Over time, conversations changed from ‘Should we use AI?’ to ‘Where does AI help most?’

Engineers began sharing prompts, tips, and lessons learned with one another. What started as individual exploration turned into community learning. Awareness gave way to momentum.

Phase one was about providing access to explore, to question, and to learn. And that foundation made everything that followed possible.

Phase 2: Culture shift

Access created awareness and awareness created curiosity.

As more engineers began experimenting with AI, we noticed a pattern. Some teams were moving faster, learning faster, and reducing friction in their day‑to‑day work. Others stalled after initial trials. The difference wasn’t technical skill or capability; it was mindset.

A photo of Mamilla.

“People started shifting from the mindset of ‘Will AI work?’ to ‘AI is working for me.’ I think that was a very transformational shift, to where I believe a lot of engineers in the organization started believing in AI.”

Veera Mamilla, principal group engineering manager, Microsoft Digital

To move forward, we had to shift how AI was perceived from something optional or experimental to something that was simply part of how modern engineering gets done.

That meant normalizing AI as a trusted partner in the engineering process.

Leaders played a critical role in that shift. Rather than positioning AI as a productivity shortcut, they framed it as a way to strengthen engineering fundamentals: clearer design discussions, better documentation, faster feedback loops, and more time for deep problem‑solving. The message was intentional and consistent. Using AI wasn’t about cutting corners; it was about reimagining how work gets done.

We also had to address a fear that surfaced early: that AI adoption was a signal of replacement rather than empowerment.

“People started shifting from the mindset of ‘Will AI work?’ to ‘AI is working for me,’” says Veera Mamilla, a principal group engineering manager in Microsoft Digital. “I think that was a very transformational shift, to where I believe a lot of engineers in the organization started believing in AI.”

That framing mattered.

As engineers incorporated AI into their workflows, success stopped being measured by output alone. The focus shifted to outcomes. Did AI help you understand a system faster? Did it surface risks earlier? Did it free up time to focus on higher‑value work?

Over time, AI stopped feeling like a novelty. It became part of the engineering fabric. We reinforced it through leadership modeling, peer learning, and shared success stories. Teams no longer asked whether AI belonged in their workflows. They asked how to use it responsibly and effectively.

Phase 3: Upskilling and role evolution

Once AI moved from curiosity to expectation, the challenge of skill building became unavoidable.

From the start, we made a deliberate choice: This would be an upskilling and reskilling journey, not a wholesale replacement of roles. The goal wasn’t a new workforce. It was an investment in the one we had.

That decision shaped everything that followed.

Early upskilling efforts focused on practical entry points. Prompt engineering. Tool literacy. Understanding how copilots and early agents behaved in real engineering workflows. We treated these as something every engineer needed to experiment with, regardless of discipline.

But it quickly became clear that skills alone weren’t the full story. Roles themselves were starting to evolve.

A photo of Singh.

“Your title might still be software engineer or principal engineer. But if you’re acting like an AI engineer, what does that actually mean? That question helped us start defining how these roles were evolving.”

Ragini Singh, partner group engineering manager, Microsoft Digital

Across software development, service engineering, and cloud network engineering, the work was shifting from manual execution toward orchestration and oversight. Engineers were no longer expected to do every task end‑to‑end by hand. Instead, they were learning how to guide AI, review its output, and decide where automation made sense and where it didn’t.

As part of this shift, we began researching how the industry itself was redefining engineering roles. Leaders examined emerging job descriptions from across the market and compared them with Microsoft’s own role frameworks. At the time, there was no formal “AI engineer” role in the internal job library. Rather than creating a new title, the focus stayed on evolving expectations within existing roles.

The idea of an “AI‑native engineer” emerged not as a job description, but as a mindset.

An AI‑native engineer still understands systems, architecture, and risk. What’s different is how that expertise gets applied. Routine tasks are delegated to AI. Judgment, design, and accountability stay with the human. Engineers move from doing all the work themselves to supervising work done in partnership with AI.

“Your title might still be software engineer or principal engineer,” says Ragini Singh, a partner group engineering manager in Microsoft Digital. “But if you’re acting like an AI engineer, what does that actually mean? That question helped us start defining how these roles were evolving.”

This evolution looked different across disciplines. Software engineers focused on AI‑assisted coding, test generation, and spec‑driven development. Service engineers leaned into AI for incident response, knowledge capture, and operational decision support. Cloud network engineers began moving from manual intervention toward intelligent orchestration and agent‑assisted troubleshooting. The common thread wasn’t identical tooling; it was a shared shift toward higher‑order work and reduced toil.

Phase 4: Embedding AI across the engineering lifecycle

By this phase, we knew individual productivity gains were simply the starting point for larger and broader benefits.

Early on, most AI usage showed up in familiar places: Code suggestions, documentation summaries, quick answers. Useful, but fragmented. The bigger opportunity emerged when we stepped back and asked a harder question: What would it look like if AI were embedded across the entire engineering lifecycle, not just used at isolated moments?

We stopped thinking in terms of tools and started thinking in terms of flow. Design. Build. Test. Deploy. Operate. Improve. AI needed to show up across all of it, in ways that reinforced how engineers already worked.

A photo of Sadasivuni.

“If AI is only showing up at one step, you don’t get the full value. The real impact comes when it’s integrated across the lifecycle, where engineers can design, build, operate, and learn faster as a system.”

Sudhakar Sadasivuni, principal group engineering manager, Microsoft Digital

In software engineering, that meant pulling AI earlier into the process. We began using it to help draft requirements, reason through design options, and review code with broader system context to accelerate how quickly we could get to informed decisions. Coding assistance mattered, but it was no longer the center of gravity.

Testing and quality followed a similar pattern. AI supported test generation, defect analysis, and code review, reducing repetitive effort and helping issues surface sooner. That gave engineers more time to focus on quality and architecture instead of cleanup.

In service engineering, we embedded AI into incident management and operational workflows. Engineers used it to summarize incidents, surface relevant knowledge, and analyze signals across systems. In cloud network engineering, AI helped shift work away from manual intervention toward orchestration and intelligent troubleshooting. Across disciplines, the principle stayed the same: AI should reduce friction, not introduce it.

As we scaled this approach, one thing became clear. Embedding AI wasn’t just a technical exercise. It was a systems change.

“If AI is only showing up at one step, you don’t get the full value,” says Sudhakar Sadasivuni, a principal group engineering manager in Microsoft Digital. “The real impact comes when it’s integrated across the lifecycle, where engineers can design, build, operate, and learn faster as a system.”

As AI became part of core workflows, engineers remained accountable for outcomes. AI output was reviewed, tested, and validated like any other engineering input. Embedding AI didn’t lower the bar for rigor. It raised expectations around judgment, oversight, and data quality. We became more deliberate about responsibility and governance.

Over time, these integrations created compound benefits.

Faster design cycles reduced downstream rework. Better testing lowered operational noise. Improved operational insight shortened recovery times. AI stopped being something we used occasionally and became something the engineering system itself was built around.

Phase 5: Eliminating toil and accelerating outcomes

At some point, every AI story hits the same test. Does it actually make engineers’ days better? For us, that proof showed up fastest in the elimination of toil.

Across Microsoft Digital, engineers have always spent time on work that was necessary but draining. It included tasks such as manual troubleshooting, repetitive diagnostics, log analysis, and routine operational tasks that kept systems running but didn’t move the organization forward.

AI gave us a chance to change that.

A photo of Garrison.

“Toil reduction is the biggest thing. That’s where engineers’ eyes light up. If we can eliminate toil, engineers will flock to use AI. I really believe it.”

Beth Garrison, principal cloud network engineer, Microsoft Digital

In cloud network engineering, for example, troubleshooting used to require manually reconstructing what happened, such as logging into devices, chasing configurations, and piecing together context after the fact. As we began introducing agents and machine learning into these workflows, that work shifted. Instead of spending time assembling the picture, engineers could generate the views they needed faster and focus on resolving issues.

The same shift showed up in how we used operational data.

Rather than reacting to incidents after impact, we started using machine learning to analyze logs, identify patterns, and surface anomalies earlier. That moved teams from reactive response toward proactive monitoring and prevention.
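To make the pattern concrete, here is a minimal sketch of the kind of anomaly surfacing described above: flagging hours whose error volume deviates sharply from the norm. The data shape, threshold, and windowing are illustrative assumptions, not Microsoft Digital’s actual pipeline.

```python
def detect_anomalies(counts, threshold=2.5):
    """Return indices whose count deviates more than `threshold`
    standard deviations from the mean of the series."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / n
    std = variance ** 0.5
    if std == 0:
        return []  # a flat series has no outliers to surface
    return [i for i, c in enumerate(counts) if abs(c - mean) / std > threshold]

# Hypothetical hourly error counts; the spike at hour 6 is surfaced for review.
hourly_errors = [12, 9, 11, 10, 13, 8, 95, 11]
print(detect_anomalies(hourly_errors))  # → [6]
```

Surfacing the spike before it becomes an incident is the shift from reactive response to proactive monitoring the paragraph describes.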

One thing became clear very quickly: Toil reduction wasn’t just a benefit; it was the catalyst for adoption.

“Toil reduction is the biggest thing. That’s where engineers’ eyes light up,” says Beth Garrison, a principal cloud network engineer at Microsoft Digital. “If we can eliminate toil, engineers will flock to use AI. I really believe it.”

Service engineering followed a similar arc.

Across governance, operations, productivity, and cost management, we began applying agents and automation to simplify complex work and reduce manual review cycles. Governance and compliance workflows became faster and more consistent. Operational processes benefited from guided remediation and earlier insight. Knowledge capture improved as documentation and remediation guidance could be generated and updated automatically.

When we removed repetitive work such as manual triage, rote diagnostics, and endless documentation cleanup, we transformed how engineers spent their time. More focus on design. More proactive problem‑solving. More energy directed toward improving systems instead of just maintaining them.

Toil reduction made the value of AI tangible. It was the moment AI stopped being merely interesting and became indispensable, and our engineering teams started asking where else they could apply it next.

Measuring what matters

By the time AI was embedded across our engineering lifecycle, a new question came into focus: “How do we know it’s working?”

In the early days, we paid close attention to usage: which tools engineers were trying, where adoption was growing, and where it stalled. Those signals mattered; adoption was the leading indicator that people were getting comfortable and starting to integrate AI into real work.

“Adoption was always the starting point. But we were clear from the beginning that usage isn’t the destination. The real goal is impact: more time for engineers to focus on the work that truly matters.”

Ullas Kumble, principal group software engineering manager, Microsoft Digital

But using AI doesn’t automatically mean better outcomes. So, we shifted the conversation and started asking, “What’s different now that our engineers are using AI?”

That change reframed how we thought about measurement. We began looking beyond tool activity to understand impact across the engineering system. Faster design cycles. Earlier defect detection. Reduced time spent on repetitive operational work. Shorter incident resolution. Clearer documentation. Fewer handoffs. Less rework.

These weren’t abstract metrics. They showed up in the flow of work.

We were intentional about not forcing a single definition of value across every role. Software engineers, service engineers, and cloud network engineers experience impact differently. What mattered was that each team could point to tangible improvements in how work moved through the system.

That perspective shaped how leadership talked about success.

“Adoption was always the starting point,” says Ullas Kumble, a principal group software engineering manager at Microsoft Digital. “But we were clear from the beginning that usage isn’t the destination. The real goal is impact: more time for engineers to focus on the work that truly matters.”

Over time, this approach changed the quality of our conversations. Instead of debating whether AI was worth the investment, teams talked about where it was removing friction and where it still wasn’t delivering enough value. Measurement became a tool for learning and prioritization.

Moving forward

Looking ahead, one lesson stands out: this journey isn’t complete.

AI tools will continue to evolve. Agents will become more capable. Roles will keep shifting. What it means to be an engineer will continue to change. And that means our approach must stay grounded in the same principles that guided us from the start: invest in people, reinforce fundamentals, embed AI into real workflows, and stay honest about what’s working and what isn’t.

We didn’t set out to build an AI‑driven engineering organization overnight; we built it phase by phase.

By meeting engineers where they were.
By reshaping culture before redefining roles.
By embedding AI across the lifecycle, not bolting it on.
By reducing toil and measuring impact where it mattered most.

The result is better engineering: powered by AI, guided by human judgment, and built to keep evolving.

Key takeaways

Here’s a set of approaches you can take to establish AI-led engineering for your organization:

  • Start with access and understanding. Give engineers safe, easy access to AI in the tools they already use so curiosity and confidence can develop organically before you push for outcomes.
  • Frame AI as a partner, not a replacement. Position AI as an assistant that strengthens engineering judgment and fundamentals rather than a shortcut or a threat to roles.
  • Normalize experimentation without pressure. Encourage low‑risk experimentation and peer sharing instead of mandates, allowing adoption to grow through visible, practical wins.
  • Invest in upskilling. Focus on evolving skills and expectations within existing roles so engineers learn how to guide, review, and stay accountable for AI‑assisted work.
  • Embed AI across the full engineering lifecycle. Look beyond isolated productivity gains and integrate AI into design, build, test, operate, and improve workflows to unlock system‑level impact.
  • Measure impact where engineers feel it. Move past usage metrics and track outcomes like reduced toil, faster feedback, and improved flow so teams can see where AI is truly making work better.

Try it out

Try GitHub Copilot.

The post Powering the new age of AI-led engineering in IT at Microsoft appeared first on Inside Track Blog.

]]>
22539
The Frontier Firm: How knowledge workers are forging their own AI tools at Microsoft http://approjects.co.za/?big=insidetrack/blog/the-frontier-firm-how-knowledge-workers-are-forging-their-own-ai-tools-at-microsoft/ Thu, 05 Mar 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22549 Knowledge workers have all been there. Maybe you’re a product manager with a backlog that you can’t ever get to. Perhaps you’re a designer who can never seem to get engineering resources assigned to you. Or maybe you’re a program manager who routinely gets stuck copying data between systems by hand. Engage with our experts! […]

The post The Frontier Firm: How knowledge workers are forging their own AI tools at Microsoft appeared first on Inside Track Blog.

]]>
Knowledge workers have all been there.

Maybe you’re a product manager with a backlog that you can’t ever get to. Perhaps you’re a designer who can never seem to get engineering resources assigned to you. Or maybe you’re a program manager who routinely gets stuck copying data between systems by hand.

These are common challenges knowledge workers face everywhere, including here at Microsoft. A year ago, AI enthusiasts knew agents with tools could fix these problems—they just didn’t know where to start.

Some of our employees in Microsoft Digital, the company’s IT organization and its Customer Zero, took a grassroots approach to solving this problem. They built something called the Frontier Forge, a pro‑code “harness” that enables our less-technical employees to get work done with agents. They use it to quickly build agentic instructions and instantly share their solutions with peers, which accelerates our productivity across the company.

The Frontier Forge represents a cultural shift in how our product managers, designers, program managers and other “I’m not an engineer but I want to build stuff” employees now apply AI tools directly to their work.

What first began as a hackathon experiment has evolved into a thriving Microsoft-internal community with nearly 100 engaged contributors, an active Teams channel, and a GitHub repository filled with templates, learning modules, and ready-to-use AI agents. The impact is measurable: Forecasting, backlog grooming, and communication tasks that collectively took weeks now take hours or minutes.

A photo of Reifers.

“I saw myself and others spending too much of our time on data wrangling and admin tasks when we wanted to be strategizing. Nobody was building what felt truly agentic. So, we did it ourselves.”

Brett Reifers, senior product manager, Microsoft Digital

Employees who never saw themselves as technical are now building sophisticated data visualizations, automating workflows, creating prototypes, and generating learning modules. These were capabilities previously reserved for specialized engineering teams.

The “Forge” is where it’s all happening now.

From a hackathon to a movement

In early 2025, Brett Reifers, a senior product manager in Microsoft Digital, spotted a problem he couldn’t ignore. His peers, smart and driven product managers, kept asking the same question: “How do I use agents for my actual work?”

Beginner tutorials about prompt engineering felt trivial. Advanced agents with tools assumed engineering expertise. The middle ground, where AI meets real jobs, didn’t exist.

“I saw myself and others spending too much of our time on data wrangling and admin tasks when we wanted to be strategizing,” Reifers says. “Nobody was building what felt truly agentic. So, we did it ourselves.”

So, Reifers partnered with colleague Humberto Arias, a senior product manager in Microsoft Digital whose work explores the intersection of AI and productivity. Arias had been independently researching agentic solutions that could click through interfaces, open applications, and complete tasks autonomously.

The insight that unlocked everything came from a deceptively simple observation:

“Everything on the internet is a form—every site, mobile app, every click,” Reifers says. “If agents could fill out my forms in Azure DevOps, they could handle any web-based task.”

They pitched the concept of Copilot fulfilling form-based processes as an entry for Microsoft’s annual hackathon to Sean MacDonald, partner director of product management in Microsoft Employee Experience. MacDonald immediately recognized its potential.

“My reaction was simply, ‘This sounds amazing,’” MacDonald says. “This solution was exactly what we needed.”

The event proved agents could automate PM workflows: managing Azure DevOps items, generating summaries, and querying data systems. After the hackathon validated the concept, Arias suggested pushing the project to GitHub for wider exposure. Reifers then used GitHub Copilot itself, recursively using the very tools they were building, to open source the first Frontier Forge repository in 15 minutes.

A pro-code environment with natural language accessibility

The Forge combines GitHub Copilot, Visual Studio Code (VS Code), and Model Context Protocol (MCP) servers into a framework that makes professional development tools easily accessible to non-engineers.

A photo of MacDonald.

“The Frontier Forge is a place where you can learn regardless of your skill level. You can adopt what’s out there, even if you don’t know where to start.”

Sean MacDonald, partner director of product management, Microsoft Employee Experience

The core idea: Give employees a workspace seeded with community-created templates, learning modules, and custom agents tailored to Microsoft Digital contexts. Then let them build from there.

For MacDonald, the Forge has proven to be an accessible entry point for almost anyone, regardless of experience.

“The Frontier Forge is a place where you can learn regardless of your skill level,” MacDonald says. “You can adopt what’s out there, even if you don’t know where to start.”

Screenshot showing GitHub Copilot connecting with VS Code.
GitHub Copilot connects chat to VS Code’s built-in and MCP tool capabilities. The custom agents and skills in the workspace can all benefit from contextual access to the right tools for the right job.

An architecture for context-first AI

The technical architecture of The Frontier Forge leverages three layers simultaneously:

  • VS Code provides the enterprise managed workspace where everything happens.
  • GitHub Copilot offers chat functionality and AI assistance, with access to multiple models including Claude, GPT, and Gemini.
  • Model Context Protocol (MCP) servers act as standardized connectors that let agents access tools, data, and services locally, expanding what Copilot can decide and do with user approval.
A photo of Arias.

“With GitHub Copilot and MCPs, there are literally no boundaries. It’s hard to explain just how transformational this can be for a product manager. Whatever you ask is transformed into code with a purpose, allowing you to do something you couldn’t before.”

Humberto Arias, senior product manager, Microsoft Digital

The MCPs connect to services like Azure DevOps (for roadmap planning and backlog management), Microsoft Documentation, Figma (for design work), and dozens of other platforms that are essential to product manager workflows. New MCPs appear daily, expanding capabilities organically as the community builds them.

Employees can even ask GitHub Copilot to build custom MCPs for services lacking official integrations. When Arias needed a PowerPoint creator that didn’t exist, he asked GitHub Copilot to create one.

“With GitHub Copilot and MCPs, there are literally no boundaries,” Arias says. “It’s hard to explain just how transformational this can be for a product manager. Whatever you ask is transformed into code with a purpose, allowing you to do something you couldn’t before.”
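MCP is built on JSON-RPC, so the pattern the Forge relies on can be sketched with a toy dispatcher that mimics the shape of a tools/call request. This is illustrative only: real MCP servers are built with an MCP SDK and run over stdio or HTTP, and the tool name and schema here (a pretend backlog query standing in for an Azure DevOps connector) are invented for the example.

```python
import json

# Registry of callable tools; a real MCP server also advertises each
# tool's name, description, and input schema so the agent can choose.
TOOLS = {
    "query_backlog": lambda args: [f"Bug {i}" for i in range(args.get("top", 3))],
}

def handle_request(raw: str) -> str:
    """Dispatch one JSON-RPC tool-call request to a registered tool."""
    req = json.loads(raw)
    name = req["params"]["name"]
    result = TOOLS[name](req["params"].get("arguments", {}))
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "query_backlog", "arguments": {"top": 2}},
})
print(handle_request(request))
```

Because every connector speaks this same request/response shape, the agent can treat Azure DevOps, Figma, or a homegrown PowerPoint creator interchangeably, which is what makes the ecosystem grow organically.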

The shift from prompt engineering toward context engineering is another reason the Forge works. Its workspace settings, agent instructions, skills, and hooks provide a harness with guardrails that help colleagues adopt and use agents with confidence.

The Forge provides a curated starting point: Microsoft Digital-specific templates, governance frameworks, security guidelines grounded in Microsoft’s Responsible AI framework, and working examples employees can immediately use and modify.

Transformational impact

The productivity gains generated by The Frontier Forge are very real. Our employees report saving weeks or even months on certain projects, especially those that previously required extensive manual work or specialized technical skills.

Case in point: Laura Oxford, a senior content program manager in Microsoft Digital, had four years’ worth of Excel files and communication metrics reports. She had always intended to use the data to create marketing forecasts, but she could never find the necessary time or resources to perform the analysis.

A photo of Oxford.

“The key to creating the agent was going deep into the context. It was an iterative conversation, going back and forth to fine-tune the agent until I was consistently getting the output I wanted. But it truly was just a conversation—no tech skills needed.”

Laura Oxford, senior content program manager, Microsoft Digital

Through iterative, conversation-based prompting, Oxford’s agent analyzed patterns, created projections, and produced visualizations. Oxford now has a robust historical analysis that enables prediction of future campaign performance.

“The key to creating the agent was going deep into the context,” Oxford says. “It was an iterative conversation, going back and forth to fine-tune the agent until I was consistently getting the output I wanted. But it truly was just a conversation—no tech skills needed.”
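For a flavor of the projection work such an agent automates, here is a minimal least-squares trend fit extended one period ahead. The figures are invented for illustration; Oxford’s agent worked conversationally over four years of real Excel data rather than running a hand-written script.

```python
def forecast_next(values):
    """Fit y = slope * x + intercept by least squares over the series,
    then project the next point."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept

# Hypothetical quarterly campaign reach, growing by a steady 150 per quarter.
quarterly_reach = [1200, 1350, 1500, 1650]
print(forecast_next(quarterly_reach))  # → 1800.0
```

The point of the Forge is that Oxford never had to write this: she described the projection she wanted, and the agent produced the analysis and visualizations.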

Drafting clear, executive-ready communications for complex initiatives was what brought Mark Stratford, a senior product manager with the email and calendaring service team in Microsoft Digital, to the Forge.

Before the Forge, communicating status updates to leadership meant he had to manually synthesize data from CSVs, track several approval chains at once—often in messy emails—and iterate on visualizations for days on end.

Put more succinctly, these tasks are time-consuming chores that are perfect for AI.

“The Forge’s architecture changes how you think about the problem,” Stratford says. “Instead of iterating on prompts, you declare intent and desired outcome. The Forge’s architecture handles the rest.”

Using this pattern, Stratford created:

  • Over a dozen interactive dashboards for portfolio roadmaps, migration tracking, and service health monitoring.
  • Approval matrix visualizations mapping multi-stakeholder sign-off dependencies.
  • Data analysis pipelines transforming raw telemetry into executive-ready narratives.
A photo of Stratford.

“I didn’t need to fight ambiguity or handhold the model. The architecture gave the agent a stable, skills-driven foundation from the start, which dramatically accelerated development time and improved clarity.”

Mark Stratford, senior product manager, Microsoft Digital

The Forge’s clean separation between intent, constraints, tools, and data inputs eliminated the prompt-tuning loop. Stratford mapped his objectives into the agent framework once, relying on built-in structure and guardrails.

His analysis and drafting time dropped from days to minutes. Outputs like roadmaps and data visualizations went directly into decision workflows with no manual cleanup required.

“I didn’t need to fight ambiguity or handhold the model,” Stratford says. “The architecture gave the agent a stable, skills-driven foundation from the start, which dramatically accelerated development time and improved clarity.”

Building community and sharing knowledge

A simple, continuously improving repository has grown into something larger: a community of nearly 100 enthusiasts. Contributors are building templates, learning modules, and specialized MCPs tailored to their job functions. Teams are sharing wins and unlocked achievements.

“At its core, The Frontier Forge is an open-source, community‑driven experience. It’s a safer environment that will help people learn and apply Microsoft’s AI at work.”

Brett Reifers, senior product manager, Microsoft Digital

The Forge succeeds because of its emphasis on community and knowledge sharing. Its GitHub repository serves as a collaborative workspace where employees contribute agents, templates, and learning resources.

This sharing culture creates a compounding cycle. One employee’s outcome becomes another’s starting point. Contributors share useful agents immediately, without lengthy approvals. This grassroots approach lets innovation spread at the pace of curiosity.

“At its core, The Frontier Forge is an open-source, community‑driven experience,” Reifers says. “The Forge is a safer environment that will help people learn and apply Microsoft’s AI at work.”

Building a safe-to-fail path

For IT leaders looking to replicate something like the Forge, MacDonald’s guidance starts with reframing the challenge.

“Find the people who are super curious and who want to learn. They will be the ones who drive innovation with AI agents and other newly developed tools.”

Sean MacDonald, partner director of product management, Microsoft Employee Experience

The barrier to agent adoption for non-engineering roles isn't access to tools. It's confidence: giving people what they need to build agents and then put them to work. Providing a safe, hands-on environment where people can learn at their own pace, regardless of skill level, has been essential to success.

Another key has been to empower the people in your organization who are eager to innovate and try new things. The Forge began with two curious product managers who decided to experiment and then shared their idea with peers.

“Find the people who are super curious and who want to learn,” MacDonald says. “They will be the ones who drive innovation with AI agents and other newly developed tools.”

For IT leaders currently trying to prepare their organizations for an AI-driven future, the story shows that the answer isn’t to wait around for perfect tools or comprehensive employee training.

“The leaders that create safe spaces for non-engineers to build with AI now will compound that advantage for years,” Reifers says. “The ones that wait will spend 2027 trying to catch up.”

Our knowledge workers don’t need to wait for help any longer. Now they can forge their own path with an agent or other AI tool they build themselves.

Key takeaways

Here are some insights your leaders can use to build grassroots-led, AI-forward communities in your organization:

  • Start with volunteers, not mandates. The Forge grew to 100 contributors with zero top-down requirements. Organic growth from curious employees creates sustainable adoption.
  • Highlight your quick wins. Reifers’ and Arias’ live demos of MCPs, Oxford’s 90-minute forecast, and Stratford’s 20-minute drafts became the recruiting pitch for the next wave of adopters. Show your people results like these, then hand them the tools.
  • Lower barriers without lowering standards. Accessibility and quality aren’t mutually exclusive. Governance and security are non-negotiable. Configure it all into the harness.
  • Prioritize knowledge sharing and attribution. When one person solves a problem and shares it, dozens benefit immediately. Reward provenance.
  • Ship fast, improve later. The Forge repo was built in 15 minutes. Four months later, it contained 50+ templates and agents. As much as 80% of what is produced in the Forge is rewritten every other week as tools evolve. Ship MVPs and evolve based on real usage.
  • Reframe around outcomes, not tools. Shifting from “developer tool” to “Copilot workspace” helps knowledge workers see they belong.

The post The Frontier Firm: How knowledge workers are forging their own AI tools at Microsoft appeared first on Inside Track Blog.

]]>
22549
Read our seven tips for shifting to a ‘cloud native’ device management strategy http://approjects.co.za/?big=insidetrack/blog/read-our-seven-tips-for-shifting-to-a-cloud-native-device-management-strategy/ Thu, 19 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22433 At Microsoft, we manage a large, diverse device estate, with more than 1 million devices in use by employees and teams across our global corporate network. For years, we stitched together insights across multiple tools, wrote custom queries, and maintained fragile reports just to answer basic questions. This approach slowed investigations and delayed patch targeting. […]

The post Read our seven tips for shifting to a ‘cloud native’ device management strategy appeared first on Inside Track Blog.

]]>
At Microsoft, we manage a large, diverse device estate, with more than 1 million devices in use by employees and teams across our global corporate network.

For years, we stitched together insights across multiple tools, wrote custom queries, and maintained fragile reports just to answer basic questions. This approach slowed investigations and delayed patch targeting.

We needed a faster, stronger, cloud-native path.

“We’re investing in AI-powered predictive maintenance and intelligent troubleshooting to reduce friction in device management.”

Daniel Manalo, principal service engineer, Microsoft Digital

The advent of generative AI changed the way we manage our devices. Not only were we able to ask better questions and get targeted help right from the start, but we also got faster and more relevant answers from across our entire device management estate.

It’s simpler. It’s faster. It scales with our environment. And we’re doing it natively in the cloud.

“We’re investing in AI-powered predictive maintenance and intelligent troubleshooting to reduce friction in device management,” says Daniel Manalo, a principal service engineer in Microsoft Digital, the company’s IT organization.

AI and machine learning help us find errors faster and, in many cases, fix them autonomously. This reduces our downtime, prolongs the lifespans of our devices, and ensures our employees have a consistent and productive experience with their devices.

Today, we’re applying this approach to everyday operations: Speeding investigations, simplifying updates, and tightening the loop from detection to remediation. The overarching goal remains consistent—reduce workloads, improve clarity, and move our discoveries earlier in the risk window.

The role of Customer Zero in evolving modern device management

We serve as the company’s Customer Zero for our products here in Microsoft Digital. We run early capabilities in our own tenant, pressure‑test them at Microsoft scale, and feed what we learn straight back to engineering. The goal is simple: Turn good ideas into reliable features that any enterprise can use.

“We use our collective learnings from our internal deployments to improve our products, which makes them better for our employees and for our customers.”

Senthil Selvaraj, principal group product manager, Microsoft Digital

Our Microsoft Digital teams work side-by-side with the Intune product group to modernize our device management approach. The Intune group builds and operates the platform, while we bring real‑world scenarios, signals, and guardrails. Together, we help develop, test, and deploy a better cloud-native product for our customers.

“We use our collective learnings from our internal deployments to improve our products, which makes them better for our employees and for our customers,” says Senthil Selvaraj, a principal group product manager in Microsoft Digital.

For the same reasons, we work hard to make sure that we deploy our tools and services in the same way our customers do.

“That enables everyone at the company to have good visibility into the experiences our customers will have when our products get to them,” Selvaraj says. “This makes us more accountable to our customers and helps us move quickly when improvements are needed.”

Customer Zero for device management spans more than Intune.

We partner across teams responsible for Microsoft Purview, Microsoft 365 Copilot, Microsoft Defender, Windows (Autopatch and Hotpatch), GitHub, and Microsoft Azure to produce comprehensive device management capabilities. These are the surfaces where we test, learn, and refine the end‑to‑end device management experience.

The loop is tight. We identify a need, prototype a solution with the product groups, roll it out to targeted rings, measure impact, and iterate. Those learnings inform what ships in Intune—from data-driven insights to built‑in prompts that surface device health data as a conversation, rather than a simple query.

“Using natural language reduces the time it takes us to figure out what’s going on. We are able to ask Security Copilot questions naturally, which allows us to hear the signals that need our immediate action faster.”

Mohit Malhotra, product manager, Microsoft Digital

The result is a safer, faster path to value with AI-driven device management, including clear ownership, faster remediation, and features that arrive tested against operational reality.

We’ve learned a lot as Customer Zero, and we’re passing those lessons on to you.

Modern device management: Seven tips

Here are seven important tips that we’ve compiled to help with your device management efforts.

Tip 1: Ask natural-language questions with Microsoft Security Copilot

We use the generative AI capabilities in Microsoft Security Copilot to query device and vulnerability data in plain language and get a unified answer that we can act on.

This allowed us to replace bespoke reports with targeted questions.

“Using natural language reduces the time it takes us to figure out what’s going on,” says Mohit Malhotra, a product manager in Microsoft Digital. “We are able to ask Security Copilot questions naturally, which allows us to hear the signals that need our immediate action faster.”

Security Copilot lets us ask about device posture, app versions, cybersecurity vulnerabilities (known as Common Vulnerabilities and Exposures, or CVEs), and exposure across Microsoft Defender and Intune, without stitching the data together by hand. We get the context we need and move faster from finding to fixing.

How we use it

  • Scope impact: “List Windows devices running <app/version> that are vulnerable, with owners and deployment rings.”
  • Prioritize work: “Group affected devices by business unit and model; show counts and severity.”
  • Verify reach: “Confirm which devices received <policy/package> in the last 48 hours; flag failures.”

Prompts we rely on

  • “Show devices affected by <CVE/app version> and summarize recommended remediation steps.”
  • “Break down exposure by ring and list top 5 models with highest risk.”
  • “Identify outliers that failed the last policy sync and provide reasons.”

Why it helps

  • Less toil: No custom pipelines to maintain.
  • Faster triage: Discovery and scoping happen in one interaction.
  • Clear next steps: Results align to our Intune targeting and scheduling paths.

Best practices

  • Start specific: Name the product, version, and time window, then broaden as needed.
  • Keep follow‑ups short: Quick pivots like “group by region” or “add owner emails” maintain momentum.
  • Act on the output: Use the device lists to target updates or policies in Intune, then validate results with a final check.

Note

  • We align usage with least‑privilege access and established approval paths so insights come from authoritative sources and actions land through the right channel.
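The grouping that the “Prioritize work” prompt asks for is easy to picture in code. Here’s a minimal Python sketch of that kind of rollup; the device records and field names are hypothetical illustrations, not a Security Copilot API:

```python
from collections import Counter

# Hypothetical export of affected-device records; Security Copilot
# returns this kind of unified answer without manual data stitching.
devices = [
    {"name": "dev-001", "business_unit": "Finance", "severity": "High"},
    {"name": "dev-002", "business_unit": "Finance", "severity": "High"},
    {"name": "dev-003", "business_unit": "Sales", "severity": "Medium"},
    {"name": "dev-004", "business_unit": "Sales", "severity": "High"},
]

# Group affected devices by business unit and severity, then show counts.
counts = Counter((d["business_unit"], d["severity"]) for d in devices)
for (unit, severity), n in sorted(counts.items()):
    print(f"{unit:10} {severity:8} {n}")
```

The point of the prompt is that this rollup arrives as a conversational answer instead of a script you have to write and maintain.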

Tip 2: Find knowledge fast with Microsoft 365 Copilot

We use Microsoft 365 Copilot to pull device context from email, chats, and documents, allowing us to troubleshoot issues faster and easier using generative AI.

Incidents start with questions, not dashboards: “Who owns this package? When did we change that policy? Where did we discuss the driver rollback?”

The answers to those questions live in mail threads, Teams chats, and planning docs. Before Copilot, we were forced to sift through these materials manually, which cost us time. Now we ask one question and get a summary with sources, people, and links. That keeps the investigation moving and reduces handoffs.

“Copilot helps scan noisy logs and points us to likely causes. Our old process of opening logs, interpreting opaque error strings, and validating a hunch took too long. Getting faster answers matters when incidents stack up.”

Michael Griswold, principal service engineering manager, Microsoft Intune

This also helps us during the coordination phase. We can surface the approver for a change, the engineer who ran the last mitigation, and the runbook section that explains the rollback steps. We make better decisions because we see the history and the intent, not just the current state. Then we line up the action in Intune with the right stakeholders already looped in.

How we use it

  • Asking for recent context on a device model, configuration, or app to see decisions and outcomes in one place.
  • Retrieving owners, approvers, and on‑call contacts named in Outlook and Teams messages related to the issue.
  • Pulling change notes and runbook updates tied to a policy or package before we request an update in Intune.

Prompts we rely on

  • “Summarize recent emails and Teams messages about <device model/app version> and list owners mentioned.”
  • “Find the change note or runbook update for <policy/package> from the last 14 days.”
  • “Show known issues linked to <KB/app> and who resolved the last occurrence.”

Why it helps

  • Less hunting: We replace ad hoc inbox and wiki searches with a single query.
  • Faster coordination: We identify the right stakeholders and prior decisions immediately.
  • Better decisions: We confirm history and context before proposing changes in Intune.

Best practices

  • Keep prompts scoped. Include product, version, and a timeframe to focus your results.
  • Respect boundaries. Align usage with least‑privilege access and existing approval and auditing paths.
  • Capture outcomes. Link summaries, owners, and key docs back to the incident record so future searches return richer context.

Note

  • Copilot gets better as more decisions and runbooks live in Microsoft 365, since that’s where the signals come from.

Tip 3: Accelerate log triage with GitHub Copilot, Visual Studio Code, and Log Analytics

We use GitHub Copilot in Visual Studio Code with Azure Monitor Log Analytics to explain errors, draft KQL, and shorten device log investigations.

“Copilot helps scan noisy logs and points us to likely causes,” says Michael Griswold, a principal service engineering manager with the Microsoft Intune product group. “Our old process of opening logs, interpreting opaque error strings, and validating a hunch took too long. Getting faster answers matters when incidents stack up.”

Now we keep the entire loop in one workspace. AI in GitHub Copilot interprets the event, proposes likely causes, and generates KQL to confirm or rule out scenarios. We move from symptom to validated pattern without bouncing across tools.

How we use it

  • Connect VS Code to your Log Analytics workspace and load the tables you need (e.g., inventory and update events).
  • Paste a minimal log sample with timestamps and device identifiers, so Copilot has context.
  • Ask Copilot to summarize the error, suggest probable causes, and produce KQL to test each path.
  • Run the query, review clusters and outliers, and request an alternate query or grouping if noise is high.

Prompts we rely on

  • “Explain this error in a device‑management context and list three validation checks.”
  • “Write KQL to find matching failures in the last 24 hours and group by model and policy.”
  • “Join device inventory with update events for a given device and surface anomalies.”

Why it helps

  • Faster pattern recognition: Proposed queries get us to evidence quickly.
  • Less context switching: Analysis and validation happen inside VS Code.
  • Cleaner handoff: Results map to our Intune actions for targeted remediation.

Best practices

  • Keep inputs tight: Provide a small, representative log snippet, the affected device attributes, and a precise time window.
  • Iterate on queries: Ask for different filters, joins, or time ranges when results are noisy.
  • Close the loop: Use the device list to drive policy or update changes in Intune and confirm fixes with a final query.

Note

  • This workflow is broadly repeatable with GitHub Copilot, Visual Studio Code, and Azure Monitor Log Analytics.
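The query Copilot drafts in this workflow usually boils down to a group-and-count over a time window (in KQL terms, a `where TimeGenerated > ago(24h)` followed by `summarize count() by model, policy`). Here’s a rough Python equivalent using hypothetical exported event records, not a real Log Analytics schema:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Fixed "now" so the example is deterministic.
now = datetime(2026, 2, 19, 12, 0, tzinfo=timezone.utc)

# Hypothetical update-failure events exported from a log workspace.
events = [
    {"time": now - timedelta(hours=2),  "model": "Surface Pro 9", "policy": "Ring1-Quality"},
    {"time": now - timedelta(hours=5),  "model": "Surface Pro 9", "policy": "Ring1-Quality"},
    {"time": now - timedelta(hours=30), "model": "Surface Pro 9", "policy": "Ring1-Quality"},  # outside window
    {"time": now - timedelta(hours=1),  "model": "ThinkPad X1",   "policy": "Ring2-Quality"},
]

# Count failures in the last 24 hours, grouped by model and policy.
window = now - timedelta(hours=24)
clusters = Counter((e["model"], e["policy"]) for e in events if e["time"] > window)
for (model, policy), n in clusters.most_common():
    print(f"{model:14} {policy:14} {n}")
```

Clusters with high counts are the patterns worth validating; outliers get their own follow-up query.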

Tip 4: Keep firmware and drivers current with Intune update management

We use Intune firmware and driver update management to identify, approve, and deploy our OEM updates at scale.

“Staying current on firmware and drivers keeps devices stable and secure. With Intune, we stage updates, watch the rollout, and adjust before issues spread.”

Taqui Mohammad, senior service engineer, Microsoft Digital

Firmware and driver releases don’t land on a predictable schedule. Different vendors ship on different timelines, and a single environment can span hundreds of models.

Tracking this manually slows responses and leaves risk on the table. Intune centralizes the view so we can see what’s applicable, choose the right targets, and roll out updates with the same discipline we use for OS patches.

“Staying current on firmware and drivers keeps devices stable and secure,” says Taqui Mohammad, a senior service engineer in Microsoft Digital. “With Intune, we stage updates, watch the rollout, and adjust before issues spread.”

How we use it

  • Review applicability: Open the firmware and driver updates view to see available updates grouped by make and model.
  • Select a pilot: Target a small ring first (model, business unit, or region) and set short deadlines.
  • Plan time windows and restarts: Align deployments with maintenance windows and communicate expected reboots.
  • Monitor, then expand: Track success and failure signals, remediate issues, and scale to broader rings.

Configuration tips

  • Standardize categories: Separate firmware from drivers in policies so reporting and rollbacks are clean.
  • Use device tags consistently: Model, region, and business unit tags make scoping and expansion straightforward.
  • Define rollback steps: Document how to revert a driver or hold firmware for a specific model when needed.

Success checks

  • Compliance trend: Increased percentage of devices on the latest approved firmware and driver versions after each wave.
  • Incident correlation: Fewer support tickets related to device stability and peripherals on updated models.
  • Deployment reliability: Decreased failure rates as pilots catch issues before broad rollout.

Best practices

  • Pair with risk signals: Prioritize models tied to active vulnerabilities or incident clusters before broad rollout.
  • Keep rings small and fast: Validate quickly, then scale; long pilots hide issues and delay benefits.
  • Document exceptions: If a model needs a temporary hold due to app or peripheral compatibility, record the reason and set a review date.
  • Verify outcomes: Confirm update levels on target devices and scan for regressions in support queues.

Notes

  • Expect uneven arrival patterns across vendors and models; a weekly review cadence helps catch new updates without creating noise.
  • Treat firmware and drivers as first‑class updates; include them in regular compliance reports and reviews so they get consistent attention.
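The “compliance trend” success check above is straightforward to compute from inventory. This sketch uses hypothetical inventory rows and an approved-version map, not an actual Intune export format:

```python
# Hypothetical map of the latest approved firmware per model.
approved = {"Surface Laptop 5": "11.0.3", "ThinkPad X1": "1.48"}

# Hypothetical device inventory rows.
inventory = [
    {"model": "Surface Laptop 5", "firmware": "11.0.3"},
    {"model": "Surface Laptop 5", "firmware": "11.0.1"},
    {"model": "ThinkPad X1", "firmware": "1.48"},
    {"model": "ThinkPad X1", "firmware": "1.48"},
]

def compliance_by_model(inventory, approved):
    """Percent of devices on the latest approved firmware, per model."""
    totals, current = {}, {}
    for d in inventory:
        m = d["model"]
        totals[m] = totals.get(m, 0) + 1
        if d["firmware"] == approved.get(m):
            current[m] = current.get(m, 0) + 1
    return {m: 100 * current.get(m, 0) / n for m, n in totals.items()}

print(compliance_by_model(inventory, approved))
```

Tracking this percentage wave over wave is what turns firmware and drivers into first-class updates in compliance reviews.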

“Autopatch Update Readiness catches and resolves common blockers before deployment begins. What used to require manual checks and troubleshooting is now handled upfront, giving us smoother updates and a far more reliable experience for our employees.”

Dave Rodriguez, principal product manager, Microsoft Digital

Tip 5: Speed updates with Windows Autopatch, Hotpatch, and Auto Remediation Update Readiness

We use Windows Autopatch and Hotpatch to reduce disruptions and keep our devices current, and we pair them with automated readiness and remediation so our changes land safely and quickly.

Autopatch handles orchestration for quality updates and feature releases. We define rings that reflect business risk and user impact, then let the service pace deployments as health signals arrive.

“Autopatch Update Readiness catches and resolves common blockers before deployment begins,” says Dave Rodriguez, a principal product manager in Microsoft Digital. “What used to require manual checks and troubleshooting is now handled upfront, giving us smoother updates and a far more reliable experience for our employees.”

Where Hotpatch is available, we apply security updates without a reboot, which cuts downtime and helps us move faster on critical fixes. An automated readiness layer checks prerequisites, fixes common blockers, and confirms that devices are ready before rollout.

How we use it

  • Enroll eligible devices in Autopatch and map them to the right scope so ownership, reporting, and break‑glass procedures are clear.
  • Build rings that reflect business priority and user profiles (e.g., VIP laptops, frontline kiosks, engineering workstations, and lab devices).
  • Enable Hotpatch on supported SKUs and confirm policy alignment so security updates apply without restarts where possible.
  • Run readiness checks that verify update agent health, policy state, storage and battery requirements, VPN reachability, and available maintenance windows.
  • Auto‑remediate common blockers such as stale update caches, missing prerequisites, paused services, or conflicting policies before a device enters the next ring.
  • Start with small cohorts, monitor early signals like install rate and post‑update stability, validate rollback paths, then expand the scope deliberately.

Operational checks

  • Ring coverage ensures eligible devices are actually assigned to a ring and not stranded outside the managed flow.
  • App and driver smoke tests validate business‑critical apps, kernel drivers, and peripherals on pilot cohorts before broad rollout.
  • Safeguard holds and known‑issue tracking watch for vendor or service flags that can pause or throttle a ring until a fix is available.
  • Rollback readiness confirms who owns the decision, what steps they follow, and how telemetry proves the rollback succeeded on affected devices.

Why it helps

  • Continuous movement shortens exposure windows because healthy rings advance without waiting for a fixed date.
  • Fewer interruptions improve user experience, as Hotpatch removes the need for restarts on supported devices.
  • Higher success rates come from automated readiness and remediation, removing predictable failures before deployment.

Best practices

  • Use consistent device tags so rings map cleanly to models, regions, and business units, which keeps targeting and reporting trustworthy.
  • Keep pilots small and fast to find issues quickly, then scale once success criteria are met and rollback is validated.
  • Communicate maintenance expectations in plain language so users know timing, restart behavior, and how to report problems.
  • Pace by risk rather than calendar, advancing rings when health metrics and support signal quality are within thresholds.
  • Review deployment dashboards daily during rollout, adjust ring size or cadence when error rates rise, and capture lessons learned for the next wave.

Note

  • Hotpatch availability depends on your Windows edition and configuration, so confirm support and prerequisites as part of your scoping work.
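“Pace by risk rather than calendar” comes down to a gate: a ring advances only when its health signals clear agreed thresholds. This is an illustrative decision sketch; the signal names and thresholds are our own choices for the example, not an Autopatch API:

```python
def ring_should_advance(install_success_rate, rollback_rate, open_incidents,
                        min_success=0.98, max_rollback=0.01, max_incidents=0):
    """Advance a deployment ring only when health signals clear thresholds."""
    return (install_success_rate >= min_success
            and rollback_rate <= max_rollback
            and open_incidents <= max_incidents)

# Healthy pilot ring: advance to the next ring.
print(ring_should_advance(0.995, 0.002, 0))  # True
# Elevated rollback rate: hold the ring and investigate.
print(ring_should_advance(0.99, 0.03, 0))    # False
```

Encoding the gate like this keeps ring advancement a repeatable decision instead of a judgment call made under deadline pressure.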

Tip 6: Keep third‑party apps current with Intune Enterprise App Management

We use Intune Enterprise App Management to keep third‑party apps current without constant packaging work.

“Third-party apps fall out of date fast, so we’re standardizing how they’re updated. We do that with Enterprise App Management, which gives us reliable packages and keeps us moving at a steady cadence.”

Humberto Arias, senior product manager, Microsoft Digital

Third‑party software drives real risk: versions drift, silent installers change, and manual packaging pipelines break at the worst time.

With Enterprise App Management, we select from a managed catalog, set assignment and update rules, and let the service handle new versions as they ship. We spend our time on exceptions, not routine updates.

“Third-party apps fall out of date fast, so we’re standardizing how they’re updated,” says Humberto Arias, a senior product manager in Microsoft Digital. “We do that with Enterprise App Management, which gives us reliable packages and keeps us moving at a steady cadence.”

This approach also improves the user experience. Updates arrive in predictable windows and dependencies are handled in a timely manner. We avoid surprise prompts and failed installs that generate tickets. When we do need to pause or pin a version, we scope it cleanly and document the reason.

How we use it

  • Build a standard catalog that covers the common apps our users need and assign clear ownership for each title.
  • Configure update behavior to auto‑update.
  • Use rollout rings so pilots validate the installation success rate and app behavior before expanding to broad audiences.
  • Scope assignments with device tags such as model, region, or business unit to simplify targeting and reporting.
  • Monitor install and update status, investigate failures, and retry with adjusted timing or requirements when needed.
  • Capture exceptions for apps that need holds or custom steps and set review dates to revisit the decision.

Scenarios we run

  • Rapid response when a high‑risk CVE drops by prioritizing affected apps and moving them to the front of the update queue.
  • Version cleanup by removing outdated or duplicate installers so devices converge on a single approved release.
  • Conditional deployment for specialized teams by offering an app as available instead of required while still tracking adoption.

Why it helps

  • Less packaging toil because the catalog supplies current installers and metadata.
  • Faster patching for common apps because updates flow as they publish.
  • Better compliance reporting because versions and assignments are consistent across rings and groups.

Best practices

  • Keep an authoritative list of approved apps with owners, support notes, and rollback steps.
  • Coordinate maintenance windows for high‑impact apps so users can save work before enforced updates.
  • Require pilots for any app with add‑ins or drivers and validate workflows with real users before scaling.
  • Use uninstall assignments to remove unapproved or vulnerable software and block reinstallation where needed.
  • Document app‑level exceptions, including the rationale and a date to re‑evaluate.

Notes

  • Some apps need pre-install checks or post-install steps, so include scripts or detection rules where required.
  • Track license terms and usage for commercial titles so updates do not outpace entitlements.
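Catching version drift is ultimately a comparison between what’s installed and what the catalog approves. A minimal sketch of that check follows; the app names are hypothetical, and real installers often need smarter version parsing than simple dotted numbers:

```python
def parse_version(v):
    """Turn '7.1.2' into (7, 1, 2) for comparison; assumes numeric dotted versions."""
    return tuple(int(part) for part in v.split("."))

def drifted(installed, catalog):
    """Return apps whose installed version is behind the approved catalog version."""
    return sorted(
        app for app, version in installed.items()
        if app in catalog and parse_version(version) < parse_version(catalog[app])
    )

# Hypothetical approved catalog and per-device installed versions.
catalog = {"ReaderApp": "23.8.1", "ZipTool": "7.1.2"}
installed = {"ReaderApp": "23.6.0", "ZipTool": "7.1.2"}
print(drifted(installed, catalog))  # ['ReaderApp']
```

In practice the catalog service does this bookkeeping for you; the value of the sketch is showing how little logic you should have to own yourself.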

Tip 7: Close the loop with Defender Vulnerability Management and Intune security tasks

We use Microsoft Defender Vulnerability Management with Intune to turn exposure insights into targeted actions that close risk fast.

“The Intune Vulnerability Agent gives us a clear list of issues by device and owner. It shortens our path from finding a problem to fixing it.”

Harshitha Digumarthi, senior product manager, Microsoft Digital

Incidents don’t end when we spot a CVE. They end when devices are fixed and verified.

Vulnerability Management gives us an AI-powered live inventory of devices, software, and configurations, then connects that inventory to known threats. It shows which versions run where, highlights misconfigurations, and explains why a device is at risk. We see the problem and the cause, not just a risk score.

“The Intune Vulnerability Agent gives us a clear list of issues by device and owner,” says Harshitha Digumarthi, a senior product manager at Microsoft Digital. “It shortens our path from finding a problem to fixing it.”

It also ranks what to fix first. Factors like severity level, exploit availability, active attacks, and business context all feed into the priority list, so that commensurate effort goes where it’s needed most. The service recommends specific actions such as updating, uninstalling, reconfiguring, or applying a policy as appropriate.

From there, it pushes the work into our change tools. Tasks flow to Intune, Autopatch, and Enterprise App Management so the remediation is traceable. Exceptions are tracked, including data on owners, compensating controls, and review dates. Closure is verified by watching exposure decrease and confirming the fix landed with the intended devices.

How we use it

  • Review exposure by CVE, software, and device group to see where risk concentrates.
  • Prioritize based on business impact, internet exposure, and privilege level so high‑value targets move first.
  • Select the fix that fits the issue, including app updates through Enterprise App Management, OS and quality updates through Autopatch or Hotpatch (where supported), firmware and drivers through Intune update management, or policy changes for configuration weaknesses.
  • Target the right scope using tags for model, region, and business unit so remediation lands where it’s needed.
  • Set deadlines and user experience settings that balance urgency with productivity.
  • Validate closure by rechecking exposure, confirming install success, and watching support signals for regressions.

What we monitor

  • Exposure trends over time, to prove that remediation is reducing risk.
  • Top vulnerable apps and models, so effort tracks where it matters most.
  • Noncompliant devices and owners, so follow‑ups are direct and accountable.
  • Exceptions that need compensating controls, documented rationale, and a review date.

Why it helps

  • Fewer handoffs because the same team that sees risk can initiate remediation.
  • Measurable outcomes because exposure and deployment data live in connected systems.
  • Consistent execution because rings, tags, and approvals follow the same patterns as other updates.

Best practices

  • Keep device tags authoritative so targeting and reporting stay reliable.
  • Use pilots even for urgent fixes to catch compatibility issues before broad rollout.
  • Link vulnerability records to Intune assignments so audit and learning loops are clear.
  • Communicate clearly with affected users about timing, restarts, and how to report problems.
  • Document exceptions with owners and expiration dates so temporary holds don’t become permanent.

Notes

  • Not every fix is an update, and some issues require a configuration change or feature disablement with clear rollback steps.
  • Least‑privilege access and standard approvals keep remediation fast without expanding risk.
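The prioritization described above (severity, exploit availability, active attacks, business context) can be pictured as a weighted score feeding an ordered remediation queue. The weights and fields here are illustrative only, not Defender Vulnerability Management’s actual model:

```python
def risk_score(finding):
    """Illustrative weighted score; higher means fix sooner."""
    score = {"Critical": 40, "High": 30, "Medium": 15, "Low": 5}[finding["severity"]]
    if finding["exploit_available"]:
        score += 25
    if finding["actively_attacked"]:
        score += 25
    if finding["business_critical"]:
        score += 10
    return score

# Hypothetical findings; the CVE labels are placeholders, not real identifiers.
findings = [
    {"cve": "CVE-A", "severity": "High", "exploit_available": True,
     "actively_attacked": False, "business_critical": True},
    {"cve": "CVE-B", "severity": "Critical", "exploit_available": False,
     "actively_attacked": False, "business_critical": False},
]

# Highest-risk work goes to the front of the remediation queue.
queue = sorted(findings, key=risk_score, reverse=True)
print([f["cve"] for f in queue])  # ['CVE-A', 'CVE-B']
```

Note how an exploitable High on a business-critical asset outranks an unexploited Critical, which is exactly the kind of context a raw severity score misses.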

Key takeaways

Our approach for managing devices and updates has changed. We shifted device and update management from manual hunting and ad hoc remediation to a connected loop that starts with a question and ends with verified resolution—reducing investigation time and speeding recovery.

A few lessons stand out:

  • Make natural language work by grounding it in trust. Natural language becomes a force multiplier when insights are drawn from authoritative data and access is tightly scoped.
  • Keep pilots small, fast, and intentional. Focused pilots surface issues early without slowing momentum or introducing unnecessary risk.
  • Standardize signals to build confidence. Consistent tagging and clear ownership make reports, deployment rings, and rollbacks easier to interpret and trust.
  • Control exceptions with discipline. Every exception requires a written rationale and a review date, ensuring temporary holds don’t become permanent policy.
  • Close the loop—every time. Verification matters as much as detection. We confirm outcomes and capture learnings to continuously improve the next cycle.

What we’re improving next:

  • Strengthen question‑to‑action flows. We’re deepening prompts and playbooks that connect Security Copilot and Intune so operators can move from investigation to scoped change in a single flow.
  • Expand Hotpatch adoption and measurement. As support broadens, we’re increasing usage and measuring the impact on downtime, reliability, and user experience.
  • Grow app coverage with clearer stability rules. We’re expanding Enterprise App Management while enforcing stronger version‑pinning guidance where predictability is critical.
  • Automate deployment decisions. Additional automation around ring placement, readiness checks, and rollback triggers will allow deployments to adapt to live health signals.
  • Accelerate investigations with reusable telemetry. We’re developing richer telemetry patterns and reusable KQL in Visual Studio Code to reduce noise and speed repeat investigations.

It’s a continuing evolution of our awareness and capabilities in device management, and we’ll keep improving on it, one loop at a time.

The post Read our seven tips for shifting to a ‘cloud native’ device management strategy appeared first on Inside Track Blog.

]]>
22433
A day in the life of a Microsoft employee using Copilot http://approjects.co.za/?big=insidetrack/blog/a-day-in-the-life-of-a-microsoft-employee-using-copilot/ Thu, 12 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22318 Meet Opeoluwa Burnett, a lead product manager in Microsoft Digital, the company’s IT organization. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic with experts from our Microsoft Digital team. Opeoluwa uses Microsoft 365 Copilot to turn her typically packed […]

The post A day in the life of a Microsoft employee using Copilot appeared first on Inside Track Blog.

]]>
Meet Opeoluwa Burnett, a lead product manager in Microsoft Digital, the company’s IT organization.

Opeoluwa uses Microsoft 365 Copilot to turn her typically packed schedule into productive and rewarding workdays. Here are some of her favorite prompts and practical tips for getting the most out of the AI assistant.

Start strong: Using Copilot Chat to start your day

Microsoft 365 Copilot Chat now opens automatically when Opeoluwa opens a new tab in her browser, and it also works in any Microsoft 365 app like Word, Excel, PowerPoint, OneNote, and Outlook. 

To start her day, she tells Copilot:

“Using my Teams chats, calendar, and email, summarize my activities for today. Provide me the information in a table, organized by priority, with clear and concise information.”

If she’s been away for a while, she’ll use a more specific prompt, like this one:

“I’ve been on vacation. Summarize what I’ve missed from the last 7 days into bullet points of actions where I’ve been mentioned. Then, look through my email “Inbox” folder, Teams chats, and meeting recaps and transcripts. Create a table that includes: action item, deadline, who assigned it. Order all items in the table by the suggested priority in which they should be completed.”

“Don’t forget to use the summarize feature to tackle those long threads. It makes it much, much easier to catch up with what everyone is talking about, and it helps make sure I don’t miss something.” 

Opeoluwa Burnett, senior product manager, Microsoft Digital

Inbox Zen: Using Copilot Chat to tidy your inbox

When Opeoluwa needs to get organized, she uses Copilot Chat to help her. She’ll write a prompt like:

“Give me a recap of the meeting series ‘Engineering team stand-up.'”

“Provide a comprehensive summary of my emails about (the FY26 AI adoption strategy).”

She suggests using the same technique for unpacking long, drawn-out conversations, the kind where it’s hard to know what’s up, what’s down, and what her action items might be.

“Don’t forget to use the summarize feature to tackle those long threads,” she says. “It makes it much, much easier to catch up with what everyone is talking about, and it helps make sure I don’t miss something.”

Focus on strategy: Using Copilot Chat to draft documents

When it’s time to do deep thinking and get core work done, Opeoluwa asks Copilot Chat for help getting started with prompts like this:

“Create a detailed product plan for a new feature. It’s a small, AI-powered desktop companion that helps workers manage distractions, track focus time, and suggest breaks.”

When she wants to start interacting with the text, she simply heads over to Word and opens up Copilot Chat to resume the conversation.

She can do similar things in the desktop version of Word with Agent Mode. Once she has a document open, she selects the Tools icon to enable Agent Mode. From there, she can prompt Copilot in the same way and watch it draft her document right before her eyes.

Lunch break: Using Copilot Search to see what’s happening near you

If Opeoluwa visits the office, she can check in with chat to see what’s happening around her. In addition to discovering what’s for lunch at the café, she can explore fun things to do with prompts like:

“What’s happening on the Microsoft Commons campus today?”

“Are there any special events going on today?”

And while she’s at it, she can use the Employee Self-Service Agent not only to find out what’s on the menu at the café she’s visiting, but also to order lunch.

“Is anyone serving a taco salad at The Commons today, and if so, help me order one to pick up at 12:50 p.m.?”

Teamwork: Creating an agent to keep everyone aligned and up to date

When she needs to, Opeoluwa spins up an agent that she and her team can use to get their work done. For example, she might create an agent to automate processes that previously everyone had to slog through manually.

“Anyone can create an agent and share it, but remember that—for now—only the person who creates it can edit it,” she says. “Keep that policy in mind if you’re going to want to make changes and have the agent grow with you.”

End of the day: Wrapping up with Copilot Chat

Opeoluwa uses Copilot Chat to help her tidy up her day, making sure she didn’t miss anything important and that she’s ready for what tomorrow will bring. To wrap up, she uses a prompt like this:

“Using my Teams calendar, email, and recent chats, summarize my activities for today in a plain-text bulleted list.”

“More often than not, thanks to help from Copilot, I’m in a good position to pause, reflect, and get ready for the next day,” she says. “Copilot has really helped me get organized, focus on the right things, and be more effective.”

Key takeaways

Here is a quick summary of Opeoluwa’s methods for getting more out of your day with Microsoft 365 Copilot:

  • Start your day with clarity. Copilot Chat pulls your Teams chats, calendar, and email into a single prioritized view, so you can quickly understand your schedule and what needs attention that day.
  • Tackle long threads in seconds. Intelligent summaries from Copilot make it easy and fast to catch up on lengthy email chains, meeting series, and conversations without missing key decisions or action items.
  • Accelerate deep work. Use Agent Mode in Copilot Chat to draft documents, product plans, and structured outlines so you can spend more time refining ideas, instead of starting from scratch.
  • Stay connected on campus. Copilot Search helps you find people, files, and authoritative sources of information.
  • Create lightweight agents for your team. Build simple agents to automate repetitive processes and keep everyone aligned—just remember that the agent’s creator maintains edit control.
  • End the day organized. Copilot Chat generates a clean summary of your meetings, emails, and chats so you can reflect, wrap up, and prepare for tomorrow.

The post A day in the life of a Microsoft employee using Copilot appeared first on Inside Track Blog.

]]>
22318
Leadership at scale: The organic rise of a Viva Engage superstar at Microsoft http://approjects.co.za/?big=insidetrack/blog/leadership-at-scale-the-organic-rise-of-a-viva-engage-superstar-at-microsoft/ Tue, 03 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22132 Ravi Vedula didn’t know it was going to happen. He didn’t plan it. Yet somehow, he became a Viva Engage superstar. “Modern leadership is about kindness, it’s about empathy, and, frankly, it’s about authenticity. With Engage, I get to talk to my 1,000-plus-member team every day. I get to know them, and they get to […]

The post Leadership at scale: The organic rise of a Viva Engage superstar at Microsoft appeared first on Inside Track Blog.

]]>
Ravi Vedula didn’t know it was going to happen.

He didn’t plan it.

Yet somehow, he became a Viva Engage superstar.

“Modern leadership is about kindness, it’s about empathy, and, frankly, it’s about authenticity. With Engage, I get to talk to my 1,000-plus-member team every day. I get to know them, and they get to know me.”

Ravi Vedula, corporate vice president, IDEAS

When Vedula looks back at why he first started posting on Microsoft Viva Engage, it was because he wanted to connect with a team that was too big to meet with one-on-one. The team also spans multiple continents and time zones.

It was about being real and accessible.

“Modern leadership is about kindness, it’s about empathy, and, frankly, it’s about authenticity,” says Vedula, a corporate vice president of engineering who leads our IDEAS (Insights, Data, Engineering, Analytics, and Systems) team here at Microsoft.

“With Engage, I get to talk to my 1,000-plus-member team every day,” he says. “I get to know them, and they get to know me.”

That’s the “why” in this story.  

The “what” is how Vedula became a Viva Engage superstar—he is now arguably the most influential person on the platform here at Microsoft.

Connecting with your employees

For many leaders, the challenge isn’t deciding what to say to their team—it’s figuring out how to build visibility and trust with employees spread across regions and time zones. At Microsoft, that challenge has reshaped how leaders like Vedula think about communication, presence, and influence in a digital workplace.

Traditional leadership communication models, built around periodic team meetings and carefully phrased messages, don’t always scale. They can inform, but they rarely invite participation. And when communication stays unidirectional, leadership presence can begin to feel distant, even invisible.

A photo of Sitaram.

“What stands out about Ravi’s use of Viva Engage is how naturally he connects with his stakeholders. He brings technical depth without losing the human element, and this combination has expanded his reach across the organization.”

Murali Sitaram, corporate vice president of Viva Engage

That’s why more leaders are rethinking not just what they say, but how they show up. Leadership presence isn’t about volume or polish. It’s about showing up consistently, in ways that invite participation and dialogue.

The journey that Vedula took from new Viva Engage user to influencer offers a clear example of how leadership presence can scale without becoming impersonal.

“What stands out about Ravi’s use of Viva Engage is how naturally he connects with his stakeholders,” says Murali Sitaram, a corporate vice president of Viva Engage. “He brings technical depth without losing the human element, and this combination has expanded his reach across the organization.”

Sitaram says his colleague’s goal has always been about creating a dialogue with employees to give them a closer connection to their work and our company mission.

“Ravi does this with transparency and openness, qualities that are accentuated by his use of the Engage platform,” Sitaram says.

From broadcast to conversation

Vedula first started using Engage because he wanted a way to show up consistently for his team, to build trust, and be visible, and he wanted to do this without relying on one‑directional broadcasts or infrequent all‑hands meetings.

Engage gave him a place to speak directly and listen openly.

“Every Engage post is a statement of culture. It’s like a culture flare we’re sending up every time.”

Ravi Vedula, corporate vice president, IDEAS

What makes Vedula’s approach distinctive isn’t a communications plan or a carefully curated feed. It’s the decision to treat Engage as a place for conversation rather than corporate messaging. Instead of polished announcements, he uses storylines to recognize milestones, celebrate people, share personal anecdotes, reflect on lessons learned, and respond directly to employee comments.

Those everyday moments become signals—not just of what leadership values, but of how leaders can participate alongside their teams. “Every Engage post is a statement of culture,” Vedula says. “It’s like a culture flare we’re sending up every time.”

A photo of Mayans.

“Ravi’s use of Viva Engage shows exactly why we built deep integrations across Microsoft Teams and Microsoft 365. By sharing openly through storylines, he’s created a trusted channel that meets employees where they work. That’s the kind of leader‑driven communication ecosystem we envisioned.”

Jason Mayans, vice president of product management, Viva Engage

Today, he posts consistently, often multiple times a week. His team doesn’t just read his posts; they respond to them.

Meeting employees where they work

Using Viva Engage to meet employees inside the flow of their daily work enables Vedula to be persistently present for his team.

“Ravi’s use of Viva Engage shows exactly why we built deep integrations across Microsoft Teams and Microsoft 365,” says Jason Mayans, vice president of Product Management for Viva Engage. “By sharing openly through storylines, he’s created a trusted channel that meets employees where they work. That’s the kind of leader‑driven communication ecosystem we envisioned.”

When communication shows up in the flow of work, leadership presence shifts from an event to a rhythm—something employees experience regularly, not just during major announcements or all‑hands meetings.

Over time, that rhythm of consistent communication shapes culture.

A photo of Nguyen.

“Customers often ask what great leadership communication looks like in Viva Engage. Ravi is one of the examples I point to. His storylines feel real, relatable, and useful—and employees respond. It’s the kind of engagement I hope other leaders are inspired to model.”

Steve Nguyen, principal program manager, Viva Engage Customer Experience

Recognition feels authentic. Participation feels safe. Leadership becomes something employees experience consistently, rather than something reserved for formal moments. Comments and reactions aren’t afterthoughts; they’re part of the conversation—and part of how trust is built.

For Vedula, that visibility isn’t about scale for its own sake.

“It’s about ensuring people feel seen, heard, and connected to the work they do and the mission they support,” he says. “Consistent, everyday engagement reinforces that I’m present as a leader—I’m not just observing from a distance.”

Putting leadership principles into practice

From a customer perspective, Vedula’s journey offers a practical model for what effective leadership communication can look like with Viva Engage.

“Customers often ask what great leadership communication looks like in Viva Engage,” says Steve Nguyen, a principal program manager for Viva Engage Customer Experience. “Ravi is one of the examples I point to. His storylines feel real, relatable, and useful—and employees respond. It’s the kind of engagement I hope other leaders are inspired to model.”

What stands out in Vedula’s approach to using Viva Engage isn’t polish or personality.

A photo of Cirone.

“In the era of hybrid work, one of the biggest challenges in employee communications is helping leaders stay connected at scale. Viva Engage is a powerful tool to help leaders stay connected to their employees in the daily flow of work, rather than only during big events like an all-hands meeting.”

John Cirone, senior director of global employee and executive communications

It’s consistency, authenticity, and a willingness to treat communication as a shared space rather than a broadcast channel. Different leaders will bring different styles—but the underlying principles remain the same.

Crucially, Vedula’s approach doesn’t rely on personality or scale.

It relies on trust.

“In the era of hybrid work, one of the biggest challenges in employee communications is helping leaders stay connected at scale,” says John Cirone, a senior director of global employee and executive communications. “Viva Engage is a powerful tool to help leaders stay connected to their employees in the daily flow of work, rather than only during big events like an all-hands meeting.”

Leadership in digital spaces is cumulative. Each interaction either reinforces or weakens credibility. Scaling leadership isn’t about saying more—it’s about listening well and showing up consistently in the places where conversations already happen.

“I can’t be in every room… but the impact of what I’m saying can be felt in every room when I post on Engage.”

Ravi Vedula, corporate vice president, IDEAS

Vedula’s evolution from first Engage post to Engage superstar illustrates a broader lesson: when leaders treat communication as an opportunity for participation instead of a performance, employees feel empowered to help shape the culture.

Digital workspaces deserve the same care and intentionality as physical ones. The conversations that unfold there—day by day—send powerful signals about trust, recognition, and belonging.

As Vedula puts it, “I can’t be in every room… but the impact of what I’m saying can be felt in every room when I post on Engage.”

Key takeaways

Here are some insights that leaders can use to strengthen connection and trust in their organizations:

  • Leadership presence is harder, and even more important, at scale: As teams spread across locations and time zones, visibility and trust don’t happen automatically. Leaders need deliberate ways to stay present so employees continue to feel seen, heard, and connected to the work.
  • Dialogue builds trust faster than broadcast communication: Two‑way conversations signal openness and respect in ways one‑directional announcements cannot. When leaders invite participation and respond in the same spaces as their teams, trust compounds more quickly.
  • Consistent, everyday engagement reinforces culture: Recognition, reflection, and small moments of interaction add up over time. Regular participation helps culture become something employees experience daily, not just during major events or when reading formal communications.
  • Meeting employees in the flow of work increases connection: Communication that surfaces alongside the tools employees already use feels more natural and accessible. When leadership presence shows up in the flow of work, engagement becomes a habit rather than a special effort.
  • Leadership communication is most effective when practiced consistently and intentionally: Impact comes less from polish and more from purpose. Leaders who communicate thoughtfully and consistently—focused on connection rather than performance—create spaces where trust and participation can grow.

The post Leadership at scale: The organic rise of a Viva Engage superstar at Microsoft appeared first on Inside Track Blog.

]]>
22132