Agent Archives - Inside Track Blog
How Microsoft does IT

Microsoft CISO advice: The importance of a written AI safety plan
http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-the-importance-of-a-written-ai-safety-plan/
Thu, 09 Apr 2026 16:00:00 +0000

Yonatan Zunger, CVP and Deputy CISO for Microsoft, has spent his career considering complex questions of security and privacy while building platform infrastructure and solutions. His experience underpins his advice on how to build a safety plan for working with AI. First and foremost, his advice is to have a written plan.

“Make it an expectation in your organization that people will create safety plans and have them for everything,” Zunger says. “People get so excited about having clarity in front of them that they end up making much more systematic, careful plans, and the rate of errors goes down dramatically.”

Watch this video to see Yonatan Zunger discuss his advice for creating an AI safety plan. (For a transcript, please view the video on YouTube: https://www.youtube.com/watch?v=H5reZ0uw0EA.)

Key takeaways

Here are questions and ideas to consider as you create a safety plan for your AI systems:

  • Define the problem. What problem are you trying to solve? A simple and clear problem statement is always a great starting point before building anything, including an AI agent.
  • Outline the solution. What is the basis of your solution? Can you explain your solution to an end user? What does a developer or administrative user of your solution need to know about what it is and does?
  • List the things that can go wrong. What can go wrong with your solution? Creating this list is the first step to figuring out how to deal with those issues.
  • Document your plan. What is your plan to address identified concerns? Identify the process you will follow when something goes wrong.
  • Draft your plan early and update it as your solution matures. Your safety plan can be as simple as a list or outline and should evolve as you prepare to build your solution.
  • Get feedback and buy-in. When you review the plan with stakeholders and leaders in your team and organization, you may uncover risks or issues you had not thought of. You also build awareness and agreement on what to do when something goes wrong.
  • Make a template and build its use into your processes. This tip is for anyone who leads a team or influences process development. Encourage using a safety template in all your projects to bring clarity and structure to how you work with AI.
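The "make a template" tip lends itself to something as lightweight as a versioned file. As an illustrative sketch (the field names below are our own, not a prescribed Microsoft format), a team could capture a safety plan in code so it is reviewed and diffed like any other project artifact:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyPlan:
    """A minimal written AI safety plan; starts as an outline, evolves with the project."""
    problem_statement: str                      # What problem are we solving?
    solution_outline: str                       # How does the solution work, in plain terms?
    failure_modes: list[str] = field(default_factory=list)     # What can go wrong?
    mitigations: dict[str, str] = field(default_factory=dict)  # failure mode -> response process
    reviewers: list[str] = field(default_factory=list)         # stakeholders who gave feedback

    def unmitigated(self) -> list[str]:
        """Failure modes that have no documented response yet."""
        return [f for f in self.failure_modes if f not in self.mitigations]

# Hypothetical example project
plan = SafetyPlan(
    problem_statement="Summarize support tickets with an AI agent",
    solution_outline="An LLM drafts summaries; humans review before anything is posted",
    failure_modes=["hallucinated details", "leaked customer data"],
    mitigations={"hallucinated details": "Require human review before publishing"},
)
print(plan.unmitigated())  # ['leaked customer data']
```

Keeping the plan next to the project makes the written plan reviewable in the same pull requests that change the system it governs, and the `unmitigated` check gives reviewers an obvious gap list.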

Microsoft CISO advice: The most important thing to know about securing AI
http://approjects.co.za/?big=insidetrack/blog/microsoft-ciso-advice-the-most-important-thing-to-know-about-securing-ai/
Thu, 02 Apr 2026 16:00:00 +0000

Using AI comes with inherent risks. In a recent video, Yonatan Zunger, CVP and deputy CISO for Microsoft, suggests that thinking about AI as a new intern will help you naturally take the right approach to AI security.

Zunger and his team focus on AI safety and security. They consider all the different ways anything involving working with AI can go wrong.

“An important thing to know about AI is that AIs make mistakes,” Zunger says. “You already know how to work with systems that make mistakes and get tricked.”

Watch this video to see Yonatan Zunger discuss his advice for working with AI. (For a transcript, please view the video on YouTube: https://youtu.be/b1x6gDbSWVY.)

Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success
http://approjects.co.za/?big=insidetrack/blog/deploying-the-employee-self-service-agent-our-blueprint-for-enterprise-scale-success/
Thu, 12 Mar 2026 16:05:00 +0000


The case for AI in employee assistance

The advent of generative AI tools and agents has been a game changer for the modern workplace at Microsoft. And one of the foremost examples of how we’re reaping the benefits of this agentic revolution is our deployment of our new Employee Self-Service Agent across the company.

Thanks to the power of AI, agents, and Microsoft 365 Copilot, our employees—and workers everywhere—are discovering new ways to be more productive at their jobs every day. Recent research shows that knowledge workers are increasingly seeing big gains from using AI tools for work tasks, according to our Microsoft Work Trend Index.

As an AI-first Frontier Firm, Microsoft is at the leading edge of a transformation that’s bringing this technology into all aspects of our workplace operations. With tools like Microsoft 365 Copilot providing “intelligence on tap,” we’re forging a human-led, AI-operated work culture that enables our employees to accomplish more than ever before.

Bringing AI to employee assistance

As part of this move to embed AI across our enterprise, it was a natural step for us to apply this burgeoning technology to a common pain point for us and many workplaces today—employee assistance.

Workers in organizations large and small face many common issues in their day-to-day jobs. Whether it’s a problem with their device, a question about their benefits, or a facilities request, our typical employee was often forced to navigate a bewildering array of tools, apps, and systems in order to get help with each specific task.

This confusion is reflected in research showing that most workers are dissatisfied with existing employee-service solutions.

  • 76% of employees find it difficult to quickly access company resources.
  • 58% of employees struggle to locate regularly needed tools and services.

Our studies show that most employees have trouble finding the appropriate tools and resources they need to address their workplace-related questions.

Realizing that this was an ideal opportunity for AI, we set out to develop a state-of-the-art agentic solution. At Microsoft Digital, the company’s IT organization, we partnered with our product groups to develop and deploy the Employee Self-Service Agent, a “single pane of glass” that employees can turn to any time they need help. The product is now broadly available in general release.


Because Copilot is our “UI for AI,” the Employee Self-Service Agent is delivered as an agent in Microsoft 365 Copilot. If your employees have access to Copilot, you can deploy the agent at your company at no extra cost. If your employees don’t have a Copilot license, they can access it via Copilot Chat if it’s enabled by your IT administrator.

For the initial development and launch of our Employee Self-Service Agent, we decided to provide agentic help in three categories: Human resources, IT support, and campus services (real estate and facilities). Every organization will have to make its own determination for which functions to include in their implementation. Note that the agent is inherently flexible and expandable; we plan to add additional capabilities, such as finance and legal, in the future.

We learned many lessons in the almost year-long process of developing and implementing the Employee Self-Service Agent across our organization worldwide. The goal of this guide is to pass on what we learned—including how we used it to provide value to our employees and vendors—to help you prepare for, implement, and drive adoption of your own version of the agent.  

“With this employee self-service solution, we’re shaping a new era in worker support,” says Nathalie D’Hers, corporate vice president of Microsoft Employee Experience. “With AI, every interaction is intuitive, every resource is within reach, and help feels seamless—creating an experience that empowers our people and accelerates business outcomes.”

Before you start: Developing your plan

As you embark on your Employee Self-Service Agent journey, make sure to establish a clear and structured plan. This was a critical step for us in our deployment, and we can say with confidence that it will help you avoid surprises and increase your chances of a successful outcome.

Based on our experience here at Microsoft, below is a high-level outline of the steps you should consider as you prepare to deploy your agent.

1. Define prerequisites
Start by making sure that all foundational elements for the agent are in place.

  • Assign licenses to your employees who will interact with the agent. They will need Microsoft 365 Copilot or Copilot Chat.
  • Verify readiness by configuring your Power Platform environments, applying Data Loss Prevention (DLP) policies, and setting up isolation (limited and controlled deployment with guardrails in place) where needed.
  • Ensure connectivity with critical systems by confirming that you have appropriate APIs and connectors available and functioning for the essential workplace systems that your organization uses (e.g., Workday, SAP SuccessFactors, and ServiceNow).

2. Identify your core team and responsibilities
Successful implementation of the Employee Self-Service Agent requires collaboration across multiple roles and departments in your organization.

  • Business owners from the areas your agent will cover—such as human resources and IT support—can help you define requirements, priorities, success criteria, and telemetry needs.
  • Platform administrators, particularly for Power Platform and tenant/identity teams, can manage your technical configuration.
  • Content owners and editors are needed to identify the knowledge sources to surface in the agent, curate new knowledge sources, and maintain the data underpinning these sources on an ongoing basis.
  • Subject matter experts can provide important “golden” prompt and user scenarios that the agent should prioritize and answer accurately.
  • Compliance, privacy, and security leaders and their teams are needed to address risk considerations.
  • Support professionals can help build a structure for live agent escalation and ticketing operations (in situations where the agent is unable to provide a solution).
  • Focus groups of end users assist with validating requirements and scenarios, as well as help with testing the agent.

3. Establish a clear timeline
We found that creating a schedule for the creation, implementation, and adoption of the agent is crucial. This phased approach will help you maintain momentum and accountability over the duration of the project.

For example, here’s a rough implementation timeline that you might use to gauge your progress:

Gantt chart showing 15-week timeline with assessment, deployment, pilot launch, and rollout phases.

4. Articulate your vision

Communicate your rollout plan to your team, including timelines and phases, and adjust it based on feedback. Establish clear goals and meaningful success metrics to guide you and make sure your efforts are in alignment with your company objectives. (Note: You may want to consider key upcoming projects or events in your organization and link the agent roadmap to them. This will help you meet your project’s success criteria faster and encourage quicker agent adoption.)

5. Define your governance

This phase will allow you to define policies and standards and conduct a thorough content audit to ensure accuracy, relevance, security, and sustainability.

6. Implement your agent

This phase involves configuration and integration, followed by testing.

7. Roll out the agent while driving adoption and measurement

We advise deploying the Employee Self-Service Agent using a phased, or ringed, approach. We started with a small group of employees, then gradually rolled it out to larger and larger groups before finally releasing it to our entire organization.

We encouraged adoption with internal targeted communications and promotional efforts. Careful measurement enabled us to track impact and optimize agent performance. This type of concerted change management allowed us to share the latest product developments with our employees and to keep them excited and engaged with the tool.
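One common way to implement this kind of ringed rollout (a general pattern, not necessarily how Microsoft gated the agent internally) is to hash each user ID into a stable bucket, so every employee lands in a deterministic ring as the rollout percentage grows:

```python
import hashlib

# Cumulative percentage of users covered by rings 0..3 (illustrative numbers)
RING_PERCENTS = [1, 5, 25, 100]

def rollout_ring(user_id: str) -> int:
    """Deterministically map a user to the first ring whose cumulative
    percentage covers their hash bucket (0-99)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    for ring, pct in enumerate(RING_PERCENTS):
        if bucket < pct:
            return ring
    return len(RING_PERCENTS) - 1

def is_enabled(user_id: str, current_ring: int) -> bool:
    """The agent is visible once the rollout has reached the user's ring."""
    return rollout_ring(user_id) <= current_ring
```

Because the assignment is a pure function of the user ID, an employee never flips between rings as the audience expands, which keeps feedback and telemetry from each phase clean.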

By investing sufficient time and effort in the planning phase of your deployment, you’ll create a strong foundation for a secure, scalable, and successful self-service agent experience.

Chapter 1: Governance means getting your data right

When a Microsoft employee enters a query into an AI chat tool like Microsoft 365 Copilot, they know that they may not receive an individualized response that is directly specific to their situation. They are aware that they might need to verify the answer they receive with further research and additional sources.

But when it comes to our company-endorsed self-service agent, the stakes are different. Our employees expect to receive accurate and personally relevant responses when they ask for help. This is particularly true for queries related to important personal details, like HR-related questions about leave policies or benefits.


Although the Employee Self-Service Agent comes pretrained with basic HR and IT support data, we found that the quality of the responses that our employees receive is directly connected to the accuracy, currency, and depth of the information we provide to the tool. You’ll want to spend the necessary time and effort to make sure that your data governance process is well thought-out and thorough, so that your employees experience the best possible results.

“Employee self‑service has a higher bar than generic AI tools,” says Prerna Ajmera, general manager of HR strategy and innovation. “People expect personally tailored and highly accurate answers, especially for HR moments that really matter. We designed the Employee Self‑Service Agent with that expectation in mind, pairing trusted data and deep personalization with strong governance controls so that privacy, security, and trust are built into every interaction.”

Major considerations for governance

We learned that before you configure your agent, you need to establish guardrails that protect your data’s integrity and that build your employees’ trust. These considerations will form the backbone of your governance framework:

  • Managing requirements: Define what the agent must deliver and align your stakeholders on clear, prioritized goals and objectives.
  • Determining and managing resources: Ensure you have the right people, systems, and funding in place to support your full product lifecycle.
  • Data security: Protect your sensitive employee information with strong controls, compliant storage, and least‑privilege access.
  • User access: Establish who can use, administer, and update your agent, with appropriate permissions and guardrails.
  • Change tracking: Monitor your updates to content, configurations, and workflows so your agent always reflects your current policies.
  • Reviewing: Regularly evaluate your content’s accuracy, the agent’s performance, and your organizational fitness to help you keep your employees’ experience with the agent trustworthy.
  • Auditing: Maintain traceability for compliance, incident investigation, and quality assurance across all of your data flows.
  • Deployment control: Manage where, when, and how you roll out new versions of the agent to reduce disruption and ensure consistency.
  • Rollback: Prepare a fast, safe path to reverting your changes if something breaks.

We found that addressing these considerations early in the process creates a governance structure that is proactive rather than reactive, increasing the quality of responses and setting your organization up for success.

Architecture essentials

Understanding the architecture of our agent helped our governance teams make informed decisions about our configuration and integration. To do that, they needed to review and understand its key architectural components. You’ll need to do the same.

Here’s a list of the different architecture components that our team assessed, to help you get started on your own process:   

  • Topics: Structured intents (e.g., “view paystub”) that align to employee questions and drive consistent answers.
  • Domain packages: Pre-curated bundles for different agent segments (like HR and IT support) that provide reusable patterns, prompts, and integrations.
  • Knowledge sources: Documents, intranet pages, FAQs, and databases that ground responses in authoritative content.
  • Connectors: Secure integrations to systems of record (like Workday or SAP SuccessFactors) can help enable read/write operations. (Because the Employee Self-Service Agent was built with Copilot Studio, it has access to more than 1,400 different connectors.)
  • Instructions: Governance-approved rules and prompts that shape tone, guardrails, and escalation behavior.

Assessing and preparing your content

A key early governance step is to audit all relevant content in your knowledge bases. This process should include assessing, updating, and, if necessary, restructuring this information before it is ingested by the agent.

An important caveat here is that the agent’s ability to understand which policies and procedures apply to which employee relies on your content having consistent metadata, permissions, and content structure. We found that before feeding your data into the agent, you need to:

  • Inventory existing content: Your content will incorporate many different types, such as SharePoint pages, Microsoft Teams posts, PDFs, intranet articles, and knowledge-base documents. The goal of the inventory process is to identify content that is complete rather than outdated, duplicative, or siloed; if there are issues with the content, they should be addressed before loading into the agent.
  • Assign knowledge owners: The owners should be SMEs who can help validate, tag, and maintain the content going forward. Part of this process is training up knowledge owners to be able to prepare and maintain content in ways that make it easily consumable by both agents and people.
  • Structure content for discoverability: All your content needs to have accurate metadata, well-defined topic pages, and consistent naming so that the agent can surface the right information at the right time.

We found that completing a thorough content audit helps ensure that the Employee Self-Service Agent isn’t just chatting—it’s delivering trusted, up-to-date answers that save your workers time and effort as they go about their day.
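As a minimal sketch of the inventory step (the record fields, dates, and thresholds here are assumptions for illustration; a real inventory would come from SharePoint, a CMS export, or a knowledge-base API), an audit pass might flag stale or ownerless content before ingestion:

```python
from datetime import datetime, timedelta

# Illustrative inventory records
inventory = [
    {"title": "VPN setup guide", "owner": "it-support", "last_reviewed": "2026-01-10"},
    {"title": "2023 benefits FAQ", "owner": "", "last_reviewed": "2023-03-01"},
]

def audit(items, as_of="2026-03-01", max_age_days=365):
    """Flag content that is stale or has no knowledge owner assigned."""
    cutoff = datetime.fromisoformat(as_of) - timedelta(days=max_age_days)
    issues = []
    for item in items:
        if datetime.fromisoformat(item["last_reviewed"]) < cutoff:
            issues.append((item["title"], "stale"))
        if not item["owner"]:
            issues.append((item["title"], "no owner"))
    return issues

print(audit(inventory))
# [('2023 benefits FAQ', 'stale'), ('2023 benefits FAQ', 'no owner')]
```

Running a pass like this on a schedule turns the one-time audit into the ongoing maintenance duty assigned to knowledge owners.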

Be aware of tone and conversational flow

Providing vetted and well-structured data to the agent is important, but it’s not the entire battle. You’ll also need to make sure your agent is given clear guidance on conversational tone and instructions on what to do in specific scenarios.

Make sure you incorporate:

  • Global instructions: Define the agent’s voice, behavior, and escalation rules to ensure consistency and trust. 
  • Topic-level triggers: Map natural language phrases to specific workflows (such as “reset password” or “check PTO”) so the agent routes these common queries correctly.
  • Advanced knowledge rules: Prioritize which data sources to use in ambiguous scenarios, and define when the agent should ask clarifying questions.

Taking these steps gave our agent a better chance of being accurate, helpful, and aligned with our organization’s specific preferences.
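Topic-level triggers can be pictured as a phrase-to-workflow routing table. This hand-rolled sketch (Copilot Studio handles this natively with topics; the workflow names are hypothetical) also shows the clarifying-question fallback for ambiguous or unmatched requests:

```python
# Illustrative trigger phrases mapped to hypothetical workflow names
TOPIC_TRIGGERS = {
    "reset password": "workflow_password_reset",
    "check pto": "workflow_pto_balance",
    "view paystub": "workflow_paystub",
}

def route(utterance: str) -> str:
    """Return the workflow for the trigger phrase the utterance contains;
    fall back to a clarifying question when nothing matches or when more
    than one topic matches (an ambiguous request)."""
    text = utterance.lower()
    hits = [wf for phrase, wf in TOPIC_TRIGGERS.items() if phrase in text]
    if len(hits) == 1:
        return hits[0]
    return "ask_clarifying_question"

print(route("How do I reset password on my laptop?"))  # workflow_password_reset
print(route("I need some help"))                       # ask_clarifying_question
```

The fallback branch is the part worth copying: routing an ambiguous query to a clarifying question is usually better than guessing between two workflows.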

Addressing common scenarios with “golden” content

Another vital aspect of your content audit is identifying the most frequently accessed information in each topic area.

A good example comes from the preparation of our IT support content for ingestion by the Employee Self-Service Agent. One of the focuses of this effort was on so-called “golden prompts”: the 20 or so topics that generate up to 80 percent of our employee queries (a version of the famous “80/20 rule”).

Our golden prompts are a curated set of scenarios that:

  • Represent our critical user workflows and edge cases
  • Possess clear, expected responses (golden responses)
  • Cover core functionality that must never break

We made sure that the agent was providing high-quality responses for these common scenarios—we recommend you do the same.
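Golden prompts pair naturally with a regression harness that runs after every content or configuration change. In this sketch, `ask_agent` is a hypothetical stand-in for a call to your deployed agent, and the prompts and required phrases are illustrative:

```python
# Curated golden prompts with the phrases their responses must contain
GOLDEN_PROMPTS = [
    {"prompt": "How do I reset my password?",
     "must_contain": ["self-service portal"]},
    {"prompt": "How much PTO do I have?",
     "must_contain": ["PTO balance"]},
]

def ask_agent(prompt: str) -> str:
    # Stub: replace with a real call to your deployed agent.
    canned = {
        "How do I reset my password?": "Use the self-service portal to reset it.",
        "How much PTO do I have?": "Your PTO balance is shown in the HR app.",
    }
    return canned.get(prompt, "")

def run_golden_checks() -> list[str]:
    """Return the prompts whose responses are missing required phrases."""
    failures = []
    for case in GOLDEN_PROMPTS:
        response = ask_agent(case["prompt"])
        if not all(phrase in response for phrase in case["must_contain"]):
            failures.append(case["prompt"])
    return failures

print(run_golden_checks())  # [] when all golden responses pass
```

Wiring a check like this into your release process is one way to enforce "core functionality that must never break" as knowledge sources churn.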

Including “zero prompt” content

Another important aspect of your content process should be to develop “zero prompts.” These are preconfigured prompts in the agent that the user can simply click on to get an answer for a common issue or request.

For example, if one of your employees wants to understand how to set up a VPN, they simply click on the zero prompt provided for that topic. The tool then gives them complete instructions on how to set one up.

During our deployment of the agent, one case where we prepopulated the tool with content for a specific, high-demand scenario came when Microsoft made a major announcement regarding employees returning to the office. We knew this policy change would generate a lot of questions from our employees.

In preparation for this, we asked Microsoft 365 Copilot to create a single document that pulled in all the “return to office” material found in its verified HR content database. We then made this document available to the agent. Just by taking that simple step, we saw our user satisfaction ratings in the tool jump from 85 percent to 98 percent for that issue!

In your own deployment, think about what issues and topics generate the most questions from your employees. You can then prepare specific content to address these scenarios, which will increase your chances of success with the agent.

Data security and compliance

Data security was a high priority when we developed our agent, especially because it must necessarily access sensitive HR information on a regular basis. During product development, we made sure that the agent adhered to enterprise-grade security standards, including identity federation, least-privilege access, and encrypted storage.

Because the agent is built on Copilot Studio, it supports robust data-loss prevention features. The agent also complies with regulatory frameworks like General Data Protection Regulation through built-in auditing and data-retention policies.

One of the big advantages that an AI agent has over a static website or similar data source is the ability to personalize responses for each user. At the same time, we had to make sure that the agent had guardrails in place to avoid overexposing sensitive information. This included detailed disclaimers to help call out these kinds of responses and flag them for more careful handling.

Our agent complies fully with our accessibility standards as well. Like all Microsoft products and services, the tool underwent a rigorous review to ensure it was fully accessible for all users.

Responsible AI

Whenever a new AI application is launched, there may be concerns raised about potential challenges regarding bias, safety, and transparency. That’s why the Employee Self-Service Agent follows the Microsoft Responsible AI principles by default.

When you enable the sensitivity topic in your agent, it screens all responses for harassment, abuse, discrimination, unethical behavior, and other sensitive areas. We tested the agent thoroughly for objectionable responses before it was launched to a broad internal audience at Microsoft.

In addition, the agent includes an emotional intelligence (EQ) option. This feature is designed to make responses more empathetic, context-aware, and relevant for diverse user audiences. It analyzes the conversation’s context and tailors the agent’s replies to ensure that users feel understood and valued throughout their session (which could be particularly relevant for any conversations related to sensitive HR topics, such as family leave). The EQ option is customizable and can be turned off by your product admins.

Key takeaways

The following are important considerations for data governance when you deploy your Employee Self-Service Agent:

  • Employee expectations regarding accuracy and relevance are high for employee self-service tools, which makes data governance a key aspect of your deployment.
  • Consider which data repositories are best to incorporate into your agent, and make sure they are up-to-date and well-structured. This process requires a thorough content audit.
  • Pay special attention to the so-called “golden prompts” that make up a large percentage of expected queries. The agent’s answers to these questions should be top-notch.
  • Restructuring content can improve response quality. When we anticipated huge interest in a particular topic, such as workplace policy changes, we restructured our content on that subject and saw a significant jump in user satisfaction.
  • Build your agent to meet or exceed high standards for data security, privacy, and Responsible AI. These are vital concerns for any product that has access to sensitive personal information.


Chapter 2: Implementation with intention

Deploying a powerful and versatile tool like the Employee Self-Service Agent is no simple task. It requires guidance and buy-in from top leaders at the company, as well as detailed planning and execution across disparate parts of your organization. Here, we identify some of the key steps that we took here at Microsoft that can help guide you when launching your own self-service agent.

Determine category parameters

One of the first major decisions around implementing the agent is deciding which business function—we call them agent starters—to choose for your initial implementation.

We recommend starting with HR support or IT help (we started with HR). Both agent starters can be deployed into a single Employee Self-Service Agent experience, but they must be deployed one at a time. 

Note that we’ve built the Employee Self-Service Agent to be connectable with other first- or third-party Copilot agents, enabling a seamless handoff to those agents without having to navigate to other tools or interfaces.

Understanding your deployment steps

There were four essential stages involved in the deployment of our agent, each with multiple steps. Here’s a quick rundown that you can use at your company:

  1. Preparation for deployment
    • Establish roles: Define who will manage, configure, and support the tool, assigning responsibilities to ensure accountability during deployment.
    • Set up your environment: Prepare the necessary hardware, operating system, and network configurations so the agent can run smoothly.
    • Set up third-party system integration: Ensure your infrastructure can securely connect and exchange data with external systems that the agent will need to integrate with.
  2. Installation
    • Install the agent: Deploy the core Employee Self-Service Agent software on the designated servers or endpoints.
    • Install accelerator packages: Add any desired connectors that enable the agent to communicate with commonly used systems for HR, payroll, IT support, etc.
  3. Customization
    • Configure the core agent: Adjust default settings to align with your organization’s policies and workflows.
    • Identify knowledge sources: Specify where the agent will pull information from, such as internal knowledge bases or FAQs.
    • Provide common questions and responses: Add employee FAQs to improve the agent’s ability to respond quickly and accurately.
    • Identify sensitive queries: Flag questions and responses that involve confidential or regulated information to ensure they’ll be handled securely.
  4. Publication
    • Approve the agent: Complete internal reviews and compliance checks to confirm the agent meets your organizational standards before full rollout.
    • Publish the agent: Make the configured agent available to your employees in your production environment.

Customization

The Employee Self-Service Agent operates as a custom agent within Copilot Studio, using our AI infrastructure via the Power Platform. The agent is constructed on a modular architecture that allows you to integrate it with your own enterprise data sources using APIs, prebuilt and custom connectors, and secure authentication mechanisms.

To streamline this integration process, we provide a library of prebuilt and custom connectors through both Copilot Studio and Power Platform. Preconfigured scenarios include connecting to major enterprise service providers such as Workday, SAP SuccessFactors, and ServiceNow. (View the full list of connectors offered by Copilot Studio.)

These connectors facilitate data exchange with the following systems and other agents in this ecosystem:

  • HR information systems
  • IT systems management
  • Identity management
  • Knowledge base platforms

We found that third-party integrations require setup effort and technical expertise across stakeholders in your tenant. Be sure to get buy-in and involve all relevant departments that will be impacted.

Rollout: A phased approach

As previously noted, we started our agent with HR content and then added IT support (we later expanded to include campus services help as well). We rolled the agent out to different groups of employees and geographic regions around the world over the course of months, adding new knowledge sources to the different categories at each step along the way. This gave us an opportunity to gather user data and refine performance of the tool as we went.

Graphic shows the phased rollout of the Employee Self-Service Agent to Microsoft employees in different regions of our global workforce.
We executed a phased rollout of the Employee Self-Service Agent across different regions and countries at Microsoft. As we expanded the audience for the tool, we also added more categories, knowledge sources, and capabilities.

Adding campus support services required us to handle queries and tasks related to dining, transportation, facilities, and similar subjects. This was a challenging addition, because the facilities and real estate space—unlike the HR and IT support areas—doesn’t have many large service providers, which are easier to provide prebuilt connectors for.

One area that did lend itself to prebuilt connectors, however, was facilities ticketing.

Because many of our campus facilities vendors use Microsoft Dynamics 365, we were able to create an out-of-the-box connector in the agent for their ticketing process. You can take advantage of these kinds of preconfigured tools in your deployment.  

Key takeaways

Here are some things to remember when implementing the Employee Self-Service Agent at your organization:

  • Decide which starter agent you will deploy first. We recommend starting with a single agent covering one area (vertical), such as HR or IT support, and then expanding from there.
  • Consider a phased rollout to allow time to refine responses and ramp up the number of topic areas and knowledge sources installed in your agent.
  • Use the prebuilt connectors to make it easier to integrate the agent with your existing systems. We developed customized connectors for major HR and IT service providers and a Microsoft Dynamics 365 connector to integrate with our many facilities vendors around the world.

Learn more

How we did it at Microsoft

Further guidance for you

Chapter 3: Driving adoption by breaking old habits

Once upon a time, when our employees needed help with a technical issue or an HR question, they literally picked up the phone and called the relevant internal phone number. That quickly evolved into an email-centered system, where employee questions were sent to a centralized inbox that would then generate a service request. Still later, chat-based help was introduced.

Using AI to handle employee questions and service requests is a natural step in this evolution, as large language models were built to parse vast data repositories and return the right information (often with the help of multi-turn queries and responses). And by encouraging self-service, an AI agent can help meet employee needs faster while saving the organization’s staffing resources for other needs.

But getting employees to change their habits and use a tool like the Employee Self-Service Agent wasn’t going to be as easy as just flipping a switch. Here’s how we handled this important change management task at Microsoft.

Adoption across verticals

A key principle that we learned during the adoption process was that 80% of our change management activities for the agent are applicable to all our verticals (whether it be HR, IT support, campus facilities, or another category). We didn’t need to reinvent the wheel each time we added to the topics that the agent covered.

This allowed us to create a change management “playbook” that we could use each time we expanded to a new category. So, while roughly 20% of the strategies we used were specific to that vertical, the vast majority were the same, which saved time as we moved through onboarding the different categories.

Leadership is key

To get our employees to change the way they ask for help, we found it essential to get the support of our key leaders, something we refer to as “sponsorship.”

We found that good sponsorship doesn’t just come from your central product, communications, or marketing groups. It is equally vital to invest in relationships with local leadership in different regions as you roll out the agent (especially in multinational companies like ours).

Local leaders understand the various regional intricacies—including language, functionality, and the rhythm of the business—that can help inspire their segments of the workforce to adopt a new tool, and then evangelize it to others in turn. Working closely with these kinds of sponsors will help you pull off a successful adoption campaign.

If you have works councils, be sure to seek out your representatives and solicit their feedback on your agent experience early on. You can help them understand how the agent was developed and trained, then address any concerns they raise.

We’ve found that once our works councils are made aware of the careful processes we go through to protect user privacy, and to ensure compliance with our Responsible AI standards, they become enthusiastic supporters and can help promote agent adoption. (Read more about our experience with our works councils and the Microsoft 365 Copilot rollout.)

Defining your messaging

Work with your internal communications team to come up with a well-planned messaging framework for your agent rollout. Based on our experience, it’s likely you’ll need to communicate across a wide variety of teams and organizations like HR, IT, facilities, finance, and so on.

It’s important to be clear about how you’re positioning the product for your employees. This will allow you to develop both overall messaging for general use and content tailored to specific teams or employee roles. The more sophisticated your messaging, the more likely it is to be effective in encouraging user adoption of the agent in their regular workflow.

Listening to feedback

As Customer Zero for the company, our employees are our best testers and sources of feedback during our product development process. The Employee Self-Service Agent was no different, and we continue to gather crucial feedback and user data throughout the internal adoption process.

Because the agent is a tool centered on helping your workers resolve challenges and get quick answers to questions, you’ll want to set up your own systems for capturing their feedback and make sure the agent is meeting a high-quality bar.

We found that setting yourself up for success when it comes to listening to your employees involves two major aspects: Developing and deploying a system for gathering employee sentiment about the product, and then creating a system for analyzing that feedback and funneling the findings back to your IT team.

Some of the types of feedback and methods we used to gather it during the development process included:

  • User-testing data
  • User satisfaction ratings
  • User surveys, interviews and other research
  • Voice of the customer (in-product feedback)
  • Pilot projects and focus groups (smaller segments of users)
  • IT support incidents
  • Usage data and telemetry
  • Community-based early adopter feedback (similar to our Copilot Champs community)
  • Social media feedback and comments

You can choose from among these options to set up your own feedback mechanisms, or come up with something customized to your implementation.
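Whatever mix of channels you choose, you'll want a consistent way to roll raw feedback up into a trackable number. Here's a minimal sketch of summarizing in-product ratings; the 1-to-5 scale and satisfaction thresholds are illustrative assumptions, not our production formula:

```python
def satisfaction_summary(ratings: list[int]) -> dict:
    """Summarize 1-5 in-product ratings into a few trackable numbers.

    Treats 4-5 as satisfied and 1-2 as dissatisfied (illustrative thresholds).
    """
    if not ratings:
        return {"count": 0, "average": 0.0, "pct_satisfied": 0.0, "pct_dissatisfied": 0.0}
    n = len(ratings)
    return {
        "count": n,
        "average": round(sum(ratings) / n, 2),
        "pct_satisfied": round(100 * sum(r >= 4 for r in ratings) / n, 1),
        "pct_dissatisfied": round(100 * sum(r <= 2 for r in ratings) / n, 1),
    }
```

Feeding a summary like this back to your IT team on a regular cadence is what turns the listening channels above into an actual feedback loop.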

Calibrating your usage goals

Remember that the Employee Self-Service Agent is not an all-purpose AI tool like Microsoft 365 Copilot, which your employees might use a dozen times a day. Instead, they may only need help from HR or IT support tools and information sources a few times a week (or even less). Your usage targets should be calibrated accordingly.

At the same time, the more categories of assistance you add to the agent, the more your usage levels can grow—along with user expectations.

When we decided to add campus support (dining, transportation, and facilities-related needs and queries), one of the motivators was to provide information that users might need on a more regular basis. This addition helped us increase adoption and build daily usage habits for the agent among our employees.

Making the agent your front door for employee assistance

Your employees may have longstanding habits around the ways that they seek assistance, such as moving quickly to email a service request, or immediately engaging a live support technician. There might even be someone helpful in the office next to them that they lean on for IT support. We’re aware that breaking such habits can be a challenge.

That’s why we decided to change our own employee-assistance workflows. In the case of HR, we are planning to remove the option to email a centralized alias for help, which was the default in the past. This forcing function will instead prompt our employees to turn to the agent first for assistance, creating a “front door” for all our HR service requests.

For our IT support function, we are switching from a Virtual Agent chatbot to the Employee Self-Service Agent, which should provide users with a richer experience and a higher rate of resolution.

Of course, our main goal is for the agent to handle an employee’s issue without having to seek further assistance. But what happens when the agent cannot resolve their problem or handle their request? That’s why we’ve also implemented a “smooth handoff”—either to create a service request or connect the user to a live agent for specialized assistance.

There are three key steps in this process:

  1. The Employee Self-Service Agent can identify when the user has reached a point where they need to move to a higher level of assistance via a live agent or a service request. (Note that we also allow the employee to make that determination for themselves.)
  2. We then give them different options for how they want to connect to live support.
  3. When the employee is transferred to a live technician, the Employee Self-Service Agent is able to pass on the chat history from its session with the user. That way, the technician or staff support can quickly get up to speed on the situation, see what the employee has already asked about and tried, and start helping them immediately.
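As a rough sketch of the handoff pattern in step 3, assume the agent keeps a running transcript and attaches a copy to whatever escalation record it creates. All the names and fields here are hypothetical, not the product's actual API:

```python
from dataclasses import dataclass, field


@dataclass
class ChatSession:
    """A running agent conversation with one employee."""
    employee: str
    transcript: list = field(default_factory=list)

    def add_turn(self, speaker: str, text: str) -> None:
        self.transcript.append({"speaker": speaker, "text": text})


def escalate(session: ChatSession, channel: str) -> dict:
    """Create an escalation record that carries the chat history forward,
    so the live technician can see what was already asked and tried."""
    return {
        "employee": session.employee,
        "channel": channel,  # e.g. "live_chat" or "service_request"
        "history": list(session.transcript),  # copy, so the session can continue
    }
```

Passing the full history along is what lets the technician skip the "start from the beginning" conversation and begin helping immediately.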

Enabling the employee to quickly and smoothly transition to a higher level of support without leaving the chat increases user satisfaction and makes them more likely to return to the agent the next time they need assistance.

Strategic outreach to employees

Of course your workers, like ours, are busy with their day-to-day job functions. They may be resistant to trying a new tool or going through special training on how to access employee assistance. Or they may just not know about it.

Because of our regionally phased rollout of the agent, email was one of the most effective tools we used to connect with specific audiences and make them aware of the tool. With specific email lists, we could make sure that only employees in that phase of the rollout were seeing the message.

A key aspect of getting our employees to adopt any new tool is reinforcement—the process of sustaining behavior change by providing ongoing incentives, recognition, and support. Some of the reinforcement strategies we used for the agent included:

  • Targeted communications: Emails and organizational messages invited employees to try the agent as they received access
  • Multi-channel campaigns: Promotion of the agent via portals, newsletters, digital signage, and more to keep it at the forefront of employee minds
  • Training: Workshops and micro-learning sessions about the agent
  • Social campaigns: Posts highlighting the tool to increase awareness and gather employee feedback (see details below)
  • Leadership support: Managers modeled usage of the agent and promoted it regularly
  • Processes: The tool was part of regular employee workflows

An example of a fun Viva Engage post that our internal communications team created to encourage daily usage of the Employee Self-Service Agent during the holiday season.

One very important communications channel that we used in our adoption efforts was Microsoft Viva Engage. We set up a private Engage community for the Employee Self-Service Agent, then populated it with each new wave of users as they were given access to the tool (eventually all were given access when the tool went companywide).

We used this channel for various kinds of messaging:

  • General product awareness
  • Updates on new or changing functionality
  • Answering questions or addressing frustrations (two-way dialogue between users and the product team)
  • Fun and helpful “tips and tricks” that users could try (these could come from the product team, leadership, or individual product “champions”)

We also inserted messages about the new agent into our regular communications with different audiences, including HR professionals, IT support personnel, and internal comms staff at the company. And we regularly messaged company leaders about it, so they could encourage their teams and direct reports to support the effort and evangelize for the tool.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two. That’s why ongoing communications to users was important.”

Prerna Ajmera, general manager, HR digital strategy and innovation

Of course, as a natural language chat tool, the Employee Self-Service Agent doesn’t require formalized training. The product itself is designed to guide users and allow them to experiment, simply by stating their needs in plain language. Most employees will already be familiar with AI tools like Microsoft 365 Copilot, so effectively using an AI-powered employee-assistance agent should be a low bar to clear.

Managing expectations

Your Employee Self-Service Agent rollout will be an ongoing journey as you add topic areas, functionalities, and other product features. Your product roadmap will evolve as you learn more about what your employees need with this kind of AI solution.

One factor to consider is how to set realistic user expectations about what the agent can do while the product matures and improves. As we gradually rolled out the tool, we messaged that the agent was in “early preview,” which helped avoid employee disappointment when it couldn’t handle a specific request.

“One thing we did was make clear to our employees that even though the agent was not able to handle an issue today, it might be able to in a month or two,” Ajmera says. “That’s why ongoing communications to users was important, as new capabilities were added and speed and accuracy improved.”

We also created messaging for early users indicating that their testing was an integral part of making the tool more effective. This created a positive feedback loop while also keeping employee expectations reasonable.

How we measured success

Carefully tracking and analyzing your success metrics throughout your development and release of the product is a high priority. Without this step, you are working in the dark.

At Microsoft, we identify the key performance indicators (KPIs) for a particular product and then use them as our North Star for any internal release. But the specifics of those KPIs can vary from product to product.

Graphic shows the improved success rates that employees have when seeking assistance from the Employee Self-Service Agent versus traditional support channels.
Early results from our internal deployment of the Employee Self-Service Agent showed marked increases in success rates when users sought assistance from an AI tool as compared with existing support channels.

For example, measuring monthly active users (MAU) might be extremely important for an all-purpose productivity tool like Microsoft 365 Copilot. But for an employee-assistance tool, the goal is not necessarily regular use, because employees aren’t constantly facing challenges that require help (we hope). Usage statistics may also be affected by certain events or cyclical needs, such as annual employee reviews or a major technology change (like a significant Windows update).

With this in mind, we identified certain key metrics for the Employee Self-Service Agent. In this case, the top KPIs included:

  • Percentage of support tickets deflected
  • Net satisfaction score
  • Latency period
  • Reliability
  • Total time savings
  • Total cost savings
  • Identified and prioritized issues (reported back to product group)

Overall, we focused on the rate at which employees were able to resolve issues without opening a support ticket, as this would likely generate the greatest return on time and cost savings. We came up with an overall target across the different verticals of 40% ticket deflection, and we’re making solid progress toward this goal as we continue to refine and improve the agent.
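The deflection KPI itself is simple arithmetic; a quick sketch (the numbers in the usage example are hypothetical, not our actual results):

```python
def deflection_rate(resolved_by_agent: int, total_requests: int) -> float:
    """Percentage of support requests resolved by the agent alone,
    without a service ticket or a live-agent escalation."""
    if total_requests == 0:
        return 0.0
    return round(100 * resolved_by_agent / total_requests, 1)
```

For example, `deflection_rate(4000, 10000)` yields 40.0, the overall target we set across our verticals. The useful part is tracking this per vertical and per rollout phase, so you can see which knowledge sources are pulling their weight.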

Part of our measurement process is a monthly progress meeting of key project stakeholders, where all KPIs are evaluated to see if our targets are being met. If the results do not meet expectations, we identify the potential causes and discuss what adjustments need to be made to address these shortfalls.

Key takeaways

Here are some key things to remember when it comes to adoption efforts for your Employee Self-Service Agent:

  • Don’t reinvent the wheel. Most of your change management and adoption strategies for the agent will be the same across different regions and help categories.
  • Line up product sponsors. Finding leaders and others across the organization to help you promote the Employee Self-Service Agent within their own groups, functions, and regions can make a big difference in gaining employee trust and encouraging adoption.
  • Set up proper listening channels. You’ll want to gather as much feedback as possible from your employees as you roll out the agent so you can understand what is working well and what needs improvement. This kind of feedback loop can also make your employees feel heard and help them shape the tool.
  • Make the shift to agent-first help. Employee habits for seeking assistance can be resistant to change. We decided that turning off the “email to create a service ticket” workflow was a great way to nudge our workers to recognize the agent as the first option for their assistance needs.
  • Be strategic in your communications. Use tools like email, Viva Engage, and other appropriate communications channels to target your communications and encourage a two-way conversation with employees about the agent. Sharing fun tips and encouraging peer support are other ways to increase awareness and engagement with the product.
  • Identify your key metrics. We determined our benchmarks for success for this particular type of agent, then tracked them and made the results available to key stakeholders. This allowed us to measure the impact and effectiveness of the product.

Learn more

How we did it at Microsoft

Although some of the blog posts below are about adoption efforts related to Microsoft 365 Copilot, they can give you ideas on how we promote internal adoption of agentic AI products at Microsoft.

Further guidance for you

Begin your journey with the Employee Self-Service Agent

Agentic AI offers incredible promise to transform employee productivity, giving individuals access to powerful tools that enable them to accomplish more. We believe the Employee Self-Service Agent is another step along that path, allowing workers to get instant help with tasks that used to be cumbersome and time-consuming.

Photo of Fielder

“We’re excited to get the Employee Self-Service Agent out and into the hands of our customers, so that they can reap the same benefits that we’re already seeing from it. As we continue to refine the product and expand the number of verticals it can cover, we expect to realize exponential efficiency gains and capture even more cost savings across our entire organization.”

Brian Fielder, vice president, Microsoft Digital

Now that you’ve read about our experience deploying the tool, it’s time to start your own journey. With a successful implementation, your people will spend less time on the phone with support staff or hunting through web pages for help with routine employment tasks, and more time on their productive work, with fewer job-related pain points and frustrations.

You can benefit from the lessons we’ve learned and the many helpful features and capabilities that we’ve built into this product, all of which are designed to make your implementation as fast, easy, and effective as possible.

“We’re excited to get the Employee Self-Service Agent out and into the hands of our customers, so that they can reap the same benefits that we’re already seeing from it,” says Brian Fielder, vice president of Microsoft Digital. “As we continue to refine the product and expand the number of verticals it can cover, we expect to realize exponential efficiency gains and capture even more cost savings across our entire organization.”

Key takeaways

Here are some of the essential top-level learnings we gleaned from our deployment of the Employee Self-Service Agent, which you should keep in mind as you start out on your own deployment path:

  • Identify and engage the right people. You’ll need buy-in and advocacy from leaders across the organization; the involvement of key stakeholders from HR, IT, legal, and compliance; and technical guidance from admins, license administrators, environment makers, and knowledge-base subject matter experts.
  • Develop your plan. Understand the major phases of governance, implementation, and adoption of the tool, and make sure that you have adequate resources and support for each phase.
  • Verify the quality of your content. Your chances of success will be better if you undertake a thorough content assessment to address the currency, accuracy, and structure of all relevant knowledge bases. Pay particular attention to the topics and tasks that are in greatest demand by employees when they access help services.
  • Consider a phased rollout. Releasing your Employee Self-Service Agent to progressively larger groups of workers across your organization allows you to gather data and feedback and improve the performance and relevance of the agent over time. You can also expand the number of categories that your agent covers as you go, increasing the impact and appeal of the tool.
  • Communicate strategically to promote adoption. Convincing employees to break longstanding habits when seeking help is a challenge. Email is helpful for targeting specific groups of employees, but be sure to use tools like Viva Engage to create community, answer questions, provide fun tips and tricks, and announce new capabilities and options.
  • Set clear goals and measure against them. Come up with a targeted set of KPIs that reflect your organization’s needs and aspirations, then develop a plan to capture data for each of these indicators and a regular reporting cadence to keep stakeholders informed of progress toward your goals.

Learn more

How we did it at Microsoft

Try it out

We’d like to hear from you!

The post Deploying the Employee Self‑Service Agent: Our blueprint for enterprise‑scale success appeared first on Inside Track Blog.

]]>
22492
Shaping AI management at Microsoft with Agent 365 and Copilot controls http://approjects.co.za/?big=insidetrack/blog/shaping-ai-management-at-microsoft-with-agent-365-and-copilot-controls/ Mon, 09 Mar 2026 13:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22560 AI is moving fast at Microsoft. Every month, we’re discovering new ways that our employees are using Microsoft 365 Copilot and rapidly emerging agentic tools to work smarter, automate routine tasks, and unlock new patterns of productivity. As our ecosystem of AI tools expands, so does our responsibility and opportunity. We have to guide the […]

The post Shaping AI management at Microsoft with Agent 365 and Copilot controls appeared first on Inside Track Blog.

]]>
AI is moving fast at Microsoft. Every month, we’re discovering new ways that our employees are using Microsoft 365 Copilot and rapidly emerging agentic tools to work smarter, automate routine tasks, and unlock new patterns of productivity.

As our ecosystem of AI tools expands, so does our responsibility and opportunity. We have to guide the process with the right structure, clarity, and confidence.

A photo of Fielder.

“With Agent 365, IT leaders can confidently embrace this innovation through a unified control plane that provides the capabilities that enterprises need to ensure agents are governed, observable, and secure—regardless of which tools, frameworks, or models were used to create them.”

Brian Fielder, vice president, Microsoft Digital

We approach the governance of AI as a task we’re shaping in real time while observing the different ways our people are using AI in their daily work.

That’s the advantage of being Customer Zero here in Microsoft Digital, the company’s IT organization. We’re living this transformation across Microsoft 365 every day, evolving our governance model alongside the evolution of AI and agents.

“With Agent 365, IT leaders can confidently embrace this innovation through a unified control plane that provides the capabilities that enterprises need to ensure agents are governed, observable, and secure—regardless of which tools, frameworks, or models were used to create them,” says Brian Fielder, vice president of Microsoft Digital.

Our governance approach is built around two complementary control planes: Microsoft Agent 365 for agents and Copilot controls for Microsoft 365 Copilot.

A photo of Johnson.

“We’ve seen the rapid pace of innovation firsthand. As Copilot evolves and agents expand, the control planes we use must evolve also. New AI and agent capabilities raise the bar for governance and management, so at Microsoft Digital, we’re working with our product teams to evolve the management to keep the company secure, informed, and ready for whatever comes next.”

David Johnson, principal architect, Microsoft Digital

These control planes are supported by the four fundamental concepts that we apply to every enterprise system we operate: security, governance, management, and observability.

“We’ve seen the rapid pace of innovation firsthand,” says David Johnson, principal architect in Microsoft Digital. “As Copilot evolves and agents expand, the control planes we use must evolve also. New AI and agent capabilities raise the bar for governance and management, so at Microsoft Digital, we’re working with our product teams to evolve the management to keep the company secure, informed, and ready for whatever comes next.”

This model gives us a consistent way to support new capabilities, encourage responsible experimentation, and help our employees adopt AI and agents with fewer hurdles.

Expanding our AI governance practices

As AI use evolves within our organization, we’re seeing clear patterns emerging. Copilot goes well beyond chat. It can execute tasks, create and modify content directly inside apps, connect systems, and coordinate multi‑step work through agents. The AI ecosystem is becoming more effective at boosting productivity with model choices, agent-to-agent orchestration, and agent mode within applications that leverage natural language to complete tasks.

These patterns are exciting, move fast, and expand how we think about governance.

The shift became clear as teams across Microsoft began experimenting with new AI capabilities in the last few years. Accelerating Copilot usage showed us how quickly people adopt tools to help them work better and faster. Rapid agent growth showed us how much value workers get when AI takes on more complex, multi‑step tasks. These expansions pushed us to evolve our security, governance, and management approaches alongside the technology.

That’s what led us to define two control planes for Copilot and agents: not because one replaces the other, but because each serves a distinct role in the ecosystem. Copilot goes beyond chat, surfacing intelligence directly inside apps and workflows to help people work smarter in the flow of their work. Agents take on broader responsibilities across services, teams, and data boundaries.

By recognizing the different types of work that Copilot and agents do, we’re better equipped to manage and govern them. We can apply consistent principles, tailor the controls to each type of tool, and give employees a clearer understanding of how each AI capability behaves. It’s an approach that grows with technology, instead of forcing everything into a single frame.

Building governance on foundational pillars

As Copilot and agents expand across Microsoft 365 and the rest of our product offerings, we’ve anchored our approach on the fundamentals of security, governance, management, and observability. These principles have shaped our enterprise systems for years. What’s changing is how we apply them to a fast‑moving AI ecosystem.

Security and governance

Security and governance are the baseline for us at Microsoft. Every new capability—whether it’s Copilot helping you draft, find, or create content, or an agent running an automated workflow—must adhere to security and governance principles.

A photo of Powers.

“The Microsoft 365 admin center is becoming the place where controls come together. Policies, observability, and configuration are in a single experience, so admins don’t have to hunt across multiple portals. That consolidation makes it easier for us to understand how AI is behaving in our tenant and what controls we have available to guide it.”

Mike Powers, senior systems engineer and AI admin, Microsoft Digital

Products like Microsoft Purview and Defender allow us to better understand what data our AI tools are accessing, for how long, and where additional guardrails might be needed as features and usage evolve.

Management

Management completes the foundation, and measurement is how we track our progress.

As AI tools take on more responsibility, we needed a unified way to manage access, lifecycle, and configuration. Agent 365 is evolving the Microsoft 365 admin center to serve as a central focal point for agent management and observability. It brings together agent information and controls that were previously scattered across different admin experiences and puts them in one coherent place.

“The Microsoft 365 admin center is becoming the place where controls come together,” says Mike Powers, a senior systems engineer and AI admin in Microsoft Digital. “Policies, observability, and configuration are in a single experience, so admins don’t have to hunt across multiple portals. That consolidation makes it easier for us to understand how AI is behaving in our tenant and what controls we have available to guide it.”

Measurement is how we track adoption, quality, and business value like time saved and reductions in operational costs. It’s how we identify what’s working, where to invest next, and how we can guide product teams with real‑world insights. We look carefully at active agents, usage patterns, assisted hours, sentiment, and the outcomes our people achieve with AI. Different audiences share the same goal: using telemetry to make AI better.

Together, these principles allow us to evolve our governance model without slowing innovation. They give us a steady foundation in a rapidly expanding environment—one where Copilot and agents will continue to grow, intersect, and unlock new ways of working.

Observability with Microsoft Agent 365

The widespread use of agents is an accelerating trend here at Microsoft. We use them to automate multi‑step tasks, build applications in plain language, connect systems, and streamline work that previously depended on manual coordination.

As the number of agents grows and becomes more autonomous, we need a management approach that matches their scale and autonomy. That’s what Microsoft Agent 365 gives us—a control plane designed for AI and agentic workloads that operate across platforms and traditional admin boundaries.

Agent 365 provides a registry for agents that lets us discover and understand how agents behave across Microsoft 365. It shows us who built them, who can use them, and what data they can access. From a single admin console, we can observe and manage agents created across different platforms. Day to day, Agent 365 gives AI admins agent observability we didn’t have before, and a way to connect insight to action.

“Agents represent a significant and growing workload that tenant administrators manage as part of day‑to‑day operations,” Powers says. “Agent 365 helps bring clarity to a diverse and rapidly scaling agent population by providing a centralized place to observe and manage how agents operate. This centralized approach is bringing together admin teams like never before so we can apply broad expertise to agent management.”

That clarity matters.

Agents behave differently than Copilot experiences. They can run continuously, trigger processes automatically, and touch systems across organizational boundaries. By treating them as advanced workloads, we can apply governance that supports experimentation without losing control over the ecosystem.

Agent 365 gives teams the confidence to build agents, knowing there’s a clear, consistent framework behind them. It helps ensure agents scale responsibly, are discoverable, and align to the enterprise patterns that keep Microsoft secure and productive.

Keeping track of Copilot controls

We rely on Copilot controls to give us a unified way to govern how different Copilot experiences show up for employees.

Copilot controls aren’t a single product. They’re a fabric of controls, insights, and guardrails that help us guide Copilot usage as it grows. They bring together settings, reports, and policies that once lived across separate admin surfaces and connect them into one coherent system.


“Copilot controls bring everything into one place, so admins don’t have to jump across different reports. It gives them a holistic view of Copilot health. That includes licenses, sentiment, usage, and recommendations. It’s everything they need to understand how Copilot is working in our tenant.”

Amy Ceurvorst, director of business programs, Microsoft Digital

At its core, Copilot controls help us manage three things:

  • Who has access
  • How the experience is configured
  • How we measure adoption and value

It’s how we track whether licenses are assigned as expected, whether teams are using Copilot regularly or occasionally, and where configuration gaps may exist. It also recommends changes that can make Copilot more effective and secure.

As Copilot evolves, our Copilot controls will evolve with it. New features, security patterns, and use cases all plug into the same foundation. That gives admins a rhythm they can rely on, even as the technology continues to move rapidly.

It also gives business leaders clearer visibility into how Microsoft 365 Copilot helps people work—how often it’s used, what tasks it supports, and where impact shows up.

“Copilot controls bring everything into one place, so admins don’t have to jump across different reports,” says Amy Ceurvorst, a director of business programs in Microsoft Digital. “It gives them a holistic view of Copilot health. That includes licenses, sentiment, usage, and recommendations. It’s everything they need to understand how Copilot is working in our tenant.”

That clarity is critical. It helps us guide Copilot responsibly without slowing its momentum. It gives our admins confidence in how the experience behaves. It gives our engineering teams the feedback they need to keep improving the platform. And it gives our employees a secure, well‑governed environment where they can adopt Copilot at their own pace.

Applying Agent 365 and Copilot controls as Customer Zero

We use Agent 365 and Copilot controls every day. They help us understand what AI is doing inside Microsoft, how these tools are evolving, and where we need to focus our efforts next.

These systems give us visibility we didn’t have a year ago, as well as a way to move faster without losing alignment across security, IT, and business teams.


“Measurement tells us what’s really happening. It shows us where people are finding value and where they need help. We can see the friction points, the successful patterns, and the opportunities that aren’t obvious from the surface. Having that level of insight lets us give the product team clear, actionable feedback.”

Tanya Roberts, senior business program manager, Microsoft Digital

Understanding how agents perform in the real world is essential. With Agent 365, we look at what’s being created, what’s actively being used, and which workflows people rely on most. We review how agents are scoped and published, and we check whether they’re operating as expected. These signals help us see emerging patterns—what’s gaining traction, what’s causing confusion, and where we need clearer controls.

The same applies to Copilot.

Copilot controls give us a consolidated view of how Copilot appears across the tenant—licenses, usage, sentiment, and recommended configuration changes. We use that data to advise product groups, flag issues early, and help business teams adopt Copilot in ways that make sense for their work. Internally, these insights reduce friction. Externally, they help shape the product.

Cross‑team collaboration is essential. Security teams watch for data exposure risks. IT teams manage configuration and rollout. Business units surface scenarios they want to enable. We coordinate across all these groups so Copilot and agents can scale smoothly.

Measurement ties it all together.

“Measurement tells us what’s really happening,” says Tanya Roberts, a senior business program manager in Microsoft Digital. “It shows us where people are finding value and where they need help. We can see the friction points, the successful patterns, and the opportunities that aren’t obvious from the surface. Having that level of insight lets us give the product team clear, actionable feedback. We can connect the dots between what people are trying to do and what the technology needs to support next.”

This is how we make AI real and practical. We learn from what happens in production, evolve the controls, and feed those lessons back into the product. It’s an ongoing cycle that grows stronger as adoption increases.

Looking forward

The AI landscape isn’t slowing down. Copilot will keep getting smarter and more broadly used across other apps and services. Agents will take on more complex work. And the boundaries between them will continue to blur as new capabilities emerge across Microsoft 365. That’s why our governance model has to evolve alongside the technology.

We’re designing for a future where AI spans more systems, touches more data, and supports more business processes. That means deeper integration between Agent 365 and our Copilot controls; more connected signals across security, management, and measurement; and governance patterns that hold up no matter how AI capabilities shift.

We expect the control planes we use will continue expanding in ways that give admins even more clarity. We’re looking forward to seeing richer telemetry across Copilot and agents. We plan to develop simpler ways to scope, publish, and update AI workloads. And we anticipate more advanced governance features, which will help organizations understand not just what AI is doing, but why it’s doing it.

Our work with Microsoft product teams as Customer Zero will continue to shape this evolution. As part of this process, we can provide real‑world insights about how AI behaves at enterprise scale. That feedback is already influencing how controls show up in the Microsoft 365 admin center and how Agent 365 is expanding to support new workloads. These feedback loops will only get stronger over time.

We’re building our AI management approach into a living system that adapts to new capabilities, new risks, and new opportunities. A system that supports innovation instead of slowing it down. And one that keeps Microsoft—and our customers—confident as the AI stack keeps changing.

Key takeaways

If you’re establishing governance for Copilot and AI agents in your organization, consider these actions to drive responsible, scalable adoption:

  • Start with governance fundamentals. Use security and governance, management, and observability as your pillars before layering in other tools or processes. Many of the same fundamentals that unblock Copilot also give a tenant confidence in deploying knowledge-only agents.
  • Understand the unique and intersecting governance paths for Copilot and agents. They share some fundamentals, but Copilot and agents have distinct AI controls, with different responsibilities, risks, and oversight needs.
  • Use measurement to guide decisions. Track usage, value, sentiment, and friction to understand how AI is performing and where you need to refine the experience.
  • Make governance a shared responsibility. Bring together security, IT, business leaders, and product teams to ensure clarity, alignment, and end‑to‑end control.
  • Design governance that evolves. Adopt controls that can adapt as Copilot grows, agents mature, and new AI capabilities enter the stack.
  • Prioritize clarity for builders and admins. Keep patterns simple, make guidance visible, and ensure that controls are easy to understand so your teams can adopt AI confidently.
  • Invest in the AI admin role. Create space for a dedicated AI admin role and skill up AI admins with deep, cross‑platform expertise, including SharePoint, Power Platform, Azure AI Foundry, Entra identity, and Exchange. Yes, agents will soon have their own mailboxes. In the evolving world of agents, effective administration depends on knowing how the agent lifecycle is tied to the platforms where agents are created and operate.

The post Shaping AI management at Microsoft with Agent 365 and Copilot controls appeared first on Inside Track Blog.

]]>
Protecting AI conversations at Microsoft with Model Context Protocol security and governance http://approjects.co.za/?big=insidetrack/blog/protecting-ai-conversations-at-microsoft-with-model-context-protocol-security-and-governance/ Thu, 12 Feb 2026 17:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22324 When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself. Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents. That ease of communication, however, comes with a responsibility: Protect the […]

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
When we gave our Microsoft 365 Copilot agents a simple way to connect to tools and data with Model Context Protocol (MCP), the work spoke for itself.

Answers got sharper. Delivery sped up. New patterns of development emerged across teams working with Copilot agents.

That ease of communication, however, comes with a responsibility: Protect the conversation.

Questions came up: Who’s allowed to speak? What can they say? And what should never leave the room?

Microsoft Digital, the company’s IT organization, and the Chief Information Security Officer (CISO) team, our internal security organization, are leaning on those questions to help us shape our strategy and tooling around MCP internally at Microsoft.


“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability. Even one misconfigured server can give the AI the keys to your data.”

Swetha Kumar, security assurance engineer, Microsoft CISO

Our approach is intentionally straightforward.

Start secure by default. Use trusted servers. Keep a living catalog so we always know which voices are in the room. Shape how agents communicate by requiring consent before making changes.

We minimize what’s shared outside our walls, watch for drift, and act when something looks off. Our goal is practical governance that lets builders move fast while keeping our data safe.

That’s the risk we design for, and it’s why our controls prioritize clear ownership, simple choices, and visible guardrails.

“With MCP, the problem is not the inherent design; it’s that every improper server implementation becomes a potential vulnerability,” says Swetha Kumar, a security assurance engineer in the Microsoft CISO organization. “Even one misconfigured server can give the AI the keys to your data.”

Understanding MCP and the need for security

MCP is a simple standard that lets AI systems “talk” to the right tools and data without custom integration work. Think of it like USB‑C for AI. Instead of building a new connection every time, teams plug into a common pattern. That standardization delivers speed and flexibility—but it also changes the security equation.

Before MCP, every integration was its own isolated conversation.

“Now, one pattern can unlock many systems,” Kumar says. “It’s a win and a risk. When AI can reach more systems with less effort, we must be precise about who’s allowed to speak, what they can say, and how much gets shared.”

We frame this as communications security.

The question isn’t just, “Is this API secure?” It’s “Is this a conversation we trust?” We want to know which servers are in the room, what actions they’re permitted to take, and how we’ll notice if something changes. At the same time, we keep the cognitive load low for builders. They choose from trusted options, see clear prompts before an agent makes edits, and move on. Simple choices lead to safer outcomes.

“MCP enables granular control over the tools and resources exposed to the Large Language Model,” Kumar says. “But that means the developer is responsible for configuring it correctly—which tools an agent can see, what actions a server can take, and what context is shared.”
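To make that concrete, here’s a rough sketch of the discovery handshake MCP defines. The JSON-RPC method and result fields follow the public MCP specification; the server and its `get_case_history` tool are hypothetical:

```python
import json

# Hedged sketch of MCP tool discovery. The JSON-RPC method and result
# fields follow the public MCP specification; the tool itself is made up.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A server's response advertises each tool's name, description, and an
# input schema. This metadata is the tool's "voice": it is everything
# the agent has to decide what to trust.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_case_history",
                "description": "Read-only lookup of support case history.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"case_id": {"type": "string"}},
                    "required": ["case_id"],
                },
            }
        ]
    },
}

tool_names = [tool["name"] for tool in response["result"]["tools"]]
print(json.dumps(request), "->", tool_names)
```

Everything the agent knows about a tool comes from this metadata, which is why it gets vetted, pinned, and watched for drift.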

This approach helps both sides.

Product teams get a consistent way to extend their agents while security teams get consistent places to add guardrails—at discovery, access, and throughout the flow of requests and responses. Everyone operates from the same playbook.

When we treat MCP this way, we protect the conversation without slowing it down. We know who’s speaking. We know what they can do. And we can prove it.

Assessing MCP security across four layers

Every MCP session creates a conversation graph. An agent discovers a server, ingests its tool descriptions, adds credentials and context, and starts sending requests. Each step—metadata, identity, content, and code—introduces potential risk.

We evaluate those risks across four layers so we can catch failures early, contain blast radius, and keep conversations in bounds.

However, the big picture is just as important as the details.

“We take a holistic view of MCP security: start with the ecosystem, then specify controls across the four layers,” Kumar says. “The layers make the work concrete, but the goal stays the same—unified governance, shared education, and faster detect-and-mitigate when a server is at risk.”

Applications and agents layer

This is where user intent meets execution. Agents parse prompts, discover tools, select actions, and request changes. MCP clients live here, deciding which servers to trust and when to ask for user consent.

  • What can go wrong
    • Tool poisoning or shadowing. A server advertises safe‑looking actions but performs something else.
    • Silent swaps. A tool’s metadata changes and the client keeps trusting an altered “voice.”
    • No sandbox. The agent can request edits or run code without strong guardrails.
  • What we watch for
    • Unexpected tool descriptions or capabilities at connect time.
    • Edit attempts on critical resources without explicit user consent.
    • Abnormal tool‑selection patterns across sessions.
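One way to catch silent swaps is to fingerprint a tool’s metadata at approval time and compare it on every connect. This is an illustrative sketch, not our production implementation; the tool shown is hypothetical:

```python
import hashlib
import json

def tool_fingerprint(tool: dict) -> str:
    """Hash the fields that define a tool's 'voice' into a stable digest."""
    canonical = json.dumps(
        {key: tool.get(key) for key in ("name", "description", "inputSchema")},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Pinned at approval time.
approved = {
    "name": "get_case_history",
    "description": "Read-only lookup of support case history.",
    "inputSchema": {"type": "object", "required": ["case_id"]},
}
pinned = tool_fingerprint(approved)

# At connect time: a silently edited description no longer matches the pin,
# so the client can block the call instead of trusting the altered voice.
live = dict(approved, description="Look up case history and email results.")
drifted = tool_fingerprint(live) != pinned
```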

AI platform layer

The AI platform layer includes the AI models and runtimes that interpret prompts and call tools, along with orchestration logic and safety features.

  • What can go wrong
    • Model supply‑chain drift. Unvetted models, unsafe updates, or compromised fine‑tunes change behavior.
    • Prompt injection via tool text. Descriptions and responses steer the model toward unsafe actions.
  • What we watch for
    • Model provenance and update cadence tied to agent behavior changes.
    • Signals of jailbreaks or instruction overrides in prompts and intermediate messages.
    • Output drift linked to specific tools or servers.

Data layer

This layer covers business data, files, and secrets the conversation can touch.

  • What can go wrong
    • Context oversharing. Session data, files, or secrets get packed into the model’s context and leak to a third‑party server.
    • Over‑scoped credentials. Long‑lived tokens, broad scopes, or wrong audience claims enable lateral movement.
  • What we watch for
    • Size and sensitivity of context passed to tools.
    • Token hygiene, including short lifetimes, least‑privilege scopes, and correct audience claims.
    • Data egress patterns that don’t match a tool’s declared purpose.
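Token hygiene checks like these can be expressed as a few simple rules. The claim names (`aud`, `exp`, `iat`, `scp`) follow common JWT conventions; the audience value and the 15‑minute lifetime limit are illustrative assumptions:

```python
import time

def check_token_hygiene(claims, expected_aud, allowed_scopes, max_lifetime=900):
    """Flag common token problems: wrong audience, long lifetime, extra scopes."""
    issues = []
    if claims.get("aud") != expected_aud:
        issues.append("wrong audience")
    if claims.get("exp", 0) - claims.get("iat", 0) > max_lifetime:
        issues.append("lifetime too long")
    extra_scopes = set(claims.get("scp", "").split()) - allowed_scopes
    if extra_scopes:
        issues.append("over-scoped: " + ", ".join(sorted(extra_scopes)))
    return issues

now = int(time.time())
# A short-lived, least-privilege token passes; a day-long, over-scoped one doesn't.
good = {"aud": "api://mcp-gateway", "iat": now, "exp": now + 600, "scp": "cases.read"}
bad = {"aud": "api://mcp-gateway", "iat": now, "exp": now + 86400, "scp": "cases.read mail.send"}

good_issues = check_token_hygiene(good, "api://mcp-gateway", {"cases.read"})
bad_issues = check_token_hygiene(bad, "api://mcp-gateway", {"cases.read"})
```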

Infrastructure layer

The infrastructure layer includes compute, network, and runtime environments.

  • What can go wrong
    • Local servers with too much reach. Excessive access to environment variables, file systems, or system processes.
    • Cloud endpoints without a gateway. No TLS enforcement, rate limiting, or centralized logging.
    • Open egress. Servers call out to the internet where they shouldn’t.
  • What we watch for
    • All remote MCP servers registered behind the API gateway.
    • Runtime signals, such as authentication failures, burst traffic, or unusual geographies.
    • Network policies that restrict outbound calls to certain targets.

Across all four layers, the throughline is AI communications security. We decide who can speak and verify what was said—and keep listening for change.

Establishing a secure-by-default strategy

We start by closing the front door. We recommend every remote MCP server sits behind our API gateway, giving us a single place to authenticate, authorize, rate‑limit, and log. There are no direct calls and no blind spots.


“Everything we do starts with securing the MCP server by default and that begins by registering it in API Center for easier discovery. We rely solely on vetted and attested MCP servers, ensuring every call comes from a trusted footprint.”

Prathiba Enjeti, principal PM manager, Microsoft CISO

Next, we decide who gets a voice.

Teams choose from a vetted list of MCP servers. If someone connects to an unapproved endpoint, they receive a friendly nudge and a clear path to register it. No shaming—just fast correction and a better inventory the next time around.

Identity comes next. Servers expect short‑lived, least‑privilege tokens with the right scopes and audience. Admin paths require strong authentication, and where possible, we use proof‑of‑possession to bind tokens to the client and reduce replay risk. Secrets don’t live in code, keys rotate, and audit trails are in place.

“Everything we do starts with making the MCP server secure by default and that begins by registering it in API Center for easier discovery,” says Prathiba Enjeti, a principal product manager in the Microsoft CISO organization. “We only use vetted and attested MCP servers. That’s how we keep the conversation safe without slowing it down.”

On the client side, we slow agents at the right moments. Agents can’t touch high‑risk tools without explicit consent. Tool descriptions are verified on connection and compared to approved contracts. If a tool’s “voice” drifts, we block the call.

We also minimize what’s shared.

Context is trimmed to what the task requires. Sensitive data isn’t included by default, and third‑party servers get only what they need—not the whole transcript. Output filters and prompt shields sit alongside the model to prevent risky inputs from becoming risky actions.

Isolation completes the design. Local servers run in containers with tight file and network permissions. Hosted servers allow only the outbound calls they need, and inbound traffic flows through the gateway, with TLS and logging enforced.

Simple rules with visible guardrails.

“We only use vetted MCP servers,” Enjeti says. “That’s how we keep the conversation safe without slowing it down.”

How we run MCP at scale: architecture, vetting, and inventory

We keep MCP safe by making three things intentionally boring: architecture, vetting, and inventory. One defined path. One vetting flow. One living catalog.

Architecture

We recommend remote MCP servers sit behind an API gateway, giving us a single place to authenticate, authorize, validate, rate‑limit, and log. Transport Layer Security (TLS) is required by default, and for sensitive endpoints, we can require mutual TLS. Outbound egress is pinned to approved destinations using private endpoints and firewall rules, so servers can’t “call anywhere.” Runtime protection continuously watches for credential abuse, injection patterns, burst traffic, and odd geographies.

Identity is established up front. We issue short‑lived, least‑privilege tokens with the correct audience and scopes, and admin paths require strong authentication. Where supported, tokens are bound to the client to reduce replay risk. Services use managed identities or signed credentials; secrets don’t live in code, and keys rotate on schedule.

Model‑side safety travels with every conversation. Content safety and prompt shields help models ignore risky inputs, while orchestration enforces a per‑tool allowlist, so an agent can’t call tools that aren’t in policy—even if the model suggests it. We also track model versions, allowing behavior changes to be correlated with updates.

Clients enforce consent at the edge. “Ask before edits” is enabled by default for write, delete, and configuration changes. When an agent connects, it verifies tool descriptions against the approved contract.
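The “ask before edits” pattern is simple to sketch. This is an illustrative model, not the actual client code; the tool names and action labels are hypothetical:

```python
# Actions with side effects pause for explicit consent; reads flow through.
SIDE_EFFECTING = {"write", "delete", "configure"}

def call_tool(tool_name, action, ask_user, execute):
    """Gate a tool call: side-effecting actions need a 'yes' from the user."""
    if action in SIDE_EFFECTING:
        if not ask_user(f"{tool_name} wants to {action}. Allow?"):
            return {"status": "blocked", "reason": "consent denied"}
    return {"status": "ok", "result": execute()}

deny_all = lambda prompt: False  # simulate a user who declines every prompt

# Reads succeed without a prompt; the write stops until the user consents.
read_result = call_tool("get_case_history", "read", deny_all, lambda: "case data")
write_result = call_tool("update_case", "write", deny_all, lambda: "saved")
```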

Observability ties it all together. We’re working toward logging tool calls, resource access, and authorization decisions end‑to‑end with correlation IDs. Detections flag abnormal tool selection, unexpected data egress, or edits without consent. Every server has an owner, a contract, and an approval record, and metadata changes automatically trigger re‑review. Kill switches live at both the client and the gateway when we need them.

Vetting

We don’t “connect and hope.”

Before any MCP server can speak in our environment, it earns trust. Owners declare what the server does (tools and actions), what it touches (data categories and exports), how callers authenticate (scopes and audience), and where it runs (runtime and on‑call ownership).

We start with static checks: manifests must match the contract, side‑effecting actions must be consent‑gated, tokens must be short‑lived and properly scoped. An SBOM (Software Bill of Materials) must be present, dependencies must be current, and no credentials can be embedded in code.

Then we test like a client would. We snapshot tool metadata on connect and compare it to the approved contract, probe for prompt‑injection and tool‑poisoning, and verify that “ask before edits” triggers for destructive actions.

We also confirm context minimization, validate that egress is pinned to approved hosts, and test resilience under load, including health checks, retry behavior, and isolation using containers with least‑privilege file and network access. Servers are published only when security, privacy, and responsible AI reviews are complete, runbooks and on‑call are in place, and the registry entry is created and pinned.
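A minimal version of those static checks might look like the following. The manifest shape, the `keyvault:` reference convention, and the tool names are all assumptions for illustration:

```python
def preflight(manifest, contract):
    """Static checks: tools match the contract, side effects are consent-gated,
    and secrets appear only as references, never as literals."""
    failures = []
    declared = {tool["name"] for tool in contract["tools"]}
    for tool in manifest["tools"]:
        if tool["name"] not in declared:
            failures.append(f"undeclared tool: {tool['name']}")
        if tool.get("side_effects") and not tool.get("requires_consent"):
            failures.append(f"missing consent gate: {tool['name']}")
    for name, value in manifest.get("env", {}).items():
        if not str(value).startswith("keyvault:"):  # secrets by reference only
            failures.append(f"embedded credential: {name}")
    return failures

contract = {"tools": [{"name": "get_case_history"}]}
manifest = {
    "tools": [
        {"name": "get_case_history"},
        {"name": "delete_case", "side_effects": True},  # undeclared and unguarded
    ],
    "env": {"API_KEY": "hunter2"},  # literal secret instead of a reference
}
failures = preflight(manifest, contract)
```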

Inventory


“Inventory is the foundation—if we miss a server, we miss the conversation. Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system.”

Priya Janardhanan, principal security assurance engineering manager, Microsoft CISO

You can’t govern what you can’t see, and MCP shows up in more places than a single system of record. To solve that, we’re building the map from signals and stitching them into one catalog.

“Inventory is the foundation—if we miss a server, we miss the conversation,” says Priya Janardhanan, a principal security assurance engineering manager at Microsoft CISO Operations. “Every server, regardless of where it’s running or how it’s deployed, must be accounted for in our system. Without a complete inventory, we lose visibility into critical operations, risk exposing sensitive data, and undermine our ability to ensure compliance and security.”

Our goal state is that endpoint telemetry catches developer‑run servers on laptops and workstations. Repos and CI pipelines reveal intent before anything ships. IDEs (Integrated Development Environments) surface local extensions and configured endpoints. The gateway and our registries anchor what’s approved for business data, while low‑code environments tell us which connectors are in use and where they point.

We normalize and correlate those signals with stable IDs for servers, tools, and owners. Ownership is proven through repositories, gateway services, and environment administrators—on‑call contacts included. Exposure is scored based on data touches, scopes requested, egress rules, and change history, so high‑risk items rise to the top of the queue.

Freshness is tracked with last‑seen timestamps, and stale entries are retired over time. Builders can discover and reuse approved servers; reviewers can see what changed since the last approval, and admins get instant visibility into coverage and hotspots.

We’re working toward automated identification and notification for unknown servers. In the ideal state, a registration stub is created when we detect an unknown server on an endpoint. Then the likely owner is notified, and direct calls are blocked until the server is vetted through an automated process. If tool metadata changes after approval, high‑risk actions are paused and routed for re‑review, then auto‑resumed once approved.
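Exposure scoring of this kind can be sketched in a few lines. The weights and fields below are illustrative, not the actual formula we use:

```python
def exposure_score(server):
    """Rank a server for review: sensitive data, broad scopes, open egress,
    and unreviewed changes all raise the score. Weights are illustrative."""
    score = {"public": 0, "internal": 2, "confidential": 5}.get(server["data_class"], 5)
    score += len(server["scopes"])
    score += 0 if server["egress_pinned"] else 4
    score += 3 if server["changed_since_review"] else 0
    return score

servers = [
    {"name": "docs-search", "data_class": "internal", "scopes": ["files.read"],
     "egress_pinned": True, "changed_since_review": False},
    {"name": "case-admin", "data_class": "confidential",
     "scopes": ["cases.read", "cases.write"],
     "egress_pinned": False, "changed_since_review": True},
]

# Highest-risk servers rise to the top of the review queue.
review_queue = sorted(servers, key=exposure_score, reverse=True)
```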

“It all revolves around inventory as the foundation,” Janardhanan says. “If we miss a server, we miss the conversation.”


“Agent 365 tooling servers will allow centralized governance for IT admins. That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy.”

Aisha Hasan, principal product manager, Microsoft Digital

Architecture gives us stable choke points. Vetting keeps weak servers out. Inventory keeps our map current. It’s a single pattern for builders and a unified playbook for security.

Governing agents in low‑code and pro-code scenarios

Makers move fast—that’s the point. A Customer Support team needed a Copilot action to pull case history, so they opened Copilot Studio, selected an approved MCP connector, and shipped a first version before lunch. No tickets. No detours. Governance showed up in the flow, not as a blocker.

“Agent 365 tooling servers will allow centralized governance for IT admins,” says Aisha Hasan, a principal product manager at Microsoft Digital. “That means a single pane where they can see what’s approved, who owns it, what data it touches, and then apply policy. We’re moving toward that consolidation so innovation continues while governance gets simpler and more consistent.”

We place guardrails where makers already work. In Copilot Studio, trusted and verified first‑party MCP servers are allowed in developer environments to accelerate innovation and encourage experimentation. Riskier or more complex MCP integrations are available in Copilot Studio custom environments and other pro‑code tools such as the Microsoft 365 Agents Toolkit in VS Code and Microsoft Foundry, but only with clear checks: service ownership, security and privacy review, responsible AI assessment, and consent gating for high‑impact actions.

The allowlist is our north star.

Approved MCP servers and connectors live in one catalog with documented owners, scopes, and data boundaries. Makers choose from that shelf. If an MCP server uses an unverified tool, we enforce endpoint filtering. If there’s a misconfiguration, we open a task for the owner and help them build securely.

Permissions stay tight without adding cognitive load. Tokens are short‑lived and scoped to the task. Context is trimmed so only the necessary fields flow to the tool. Third‑party servers never get the full transcript. If a connector’s capabilities change, the runtime compares the new “voice” to what we approved. MCP clients should pause risky actions, notify the owner, and resume automatically once reviewed.
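Context minimization reduces to a simple rule: only the fields a tool’s contract declares ever leave the session. A sketch, with hypothetical field names:

```python
def trim_context(session, allowed_fields):
    """Forward only the fields a tool's contract declares; drop the rest."""
    return {key: value for key, value in session.items() if key in allowed_fields}

session = {
    "case_id": "12345",
    "user_email": "alias@example.com",
    "full_transcript": "entire conversation so far",
    "auth_token": "redacted",
}

# The case-history tool declared only case_id, so that is all it receives.
sent_to_tool = trim_context(session, allowed_fields={"case_id"})
```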

With agent inventory in the Power Platform Admin Center and the registry in Agent 365, admins get a clean view of which connectors are active, who owns them, what data they touch, and how often they’re called. Organization policies such as DLP and MIP can be enforced in a unified way, with a re‑review when capabilities change. The goal is simple: let builders innovate confidently while maintaining security and compliance.

“MCP servers are powerful AI tools that enable agents to seamlessly integrate and interact with enterprise data and transform business workflows,” Hasan says. “That means the same enterprise data and governance principles are applied equally to MCP servers and other connectors. A robust inventory, an agile policy framework, and an automated workflow for enforcement are cornerstones for successfully governing agents at scale.”

Securing MCP at scale: Operating, monitoring, and enabling

Our work doesn’t stop at go‑live. Once an MCP server is in the catalog, we operate the conversation like a service: measurable, observable, and responsive. Identity and policy guard the front door, but runtime is where we prove the controls work without slowing anyone down.

In practice, operating MCP at scale comes down to four motions:

Observe every tool call end to end. We make the flow observable. Every tool call carries a correlation ID from client to gateway to server and back. Prompts, tool selections, authorization decisions, and resource access should be logged with consistent schemas. Golden signals—latency, errors, saturation—sit alongside safety signals like unexpected egress or edits without consent. Owners and security teams see the same dashboards.
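The correlation pattern is straightforward to sketch: every hop emits a structured record carrying the same ID. Field names here are illustrative:

```python
import json
import time
import uuid

def log_tool_call(hop, correlation_id, **fields):
    """Emit one structured record per hop, all sharing a correlation ID."""
    record = {"ts": time.time(), "hop": hop, "correlation_id": correlation_id, **fields}
    print(json.dumps(record))
    return record

# One conversation, three hops, one ID to trace it end to end.
cid = str(uuid.uuid4())
client_rec = log_tool_call("client", cid, tool="get_case_history", decision="allowed")
gateway_rec = log_tool_call("gateway", cid, auth="ok", latency_ms=42)
server_rec = log_tool_call("server", cid, result="200")
```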

Detect drift and abnormal behavior early. Detection lives close to the work. We flag abnormal tool patterns, spikes in write operations, burst traffic from new geographies, and context sizes that don’t fit a task. We continuously compare a tool’s “voice” at connect time to the approved version; drift automatically pauses risky actions and pings the owner. Cost controls double as guardrails, using rate limits and budgets to cap blast radius and surface runaway loops early.

Respond with precision instead of blunt shutdowns. Response is graded, not binary. We can block destructive actions and allow reads, or throttle a noisy client without killing the session. Kill switches exist at both the client and the gateway. Playbooks are pre‑approved and integrated into the consoles owners already use, and dry runs are part of muscle memory, so the first switch flip doesn’t happen during an incident.

We treat model behavior as part of operations. Content safety and prompt shields run in production, not just in tests. We pin model versions and watch for output drift after updates. If a model starts suggesting tools out of character, the owner gets paged with the exact prompts and calls that triggered it.

Telemetry respects privacy. Logs avoid sensitive payloads by default and mask what must pass through for forensics. Access is role‑based, retention follows policy, and audit readiness is designed in on day one.

Enable builders through templates, education, and reuse. Adoption and education run in parallel. Builders get templates that enable best practices: sample manifests with consent gates, CI checks for token scope and SBOMs, and gateway stubs with sane defaults. A “ten‑minute preflight” runs locally to verify contracts, test consent flows, and check egress before a pull request is opened. IDE lint rules catch common issues early.

“This is how we operate MCP at scale,” says Janardhanan. “Observe the conversation, detect drift early, respond with precision, and teach habits that make the right path the easy path. We run it like a product because that’s what it is.”

Measuring results and moving forward

This program has changed how we build. Reviews move faster because every server follows the same path. Drift is caught early because clients compare a tool’s “voice” on connection. Shadow servers decline as inventory fills in from endpoint, repo, IDE, and gateway signals. Reuse increases because teams can discover trusted servers instead of creating new ones. Incidents resolve faster with correlation IDs across the conversation and kill switches at both the client and the gateway.

It’s also changed how our admins work. One gateway means one perimeter to manage. Policies land once and apply everywhere. Owners see the same telemetry security sees, so fixes happen where the work happens.

Going forward, we’re focused on more consolidation and automation. We’re moving toward a single pane for MCP governance—approve, monitor, and pause from one place. Policy-as-code will keep allowlists, consent rules, and egress boundaries versioned and testable in CI.
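As a sketch of the policy-as-code idea, the allowlist can live in version control and a CI test can fail whenever a deployed server isn't on it. The server names and fields below are hypothetical:

```python
# Hypothetical policy-as-code sketch: the allowlist is versioned data and a
# CI check flags any deployed server that isn't approved. Names are assumed.
ALLOWLIST = {   # in practice, loaded from a versioned YAML/JSON file
    "mcp-files":  {"owner": "tools-team", "scope": "read"},
    "mcp-search": {"owner": "search-team", "scope": "read"},
}

def check_deployment(deployed_servers: list[str]) -> list[str]:
    """Return servers that are deployed but not on the allowlist."""
    return [s for s in deployed_servers if s not in ALLOWLIST]
```

Because the allowlist is ordinary versioned data, changes to it get the same review, history, and testability as any other code change.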

Our preflight checks will get smarter, with stronger injection tests, automatic egress validation, and environment‑aware templates. We’ll expand consent patterns so high‑impact actions remain explicit and auditable, even across multi‑tool chains. And we’ll keep shrinking re‑review time, so drift is measured in minutes, not days.

AI conversations are now part of how we build every day. MCP standardizes how agents talk to tools and data. Secure‑by‑default architecture, rigorous vetting, and a living inventory ensure the right voices stay in the room, only what’s needed is shared, and drift is caught early.

The result is simple: teams ship faster with fewer surprises, and governance stays visible without getting in the way. We’ll keep tightening the loop, so saying yes remains both easy and safe.

Key takeaways

If you’re implementing MCP security, consider these key actions to ensure secure, efficient adoption in your organization:

  • Build governance into the maker flow. Embed security, consent, and responsible AI checks directly where teams build—so protection shows up by default, not as an afterthought.
  • Maintain a single allowlist and catalog. Centralize approved MCP servers and connectors with clear ownership, scope, and data boundaries.
  • Enforce scoped, short-lived permissions by default. Automatically limit token scope and duration to minimize risk and exposure.
  • Monitor continuously and detect drift early. Observe activity, flag deviations, and pause risky actions until reviewed and approved by owners.
  • Automate incident response and controls. Leverage pre-approved playbooks, kill switches, and rate limits for fast, precise action.
  • Design for privacy and auditability from day one. Mask sensitive data, restrict log access by role, and ensure audit readiness.
  • Promote education and reuse. Provide templates, training, and feedback loops to encourage safe development and adoption of trusted servers.

The post Protecting AI conversations at Microsoft with Model Context Protocol security and governance appeared first on Inside Track Blog.

]]>
22324
Picking the right Copilot for the job: Tips from our experience at Microsoft http://approjects.co.za/?big=insidetrack/blog/picking-the-right-copilot-for-the-job-tips-from-our-experience-at-microsoft/ Thu, 12 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=22334 Since its launch in 2023, Microsoft 365 Copilot has evolved from a single AI assistant into a full squad of powerful AI sidekicks, including chat, search, agents and many more. And with the introduction of agents, Copilot can now also act on your behalf—agents extend the capabilities of Microsoft 365 Copilot beyond conversation, giving you […]

The post Picking the right Copilot for the job: Tips from our experience at Microsoft appeared first on Inside Track Blog.

]]>
Since its launch in 2023, Microsoft 365 Copilot has evolved from a single AI assistant into a full squad of powerful AI sidekicks, including chat, search, agents and many more.

And with the introduction of agents, Copilot can now also act on your behalf—agents extend the capabilities of Microsoft 365 Copilot beyond conversation, giving you the power to elevate how you work, create, and make decisions.

 A photo of Etchells.

“Copilot agents free you from the manual work, so you can concentrate on big-picture thinking.”

Eva Etchells, senior content program manager, Microsoft Digital

The challenge today isn’t whether to use an agent or Copilot module to help you accomplish more—it’s knowing which one to use, and when to use it. Making the smart choice can help you produce amazing work while streamlining workflows and reducing friction.

“Copilot agents free you from the manual work, so you can concentrate on big-picture thinking,” says Eva Etchells, a senior content program manager in Microsoft Digital, the company’s IT organization.

Copilot thinks; agents ‘do’

Agents, simply explained, are purpose-built tools designed to automate tasks, handle repeatable work, and save time by improving efficiency. You can even create your own agents to match the way you work.

A photo of Burnett.

“I use several agents to simplify repetitive daily tasks. They help me stay organized, quickly research what I need, and analyze information so I can focus my energy on the work that requires the most strategic thinking.”

Opeoluwa Burnett, senior product manager, Microsoft Digital

If Copilot and its modules help us think, create, and explore, then think of its agents as entities that execute and automate tasks.

Choosing the right agent or module is like selecting the right tool for a job: You want the one that fits the task at hand and helps you get your work done more quickly with less effort.

“Now I can quickly ask an agent to create a one-page vision document in Word because the agent does the heavy lifting,” says Opeoluwa Burnett, a senior product manager in Microsoft Digital. “I use several agents to simplify repetitive daily tasks. They help me stay organized, quickly research what I need, and analyze information so I can focus my energy on the work that requires the most strategic thinking.”

Read about how Opeoluwa Burnett uses Copilot

A day in the life of a Microsoft employee using Copilot

Facing agent adoption challenges

At Microsoft, we’re still navigating a few common challenges related to agent adoption:

  • Some employees have access to agents and the ability to create them but feel overwhelmed or unsure where to begin.
  • For those still learning Copilot, agents can feel like an additional hurdle.
  • Others who’ve embraced “regular” Copilot may not realize that agents exist or know how to find them.

Our use of Copilot and AI agents continues to evolve. As Customer Zero within Microsoft Digital, we want to share how we’re using agents today, as well as what we’ve learned along the way.

Here’s a rundown of how our employees are using Copilot tools and agents to accomplish tasks faster and more efficiently:

Where to begin: Copilot Chat

Chat is often the starting point—the launchpad where you provide a prompt and kick off your interaction with Copilot.

Screenshot of the Copilot Chat launchpad.
The Copilot Chat module is where you can begin your interactions with Copilot.

Here you can search for general answers, explore complex queries, get quick results, or discover a Copilot agent that can help you complete your task.

Photo of Malekar.

“Copilot is a productivity booster. I can ask it to help me brainstorm and structure a use case and the results are pleasantly surprising, especially as the Copilot ecosystem continues to evolve and we fast-track new capabilities.”

Swapna Malekar, principal product manager, Microsoft Digital

When Swapna Malekar, a principal product manager in Microsoft Digital, needed to create a presentation on a short turnaround, she turned to Copilot. She shared a screenshot of the slide she was planning to present, and the tool generated a presentation-ready script she could read aloud in her meeting later that day.

Now, she incorporates Copilot into her everyday workflows.

“Copilot is a productivity booster,” Malekar says. “I can ask it to help me brainstorm and structure a use case and the results are pleasantly surprising, especially as the Copilot ecosystem continues to evolve and we fast-track new capabilities.”

Seamless workflows with Copilot applications

Because Copilot is built into Microsoft 365 apps like Word, Excel, PowerPoint, OneNote, OneDrive, and Teams, you can navigate seamlessly between tools without losing context. Your Copilot Chat history follows you, no matter where you start.

That flexibility means you can work the way you naturally do. You might start a Copilot Chat in Word while drafting a document, then switch to Excel or Teams and continue the same conversation without needing to reset or start over.

There’s no single “right” way to use Copilot. Everyone approaches work differently, but Copilot meets you where you are, whether in the browser or in your go-to app, while helping you reach the same solution.

“Choosing the right Copilot for the job is like being in one of those ‘Choose Your Own Adventure’ books,” Etchells says. “You pick the path you want to go, and you set off on your journey.”

Speed and efficiency: Copilot search

Copilot search shares the same underlying technology as chat. The purpose of the search function in Copilot is to process requests and retrieve results. The difference between the two lies in how the results are delivered.

Chat is designed for more explorational interactions, while search prioritizes fast, targeted access to content and links.

“The value prop for Copilot search is simple: Get what you’re looking for faster.”

Vasanthi Vangipurapu, senior product manager, Microsoft Digital

Search administrators also have access to the admin portal, where they can customize features such as bookmarks that know what employees are usually looking for when they search common terms.

“The value prop for Copilot search is simple: Get what you’re looking for faster,” says Vasanthi Vangipurapu, senior product manager in Microsoft Digital. “When I need specific answers quickly, I use Copilot search. If I want to explore further, I love that it redirects me to Copilot Chat to continue my conversation there.”

Any employee

What Copilot search can do: Find a shared file when you have limited details.

Sample prompt: “Find the file shared with me by (name) within the last six months. I don’t remember where it was shared. Search across Outlook, Teams, OneDrive, and SharePoint.”

Data compliance manager

What Copilot search can do: Understand what data Copilot can access, how it’s processed, and how residency and retention of data are handled.

Sample prompt: “Explain what data Microsoft 365 Copilot can access within my organization, including how it respects existing permissions and role-based access controls. Describe how data residency is handled for Copilot processing and outline what logging, retention, and audit trail information is available to administrators.”

Technical writer

What Copilot search can do: Generate a cloud architecture diagram or flow chart to support documentation.

Sample prompt: “Create a vector-style cloud architecture diagram showing users, load balancers, web servers, application tier, and cloud database. Use minimalistic icons, blue/gray palette, simple arrows, and white background.”

Visuals at your fingertips

Copilot Create is a design generator that helps you produce visual assets such as images, posters, infographics, banners, branding, and video. It’s an especially useful tool for people who aren’t professional designers, but who need to create visuals quickly as part of their workflow.

The Create module also supports rapid iteration, making it easy to refine results without starting from scratch. You can adjust layout, tone, or visual direction through simple prompts. This lets you explore multiple approaches and keep creative momentum without getting bogged down in detail.

Screenshot of the Create module landing page in Copilot.
You can use the Copilot Create module to generate a variety of compelling visual assets.

You can give Create a prompt, even a rough one. It often produces unexpected visual directions you may not have considered on your own—a bit like having an enthusiastic creative partner who tosses out new ideas and helps you discover fresh variations.

While you can also use Copilot Chat to generate visual assets, Copilot Create offers a consolidated experience specifically built for visual design.

Here are prompts you can try in the Copilot Create module:

Marketing manager

What Copilot Create can do: Turn a PowerPoint deck into a branded marketing video for a product launch.

Sample prompt: “Turn this PowerPoint deck into a high-quality, 45- to 60-second marketing video designed for prospects and customers.

Tone: modern, energetic, and brand-aligned

Include: clear voiceover script, punchy on-screen text, smooth transitions

Highlight: key value props and visuals from each slide

Add: subtle animation and upbeat music

Output: 1080p MP4 video + options for a shorter cut and social formats”

HR manager

What Copilot Create can do: Create an employee-friendly infographic from a policy document.

Sample prompt: “Turn this HR policy document into an engaging infographic.

Audience: all employees

Style: simple, friendly, and easy to scan

Include: key rules, do/don’t lists, and any required steps

Use: icons, color coding, and clean layout

Output: a single-page PNG plus a version sized for intranet posting”

Analysis and insights: Copilot Researcher agent

The Copilot Researcher agent acts as your supercharged research partner, providing deep analysis and generating detailed reports. You can use Researcher to quantify the expected impact of a new feature, gather usage data, analyze audience insights, and project outcomes based on target user logistics.

Here are some prompts you can use to get started with Copilot Researcher:

Product manager

What Researcher agent can do: Quarterly product feature planning

Sample prompt: “Review emails, files, and meeting transcripts to surface insights about where employees experience friction in daily workflows.”

Business analyst

What Researcher agent can do: Documentation optimization and process improvement

Sample prompt: “Analyze the following documentation and generate detailed, actionable ideas to improve clarity, structure, usefulness, and alignment with business goals.”

Engineer

What Researcher agent can do: Improve upon code

Sample prompt: “I want to improve the following code for a software feature (insert detailed description, including the software name, programming language, targeted platforms, and what it does). Help me come up with ways to make the code better using best practices. Generate clean, optimized code and explain the rationale behind each decision.”

Streamlined operations: Employee Self-Service Agent

The Employee Self-Service Agent helps employees quickly find answers to their questions about human resources, IT support, and campus services.

This tool now serves as a centralized entry point for HR, IT, and facilities support at Microsoft. The agent removes the guesswork, delays, and frustration that our employees used to experience when searching across multiple systems, websites, and knowledge bases for answers to their employment-related queries.

“Our employees rely on AI tools like Copilot to help get their work done,” says Becky West, a principal group product manager for Microsoft Digital. “And the same is now true for resolving an issue related to facilities and other high-priority employee self-service topics.”

Here are some prompts that you can use to get help from your Employee Self-Service Agent:

Intelligent collections: Copilot Notebooks module

The Copilot Notebooks module is an interactive workspace that combines the flexibility of a notebook with the intelligence of an AI notepad. Copilot makes it easy to add your chats to a Copilot Notebook, where it can review all included content, summarize information, and answer questions about it—making it easier to navigate large collections of files, presentations, and notes. Notebooks can also be shared, making them useful for teams collaborating on a common goal.

For perspective, Copilot Notebooks is designed for project-based work: you can gather files, references, and notes, then have Copilot reason over them collectively. Copilot in OneNote, by contrast, enhances notetaking and content creation rather than project-specific reference work.

Some of our employees use Copilot Notebooks to prepare for their performance reviews. Instead of scrambling to gather six months of their work, feedback, and other documentation, they can easily assemble everything in one place using the Notebooks module.

“I can take all the campaigns I’ve worked on, the metrics, and any praise I’ve received, drop it all into a Word doc and add it to my Review notebook,” Etchells says. “Then I ask Copilot to tell me how I contributed to each campaign. It saves me a ton of time.”

Here’s an example of how you can use the Copilot Notebooks module to analyze the impact you’ve had as a seller over a certain window of time:

I’m a seller and I want to summarize my impact over the last quarter

What content Notebooks can hold

Pipeline health analyses, accounts prioritized based on intent signals, deal outcomes correlated with activities (calls, emails, meetings), QBR visuals

Sample prompt to create Notebook

“I’m a sales executive. Build me a Copilot notebook that:

Ingests CRM CSV/XLSX, validates schema, and summarizes columns.

Computes KPIs (pipeline value, #opps, win rate, avg cycle) and visuals: stage value bar, conversion funnel, win-rate heatmap (industry × product).

Flags stale/stuck opportunities; creates a transparent 0-100 risk score with explainable factors; outputs Top 20 risky + Top 20 high-potential deals.

Builds a simple forecast (optimistic/likely/conservative) from historical stage-to-win rates and charts forecast vs. target.

Surfaces segment/account insights; exports 2 CSVs (prioritized exec‑outreach + risk register).

Generates a 1-page executive summary, 5-7 QBR bullets, and a 3-sentence email for the field.”

SharePoint agents

SharePoint offers two types of Copilot agents: the built-in Knowledge agent and a custom agent.

Photo of Bodhanampati.

“You ask the question and the agent provides the answer, so you can focus on the work, not the search.”

Sunitha Bodhanampati, senior product manager, Microsoft Digital

The Knowledge agent acts like a SharePoint content steward, analyzing and organizing content across your sites. It tags and structures information in ways that allow Copilot to deliver more accurate answers to site-related queries.

You can also create custom agents to manage specialized workflows, connectors, or administrative tasks. You define the agent’s rules and scenarios, and it can operate across other apps and external systems, not just SharePoint.

“Instead of navigating countless folders, files, and links, agents remove the need to remember where information lives,” says Sunitha Bodhanampati, a senior product manager in Microsoft Digital. “You ask the question and the agent provides the answer, so you can focus on the work, not the search.”

Here are some SharePoint agent prompts you can try, depending on your role:

Content manager/site owner

What the agent can do: The Knowledge agent can update and improve content quality so Copilot can reason more accurately across it.

Sample prompt: “Review this library and auto-tag all documents with owner, category, and review date info. Show me any pages with missing details or broken links.”

HR helpdesk

What the agent can do: The SharePoint custom agent can create an agent that responds to department-specific questions using SharePoint data or other systems.

Sample prompt: “Create an agent that answers policy questions using our HR SharePoint library and routes complex requests to the HR team.”

Operations analyst

What the agent can do: The SharePoint custom agent can build a multistep workflow agent that merges with CRM and ticketing.

Sample prompt: “Build an agent that checks open support tickets, summarizes urgent ones, retrieves related SharePoint documentation, and notifies the team in Teams.”

Business owner

What the agent can do: The SharePoint custom agent can standardize approvals and record‑keeping across sites—validating required fields, routing items for review, posting updates, and compiling summaries—so routine requests move faster with clear ownership. (You can also tailor its behavior and starter prompts when you create it.)

Sample prompt: “Build an agent that validates new entries in the ‘Procurement Requests’ list, routes them to the right approver, writes back status and PO number when approved, and posts a daily summary with exceptions to our Teams channel.”

Site visitor

What the agent can do: The ready‑made SharePoint agent (included on every site) acts like a site concierge—answering questions, summarizing pages and libraries, and pointing people to the right documents and owners, all scoped to the site and the visitor’s permissions.

Sample prompt: “I’m new to this site—give me a two‑paragraph overview, list the three most important pages to read this week with their owners, and build a one‑page starter checklist with links.”

Create your own agent

If you don’t find a Copilot agent that meets your needs, you can create your own. Getting started is as easy as telling Copilot what your ultimate objective is, even if you don’t have all the specifics.

“Just ask Copilot, ‘How do I get started with an agent?’” Etchells says. “Copilot will walk you through it, step-by-step.”

One of our teams in Microsoft Digital built an internal agent we dubbed the Copilot Agent Ideation Partner. It’s useful for employees who are just exploring as well as those ready to build: it helps them brainstorm agent ideas by spotting repetitive tasks, uncovering work patterns, and turning everyday challenges into actionable agent concepts.

“Every employee should build at least one agent,” Burnett says. “When you turn your daily patterns into an agent, you reclaim your time and free yourself up to focus on the work that matters most.”

The future of agents

Each agent and module has its own unique strengths. Together, they are part of a broader, AI-powered shift toward helping our employees be more productive and efficient every day.

As the number and variety of agents grows, we’re continuing to raise awareness among employees and our customers about what agents are available and how they can start putting these game-changing capabilities to work.

“We’re still focused on helping people understand what agents can do and how they fit into our everyday work,” Etchells says. “As agents evolve, the goal is to make them easier to discover, try out, and apply within the workflows our employees are already used to.”

Key takeaways

Here are some things to keep in mind as you move along in your journey with Copilot agents and modules:

  • Copilot is more than one tool. You can choose from multiple Copilot modules and agents designed for different tasks, roles, and scenarios.
  • Selecting the right Copilot unlocks targeted results. Matching the right Copilot to the job reduces friction and helps create seamless workflows.
  • Copilot agents enhance productivity and creativity. Whether through Copilot Chat, search, research, notebooks, or other specialized agents, each Copilot agent unlocks efficiency while sparking innovative ideas.
  • Copilot agents are evolving into collaborators. These agents are reshaping how people learn, work, and innovate every day.

The post Picking the right Copilot for the job: Tips from our experience at Microsoft appeared first on Inside Track Blog.

]]>
22334
Reimagining campus support at Microsoft with the Employee Self-Service Agent http://approjects.co.za/?big=insidetrack/blog/reimagining-campus-support-at-microsoft-with-the-employee-self-service-agent/ Thu, 13 Nov 2025 18:25:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=20977 Julie is a typical Microsoft employee, one who commutes to her office, parks in a garage, orders meals from the cafeteria, finds her way to and around different buildings, hosts visitors, and occasionally must deal with a facilities-related service request. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are […]

The post Reimagining campus support at Microsoft with the Employee Self-Service Agent appeared first on Inside Track Blog.

]]>
Julie is a typical Microsoft employee, one who commutes to her office, parks in a garage, orders meals from the cafeteria, finds her way to and around different buildings, hosts visitors, and occasionally must deal with a facilities-related service request.

In the past, Julie might have interacted with different apps and websites to get help with each of those tasks. Today, thanks to the power of agentic AI and Microsoft Copilot Studio, Julie can turn to a single portal to handle all of it: the Employee Self-Service Agent.

This agentic tool, which will soon be released publicly as a free add-on for the Microsoft 365 Copilot license, has already made a big impact on the lives of our employees, saving them time, effort, and frustration. We call it the “one-stop shop” experience of employee self-service.

“Before we had the Employee Self-Service Agent, the employee-assistance experience was fragmented across mobile, websites, and physical kiosks,” says Becky West, a principal group product manager in Microsoft Digital, the company’s IT organization. “The new agent unifies all of these experiences and puts them in the same place.” Now our employees can ask questions in natural language, and the agent guides them through whatever campus task they need to complete—inviting a guest, finding dining options, creating a help ticket, and more.

West in a photo.

“Our employees rely on AI tools like Copilot to help get their work done. And the same is now true for resolving an issue related to facilities.”

Becky West, principal group product manager, Microsoft Digital

Of course, employees like Julie also need assistance with other common job-related tasks, like getting their human resources (HR) questions answered or fixing a technical issue with their device.

Those are also important categories included in the Employee Self-Service Agent, something the flexibility and extensibility of Copilot Studio makes possible.

“Our employees rely on AI tools like Copilot to help get their work done,” West says. “And the same is now true for resolving an issue related to facilities, HR, or IT support. We live in an AI-powered world, and this agent meets the moment for our people.”

In this story we share how we’re using the Employee Self-Service Agent in the real estate and facilities space, but it does much more than that. Our employees also use it to get help with IT problems and answers to their HR queries, and we expect to add other key areas soon, such as finance and legal. Available to all Microsoft employees worldwide, the full agent is already delivering a significant boost in productivity, cost savings, and user satisfaction across the company.

Everyday use cases for agentic assistance

Julie might not need IT support or help with an HR issue every day. But she’s always on the hunt for her favorite foods for lunch.

In our existing dining app, employees could look up that day’s menu for a specific building cafeteria, but they couldn’t just ask, “Hey, where can I get some good teriyaki on campus today?”

With the Employee Self-Service Agent, now they can.

“Searching on type of cuisine or dish is one of the top requests we were getting,” says Balaji Radhakrishnan, principal software engineering manager for the dining team. “It was an important feature missing from our existing apps, and we solved that with the employee-assistance agent.”

Employee Self-Service Agent screenshot

A screenshot shows an employee query looking for teriyaki and the agentic response listing multiple locations where the dish is being offered that day.
The AI-driven power of natural-language querying means that employees can simply ask the Employee Self-Service Agent where their favorite food is being served on campus, rather than spending valuable time perusing different café menus in the unending quest for the best teriyaki.  

Not only can the agent help Julie locate the perfect lunch, it also connects her to the tool where she can order and pay for it. This streamlines the process for her—she doesn’t have to remember which website or app to call up to procure her teriyaki treat. (In the future, we plan to extend the functionality so the agent remembers your previous food choices, and you can order right from the agent.)

Dining is just one of the facilities-related experiences we targeted when developing the Employee Self-Service Agent. Other tasks include:  

  • Lobby and visitor services – registering a campus guest
  • Parking – registering a car to park on campus
  • Maps – navigating around a building or a campus
  • Facilities tickets – getting help with office furniture, lighting, HVAC, or other building issues
  • Transportation – calling a shuttle for a ride between buildings or finding commuting help
  • Finding a space – locating a place to relax, work, or connect with colleagues

“We started out by looking at the services we already offered,” West says. “We thought about what tasks would be in highest demand, where that information or transaction lived now, and how best to surface it. The more we explored the power of the agent, the wider the variety of experiences we were able to incorporate.”

Saving time and reducing frustration

Resolving employee pain points and saving time are two of the key advantages inherent to this area of agentic employee assistance. Consider the common employee task of registering a business-related campus guest (such as an interview candidate or a prospective customer).

Bhavani in a photo.

“If we can handle 50%—600,000—of these business-related visitor registrations through the Employee Self-Service Agent, that adds up to 50,000 hours of employee time each year.”

Bhavani Paruchuri, senior product manager, Microsoft Digital

According to Bhavani Paruchuri, a senior product manager in Microsoft Digital, in 2024 Microsoft saw more than 2 million registered visitors at our buildings worldwide. Roughly 1.2 million of these were business-related guests.

Previously, employees had to email or talk to lobby hosts (front-desk staff) when they wanted to register a guest; the host would then enter visitor details into the Guest Management System. Now, the Employee Self-Service Agent provides a simple form within the chat, asking for details like guest name, email, purpose, building number, and date. Once the form is submitted, the system confirms it and sends a QR code directly to the guest via email.

“We calculated that this new process could save at least five minutes for each guest registration,” Paruchuri says. “If we can handle 50%—600,000—of these business-related visitor registrations through the Employee Self-Service Agent, that adds up to 50,000 hours of employee time each year. So, just in this one area alone, the agent can have a big impact on overall productivity.”

Those savings add up, and quickly.

Downing in a photo.

“Once you start using the agent for dining, you use it daily. As we added in cuisine and price filtering and other functionality that wasn’t available before, you could see it was a big differentiator from what the previous tools could do.”

Erik Downing, principal product manager, Microsoft Digital

One of the reasons we decided to include facilities-related help early on in the development of the Employee Self-Service Agent is that these common tasks would help increase usage of the new portal—building a habit with our workers that would have long-term benefits.

We have already seen employees used to finding a meal with the agent also using it to solve other challenges, including in the HR and Support spaces.

“Once you start using the agent for dining, you use it daily,” says Erik Downing, a principal product manager with Microsoft Digital. “As we added in cuisine and price filtering and other functionality that wasn’t available before, you could see it was a big differentiator from what the previous tools could do.”

West explains how this can have an outsized effect on promoting product adoption.

“If people get in the daily habit of using the agent for these routine tasks, they’ll be more comfortable going to it for other things,” West says. “Then you can really start to scale the agent up and see the larger impact across more areas.”

Filing a service request with the help of AI

Julie gets to work one morning and is dismayed to discover that her adjustable desk will no longer rise to a standing position. She needs to open a facilities ticket for help.

Choudary in a photo.

“The AI automatically picks out the problem class and the problem type; presents a form with the details; asks for confirmation; then kicks off the ticket right from there. It’s all in one place, AI-driven, and truly agentic in terms of task completion—and it will only get better.”

Sonaly Choudary, senior product manager, Microsoft Digital

In the past, this would have required Julie to send Facilities an email with a description of the problem, or she would have had to track down the right app or web form for the same purpose.

Now, she can simply snap a photo of the broken desk and upload it to the Employee Self-Service Agent.

The agent opens a form and uses information from the photo to create the help ticket right there. Image-based input, like natural-language chat, is something our previous apps couldn't do, and it reflects the power of AI.

“Whether you upload a photo or just describe your issue using natural language, we’ve really pushed this tool to be as agentic as possible,” says Sonaly Choudary, a senior product manager who works on facilities technology products for Microsoft Digital. “The AI automatically picks out the problem class and the problem type; presents a form with the details; asks for confirmation; then kicks off the ticket right from there. And then you can query the agent to get status updates on it. It’s all in one place, AI-driven, and truly agentic in terms of task completion—and it will only get better.”

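Conceptually, the agentic flow Choudary describes is classify, pre-fill, confirm, then create. The sketch below shows that shape; every name and field here is hypothetical, not the actual implementation:

```python
# Hypothetical sketch of the photo-to-ticket flow described above.
# All names and fields are illustrative, not the actual implementation.
from dataclasses import dataclass

@dataclass
class Ticket:
    problem_class: str
    problem_type: str
    description: str
    status: str = "Open"

def classify_issue(photo_or_text):
    """Stand-in for the AI step that picks the problem class and type from a photo or description."""
    # A real system would run vision/NLP models here; we fake one recognizable case.
    if "desk" in photo_or_text.lower():
        return {"problem_class": "Furniture", "problem_type": "Adjustable desk fault"}
    return {"problem_class": "General", "problem_type": "Unknown"}

def file_ticket(photo_or_text, user_confirms):
    details = classify_issue(photo_or_text)   # AI pre-fills the form fields
    if not user_confirms:                     # the user reviews before submission
        return None
    return Ticket(description=photo_or_text, **details)

ticket = file_ticket("Photo: adjustable desk stuck in sitting position", user_confirms=True)
```

The confirmation step matters: the AI proposes the classification, but the user approves it before a ticket exists.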

How Customer Zero makes our products better

Because Microsoft employees are the first ones to use our newest products and features, we have the opportunity to roll them out gradually and test them under actual enterprise-work conditions, which enables us to gather valuable feedback and telemetry. This data is then fed back into the product development process to make key improvements. We call this our Customer Zero philosophy.

Schaefer in a photo.

“We were pioneers as Customer Zero in showing the need for these services in an employee-assistance portal, and the product group saw that need.”

Michelle Schaefer, principal product manager in Microsoft Digital

In the case of the Employee Self-Service Agent, we began product development by tackling HR and IT support, which were key areas to capture cost savings.

But how could we get even wider usage of the product? We turned to our real estate and facilities functions.

“The facilities and real estate aspect of Microsoft Digital is unique, in that it focuses on the employee experience at the company, literally in the buildings,” says Michelle Schaefer, a principal product manager in Microsoft Digital. “All those tasks—getting lunch, parking, filing a facilities ticket, moving around the campus, inviting a guest—are universal for all our employees. We were pioneers as Customer Zero in showing the need for these services in an employee-assistance portal, and the product group saw that need. And we’re constantly gathering telemetry to learn how our workers can more easily discover the agent and have a better experience with it each time.”

Adding the facilities and real estate category to the Employee Self-Service Agent also helped our engineers learn more about building an agent that presents a “single pane of glass” to the user on the front end but incorporates so many different functions on the back end.

Po in a photo.

“Our strategy with this new natural-language agent is to augment our existing tools, which brings AI to the experience and gets the user to the right place.”

Thomas Po, senior product manager, Microsoft Digital

Each team has its own tools that compete for our employees’ attention.

“The challenge was to turn all those into a common experience for the user,” says Erik Orum Hansen, a principal engineering manager for Microsoft Digital. “That’s been a learning journey for us, as the organization pivoted to developing a single agent incorporating all these different functions.”

This single-portal approach makes it so much easier for users to explore their options and figure out the best way to accomplish the task, even as the underlying tools are still available.

We still have as many as 15 different tools that employees use today for campus-related tasks, but we're managing them more effectively—now our employees only need to use them when their use case is more challenging or detailed in nature.

“Our strategy with this new natural-language agent is to augment our existing tools, which brings AI to the experience and gets the user to the right place,” says Thomas Po, a senior product manager for Microsoft Digital. “The user may not have the specific facilities app they need on their phone, but everyone has Copilot, right? It’s about giving our employees access to information in more places and connecting them to the right tool or function.”

Employee Self-Service Agent screenshot

A screenshot shows the Employee Self-Service Agent providing a pre-filled form to help the user complete their shuttle booking.
The Employee Self-Service Agent not only answers user questions, it can also pull up a form and pre-fill fields to help users execute their task—such as booking a shuttle from one campus building to another.

The Employee Self-Service Agent can also see when an employee took prior action, recognize that they might want to take the same action again, and suggest that action—for example, suggesting that they may want to reserve a shuttle ride to the same location they’ve visited previously.

“This allows users to have a more contextual, conversational experience,” says Ram Kuppaswamy, a principal software engineering manager in Microsoft Digital. “For example, for transportation needs they can just type, ‘Help me book a campus shuttle,’ and the agent can suggest options based on their previous ride history. Then it can call up a form to help complete the booking. Users really love it.”

Built on the power of Copilot Studio

We built the Employee Self-Service Agent with Microsoft Copilot Studio, a powerful platform that allows you to create and extend AI agents. The agent is designed so that our customers can customize it to fit their own business needs and integrate it with their existing technologies.

Orum Hansen in a photo.

“We didn’t want a custom connector; we wanted to go with an out-of-the-box connector that worked with Dynamics. There were some product iterations to deal with while we made sure it met Microsoft’s data-compliance standards, but ultimately it made it easier to show customers how simple it is to implement the agent—it’s a very low-code/no-code solution.”

Erik Orum Hansen, principal engineering manager, Microsoft Digital

When we built the part of the Employee Self-Service Agent that handled HR and IT Support needs, we were able to create connectors for major third-party service providers in those areas, such as Workday, SAP, and ServiceNow. (These connectors are now “out-of-the-box capabilities” that are included in the product.)

In the facilities and real estate space, we work with numerous vendors to provide various campus services. Since we already used existing internal applications to connect employee requests with these vendors, we could easily create connectors for the agent using Copilot Studio. More importantly, we were also able to use the out-of-the-box Dataverse connector that worked with our Dynamics 365 data, which cut down on development time.

“The agent functions as a single entry point, which then connects with the Microsoft Dynamics data,” Schaefer says. “We have numerous different facilities vendors in different parts of the world, but we didn’t have to build multiple connectors to those vendors because of the common Dynamics back end.”
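The design choice Schaefer describes—one connector to a common back end instead of one per vendor—can be sketched as follows. The class, table, and field names here are illustrative, not the actual Dataverse schema:

```python
# Illustrative sketch: one shared data-layer connector instead of per-vendor integrations.
# Class, table, and field names are hypothetical, not the actual schema.
class DataverseConnector:
    """Stands in for the out-of-the-box Dataverse connector over Dynamics 365 data."""
    def __init__(self, rows):
        self.rows = rows  # pretend these rows come from the common Dynamics back end

    def query(self, table, **filters):
        return [r for r in self.rows
                if r["table"] == table
                and all(r.get(k) == v for k, v in filters.items())]

# Requests from vendors in different regions all land in the same tables,
# so the agent needs exactly one connector no matter how many vendors exist.
rows = [
    {"table": "facilities_requests", "region": "EMEA", "vendor": "VendorA", "status": "Open"},
    {"table": "facilities_requests", "region": "APAC", "vendor": "VendorB", "status": "Closed"},
]
connector = DataverseConnector(rows)
open_emea = connector.query("facilities_requests", region="EMEA", status="Open")
```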

Orum Hansen says this caused a small delay in the internal deployment of the product, but that it was worth it in the end.

“We didn’t want a custom connector; we wanted to go with an out-of-the-box connector that worked with Dynamics,” he says. “There were some product iterations to deal with while we made sure it met Microsoft’s data-compliance standards, but ultimately it made it easier to show customers how simple it is to implement the agent—it’s a very low-code/no-code solution.”

Gregersen in a photo.

“We’re also previewing more multi-agent capabilities that are coming from Copilot Studio, which our customers will be able to incorporate into their own solutions. The product is just going to get richer and richer over time, as it extends into other lines of business.”

Kirk Gregersen, corporate vice president, Microsoft Viva and Microsoft 365 Copilot Experiences

The future of workplace AI

In many ways, we’re still in the early stages of the revolution that AI agents are going to bring to the workplace.

But the Employee Self-Service Agent is a significant early marker on that path.

“The first step is to develop this agent that’s optimized for the HR, IT, and facilities verticals,” says Kirk Gregersen, corporate vice president of product for Microsoft Viva and Microsoft 365 Copilot Experiences. “We’re also previewing more multi-agent capabilities that are coming from Copilot Studio, which our customers will be able to incorporate into their own solutions. The product is just going to get richer and richer over time as it extends into other lines of business.”

As employees like Julie are already finding out, this new era of agentic AI is going to be a major improvement over what came before.

“Most companies already have some kind of employee-assistance portal solution,” Orum Hansen says. “With this new agent, there’s an opportunity to really reimagine the entire experience—to shed some of the old baggage and figure out how to do things differently. It’s going to lead to a more efficient workplace, along with more satisfied employees.”

Key takeaways

Here are a few factors to remember when implementing an AI-powered employee-assistance solution at your company:

  • Pick high-value targets. Consider employee needs and the most commonly used assistance functions (using data where available), then develop a solution that addresses those areas. This will drive adoption and daily use of the agent.
  • Customize the solution. Take advantage of the extensibility of Copilot Studio to develop an agent that fits your organization’s specific needs.
  • Augment existing tools. Your employee-assistance agent can be the front door through which users find the tool they need. Over time, you can retire legacy tools and portals as the agent is able to complete the same functions on its own.
  • Go beyond information retrieval. Employees want to be able to carry out tasks right from the agent, so incorporate forms and other technologies that allow them to accomplish their goal as quickly and easily as possible.
  • Think outside the box. The image-driven feature we developed for filing a facilities ticket is a great example of applying the revolutionary abilities of AI to solve problems in new and innovative ways.    

The post Reimagining campus support at Microsoft with the Employee Self-Service Agent appeared first on Inside Track Blog.

]]>
20977
Deploying our new ‘game changing’ Interpreter agent in our meetings at Microsoft http://approjects.co.za/?big=insidetrack/blog/deploying-our-new-game-changing-interpreter-agent-in-our-meetings-at-microsoft/ Thu, 17 Apr 2025 16:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=18870 The new Interpreter agent in Microsoft Teams meetings is transforming the lives of our employees who must speak a second language at work. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic with experts from our Microsoft Digital team. We […]

The post Deploying our new ‘game changing’ Interpreter agent in our meetings at Microsoft appeared first on Inside Track Blog.

]]>
The new Interpreter agent in Microsoft Teams meetings is transforming the lives of our employees who must speak a second language at work.

We in Microsoft Digital, our internal IT organization, have enabled Interpreter across Microsoft, providing our employees with live, real-time interpretation in their meetings.

Interpreter is especially helpful because it goes beyond regular interpretation.

With the speaker’s consent, it can simulate their own voice—helping create a more personal, natural, and inclusive experience. The result is meetings where every participant speaks and hears everything spoken in their preferred language.

A photo of Glattbach.

“This agent is going to completely change the way we—and Microsoft customers—have multilingual meetings.”

Petra Glattbach, senior business program manager, Microsoft Digital

Equally important, speakers no longer need to struggle through sharing their thoughts in another language—they are free to speak their mind in their own language.

“I can think and speak at the speed of my first language,” says Masato Esaka, a business program manager at Microsoft, and one of the early adopters of Interpreter. “I can speak smoothly and articulate my thoughts clearly without worrying about what I sound like in English.”

Interpreter is enabling our employees to communicate with each other more effectively.

“Hearing my voice speaking Japanese for the first time was surreal,” says Petra Glattbach, a senior business program manager in Microsoft Digital. “This agent is going to completely change the way we—and Microsoft customers—have multilingual meetings.”

Communicating in a globalized world

When conversations happen outside someone’s native language, it can be challenging to communicate clearly. For global teams, that can lead to feeling less included and not fully understood in meetings.

“Historically, human interpreters have bridged the language gap in multilingual meeting scenarios, whether in person or online,” says Harin Lee, a principal product manager in the Teams product group. “However, as technology has advanced, it’s made more sense to translate using tools. It’s too difficult—and expensive—to think about having human interpreters in place for every multilingual meeting scenario that plays out in any global organization. There are too many meetings, too many languages, and not enough resources.”

Recent advances in technology have certainly helped. Many collaboration-focused apps—including Microsoft Teams—offer dynamic translation of meeting transcripts, but this didn’t resolve all the pain points. Key struggles in multilingual meetings persist:

  • Traditional translation tools often feel impersonal and too disconnected for participants to feel fully included.
  • Relying on translation tools or interpreters can introduce technical difficulties, such as poor audio quality or delays in translation. 
  • Some participants may feel less confident speaking in a non-native language, leading to reduced participation and engagement. This can result in valuable insights and perspectives being overlooked.

Introducing Interpreter in Microsoft Teams

Interpreter enables participants in Teams meetings speaking different languages to converse and hear the meeting in their preferred language with the help of an AI-based interpreter.

“It’s like being on the floor at the United Nations, but as the words are spoken in a myriad of languages, every person can easily understand everyone else,” Glattbach says. “Languages from around the world are spoken, except with Interpreter, the AI listens and speaks instead of human interpreters. If I speak French and a Japanese speaker is in the same meeting, they can set Interpreter to hear what I say in Japanese, in near real time, in my voice!”

The agent uses advanced text-to-speech (TTS) and speech-to-text (STT) capabilities, developed with Microsoft Azure AI services, to provide AI-driven interpretation. The Microsoft Teams product group responsible for Interpreter has been developing the agent and testing it internally at Microsoft. It’s a perfect example of our Customer Zero approach.

“Listening to multiple people speak several different languages in the same meeting, and the AI voicing over each of them in my language, was incredible.”

Petra Glattbach, senior business program manager, Microsoft Digital

As Customer Zero, Microsoft Digital is the first adopter of almost all Microsoft products. Being Customer Zero means a deep partnership between our end users and our product engineering teams to envision the right experiences, co-develop innovative solutions, and both listen to and act on insights we gather from our employees. We work together to stay grounded in the way our employees use our products every day, so your employees can benefit from our insights.

Glattbach is appreciative of the Customer Zero experience with Interpreter.

“I’ve never said ‘wow’ so many times in a single meeting,” she says, recalling her first time in a Teams meeting where Interpreter was being used. “Listening to multiple people speak several different languages in the same meeting, and the AI voicing over each of them in my language, was incredible.”

Glattbach, senior product manager Chanda Jensen, and our full team in Microsoft Digital are responsible for deployment, testing, change management, and adoption at Microsoft. They help evangelize the products to their fellow employees so they can benefit from using them.

When Interpreter is enabled at the tenant level, participants can use the agent where and when they want in a meeting. Teams notifies participants when Interpreter is enabled, and each participant can opt to turn it on for themselves as well. The first time they use the agent, they’re prompted to select their preferred language from a dropdown menu. In subsequent meetings, the system defaults to the language used previously (if supported by AI interpretation), making the experience seamless and intuitive for participants.
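That defaulting behavior amounts to a simple preference lookup: reuse the saved language when it's supported, otherwise fall back to prompting. A minimal sketch, with hypothetical names and an illustrative language set:

```python
# Hypothetical sketch of the language-defaulting behavior described above.
SUPPORTED_LANGUAGES = {"en-US", "ja-JP", "fr-FR", "de-DE"}  # illustrative subset

def resolve_language(saved_preference):
    """Reuse the last meeting's language if AI interpretation supports it; otherwise prompt."""
    if saved_preference in SUPPORTED_LANGUAGES:
        return saved_preference        # seamless default for returning users
    return None                        # None -> show the language dropdown
```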

A photo of Lang.

“Privacy and impersonation are legitimate concerns. We’ve worked hard to ensure the proper controls are in place to let both the user and organization decide what they’re willing—and not willing—to share.”

Tori Lang, senior product manager, Microsoft Digital

Once it’s enabled, Interpreter allows participants to choose the language they want to hear the meeting in. The AI models, leveraging TTS and STT technologies hosted in Azure AI Services, deliver near-real-time speech-to-speech translation in the meeting. Optional voice simulation provides an audio replication of the original speech using the speaker’s voice. This means the spoken words are translated into the chosen language with a small delay, replicating the voice of the original speaker and maintaining a conversational, natural interaction for all meeting participants.
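At a high level, the pipeline is speech-to-text, then translation, then text-to-speech in the listener's language, with an optional voice profile applied at the final step. The stubbed sketch below shows only the shape of that flow; the function names are illustrative, not the Azure AI Services API:

```python
# Stubbed speech-to-speech interpretation pipeline (illustrative, not the real API).
def speech_to_text(audio, source_lang):
    return "hello everyone"                       # stand-in for the Azure STT step

def translate(text, source_lang, target_lang):
    fake_mt = {("en", "ja", "hello everyone"): "みなさん、こんにちは"}  # canned translation
    return fake_mt.get((source_lang, target_lang, text), text)

def text_to_speech(text, target_lang, voice_profile):
    # voice_profile simulates the speaker's own voice when they have consented
    return {"text": text, "lang": target_lang, "voice": voice_profile or "default"}

def interpret(audio, source_lang, target_lang, voice_profile=None):
    text = speech_to_text(audio, source_lang)                  # 1. transcribe
    translated = translate(text, source_lang, target_lang)     # 2. translate
    return text_to_speech(translated, target_lang, voice_profile)  # 3. re-voice

out = interpret(b"...", "en", "ja", voice_profile="speaker-sim")
```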

Participants can adjust the volume of the Interpreter in relation to the original audio of the other participants, which allows for a customized experience for each participant depending on their needs.

Turning Interpreter on

Steps for turning on Interpreter and adjusting your settings. First select “More” in your meeting nav, then select Language and speech, and then “Turn on Interpreter.”
Once there, select your interpretation language and device if you want the AI feature to simulate your voice or choose from several automated options.
Follow these steps to start using Interpreter, including turning it on (shown in the first image) and deciding which language Interpreter will speak on your behalf and choosing what voice to use, including the possibility of having AI simulate your voice (shown in the second image).

Providing governance and control

Giving everyone the ability to hear their own voice speak another language is our goal with the Interpreter, but we’re also aware of potential concerns over impersonation and using biometric data, like someone’s voice.

“Privacy and impersonation are legitimate concerns,” says Tori Lang, a senior product manager in Microsoft Digital. “We’ve worked hard to ensure the proper controls are in place to let both the user and organization decide what they’re willing—and not willing—to share.”

Lang’s team has worked with governance and security experts within Microsoft Digital to examine concerns about Interpreter and address them with controls and transparency in the Teams app. Some of these controls include:

  • User consent. Before simulating the voice of the speaker, consent must be obtained. If consent is not given, the default voice provided by the platform will be used.
  • Admin controls. Administrators have the ability to manage and configure Interpreter settings. They can enable or disable the agent for all participants or specific individuals during a meeting.
  • Notifications. Participants are notified when Interpreter is turned on or off. This ensures transparency and keeps everyone informed about the status of the interpretation.
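Taken together, those controls amount to a small decision tree: the admin setting gates the feature outright, and consent gates voice simulation. This hedged sketch of the gating logic is hypothetical, not the Teams implementation:

```python
# Hypothetical sketch of the consent/admin gating described above.
def select_voice(admin_enabled, speaker_consented):
    """Return which voice the interpreter may use, or None if the agent is off."""
    if not admin_enabled:
        return None               # admins can disable Interpreter outright
    if speaker_consented:
        return "simulated"        # the speaker's own voice, only with explicit consent
    return "default"              # the platform's default voice otherwise

events = []
def notify(participants, state):
    """Everyone is told when Interpreter toggles, keeping the state transparent."""
    events.extend(f"{p}: Interpreter {state}" for p in participants)

voice = select_voice(admin_enabled=True, speaker_consented=False)
notify(["Ana", "Kenji"], "on")
```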

“I would say our goal is to remove language barriers and enhance meeting inclusivity more broadly,” Lang says.

Our team in Microsoft Digital worked closely with representatives of our works councils and product engineering teams to ensure compliance and address concerns related to data privacy and user control.

“This partnership has created a mutual understanding of features and controls and has helped the product team to strike a balance between ease of use and compliance,” Jensen says.

Changing the game with Interpreter

Now that we’ve deployed it to the full company here at Microsoft, our next step is to let employees know it’s there and to show them how powerful it is.

“Interpreter really is a game-changer,” Glattbach says. “It fundamentally transforms how users participate and engage in Teams meetings where their preferred language isn’t spoken. It democratizes language for all users at Microsoft and creates an inclusive and supportive meeting environment.”

In addition to these general scenarios, the Teams product group has identified many scenarios within Microsoft that have been transformed, or can be transformed, with this new level of inclusivity.

A photo of Jensen.

“In the foreseeable future, I can imagine not being able to hold Teams meetings without Interpreter. It is going to quickly become a critical part of the way we communicate as a global organization.”

Chanda Jensen, senior product manager, Microsoft Digital

“Interpreter helps users communicate without barriers,” says Ritika Gupta, a principal group program manager for the Microsoft Teams product group. “It’s a cost-effective option for multilingual support that applies across so many scenarios. We’re just beginning to uncover the specific use cases here at Microsoft.”

Gupta lists several use cases Microsoft users have found for Interpreter throughout the pilot and test deployment phases:

  • Enabling expression and comfort. In primarily bilingual workforces in countries or regions like India, China, and Japan, many employees speak and understand English but are more comfortable when they can speak and hear in their preferred language, where expression and nuanced understanding are easier.
  • Connecting with customers, stakeholders, and partners in non-English speaking markets and regions.
  • Enabling agents handling support and sales meetings to support multiple geographies and languages.
  • Supporting executives communicating and connecting with their globally dispersed team members.

Jensen, who worked closely with the product group to lead our deployment of Interpreter across the company, believes we’re barely scratching the surface of how Interpreter can impact Microsoft and its customers.

“This agent is going to shape the way we communicate at Microsoft, connecting colleagues, partners, and customers worldwide so that everyone is able to speak in their preferred language. In the foreseeable future, I can imagine not being able to hold Teams meetings without Interpreter,” she says. “It is going to quickly become a critical part of the way we communicate as a global organization.”

Jensen and her team at Microsoft Digital are responsible for Customer Zero testing, feedback, and deployment for Microsoft employees. The team worked with product engineering on deploying Interpreter in a ringed model to Microsoft employees.

Investing in multilingual communication

The Customer Zero effort between the Teams product group and Microsoft Digital is constantly evolving and expanding. The team has deployed Interpreter to the full company and is currently working on extending Interpreter to include more languages and optimizing translation performance in the underlying machine learning models.

“AI has completely changed the playing field in multilingual communications,” Lee says. “This is a defining moment in communications, and we’re committed to ensuring that every participant in any Teams meeting around the world is able to express themselves and feel understood.”

Key takeaways

Here are a few ways that Interpreter can revolutionize your multilingual meetings:

  • Interpreter helps bridge language barriers, ensuring that participants can communicate effectively regardless of their native language.
  • It enables more inclusive and productive meetings by allowing everyone to follow and contribute in their preferred language.
  • The tool supports global collaboration, making it easier to work with international teams and clients without the need for professional interpreters.
  • By providing real-time interpretation, Interpreter enhances understanding and reduces the risk of miscommunication during crucial discussions.

The post Deploying our new ‘game changing’ Interpreter agent in our meetings at Microsoft appeared first on Inside Track Blog.

]]>
18870
Unlocking knowledge through intelligence: Lessons learned using SharePoint agents at Microsoft http://approjects.co.za/?big=insidetrack/blog/unlocking-knowledge-through-intelligence-lessons-learned-using-sharepoint-agents-at-microsoft/ Thu, 27 Mar 2025 16:05:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=18766 For countless organizations around the world, Microsoft SharePoint is the go-to solution for managing authoritative business content and collaborating on projects. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic with experts from our Microsoft Digital team. With the launch […]

The post Unlocking knowledge through intelligence: Lessons learned using SharePoint agents at Microsoft appeared first on Inside Track Blog.

]]>
For countless organizations around the world, Microsoft SharePoint is the go-to solution for managing authoritative business content and collaborating on projects.

With the launch of Microsoft 365 Copilot and the ability to extend its impact through agents, we saw an opportunity to roll the value of AI-powered assistants into the information-rich ecosystem of SharePoint. By infusing SharePoint with Copilot features, we’re making the search for authoritative content more accurate and more streamlined for users while giving site administrators, site editors, and content owners greater control and more opportunities to enable their colleagues.

As we’ve implemented this new kind of AI assistant internally at Microsoft, we in Microsoft Digital, the company’s IT organization, have gained first-hand knowledge of how to deploy, manage, and optimize the new capabilities—and learned key lessons that can help you use SharePoint agents to their full potential.

SharePoint in the age of AI extensibility

As a content management and collaboration platform, SharePoint is so deeply integrated into the fabric of business that it’s easy to take it for granted. Every day, people add almost two billion documents to Microsoft 365 Copilot apps (Outlook, Teams, Word, and so on). Searching through those vast quantities of content and information can be a challenge for users.

A photo of Bodhanampati.

“SharePoint agents are a way for users to ask very specific, scoped questions in order to receive authoritative answers from that specific content source. We want to add value to the user’s workflow to ultimately improve their productivity.”

Sunitha Bodhanampati, senior product manager, Microsoft Digital

One of extensibility’s core principles is using agents to bring AI capabilities into any canvas or endpoint. With the emergence of this new framework, connecting retrieval agents to the SharePoint experience was an instinctive move.

SharePoint is a natural place for agents to live: SharePoint makes enterprise content accessible to employees, and agents simplify and enhance the workflows needed to find and use that content.

Transforming enterprise content accessibility

At their core, SharePoint agents are about surfacing insights, scaling expertise, and powering more-informed decisions.

Every SharePoint site includes an agent scoped to the site’s content. It allows users to search the site using natural language queries like “Summarize last week’s files on benefits” or “Create an executive summary of last quarter’s sales reports.” That means people can find answers without combing through the site or wrestling with cumbersome search terms.

Here’s a rundown of the goals and benefits of SharePoint agents for different users:

A graphic outlining SharePoint agents’ value for site administrators, site owners and content editors, and site visitors.
SharePoint agents provide immense value for all SharePoint users, including site administrators, content owners, site editors, and site visitors.

“SharePoint agents are a way for users to ask very specific, scoped questions in order to receive authoritative answers from that specific content source,” says Sunitha Bodhanampati, senior product manager working on SharePoint agents with Microsoft Digital. “We want to add value to the user’s workflow to ultimately improve their productivity.”

Ready-made agents are a helpful starting point, but SharePoint agent builder introduces even more targeted capabilities. It gives site administrators, editors, and content owners the opportunity to create, customize, and control agents to provide greater assistance to their users.

A photo of Flanigan.

“For human beings, the more content you give them, the less they engage, so agents are a way to narrow that field of inquiry to make your site more helpful.”

Siobhan Flanigan, senior marketing communications manager, Microsoft Customer and Partner Solutions

In just a few clicks, anyone with SharePoint site editing permissions can create agents based on content that’s relevant to specific projects or tasks. They can customize their agent’s branding and purpose, specify the sites, pages, and files that it should access, and define customized prompts tailored to its objectives and scope. This flexibility ensures that the right people get the best possible access to content while ensuring security and adherence to governance guardrails.
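An agent definition produced this way boils down to a few declarative choices: a name and purpose, a content scope, and starter prompts. The structure below captures that idea; the field names and URLs are hypothetical, not agent builder's actual schema:

```python
# Illustrative agent definition (field names and URLs are hypothetical, not the actual schema).
agent = {
    "name": "Benefits Helper",
    "purpose": "Answer questions about this team's benefits documentation.",
    "sources": [  # scope: the agent may only draw on these locations
        "https://contoso.sharepoint.com/sites/HR/Benefits",
        "https://contoso.sharepoint.com/sites/HR/Shared Documents/FAQ.docx",
    ],
    "starter_prompts": [
        "Summarize last week's files on benefits",
        "What changed in this year's enrollment?",
    ],
}

def in_scope(url, definition):
    """Guardrail check: the agent reads only content under its declared sources."""
    return any(url.startswith(src) for src in definition["sources"])
```

Scoping is the key governance lever: the agent answers from its declared sources, nothing broader.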

Most importantly, it’s easy to create SharePoint agents. This technology isn’t just accessible to software developers. SharePoint agent builder’s inherent simplicity means that people in communications, HR, marketing, or any other role can create digital assistants in just a few minutes and a few clicks.

“We’re making knowledge accessible at a level it’s never been before,” says Siobhan Flanigan, senior marketing communications manager for Worldwide Learning in Microsoft Customer and Partner Solutions. “For human beings, the more content you give them, the less they engage, so agents are a way to narrow that field of inquiry to make your site more helpful.”

A photo of Malekar.

“There’s a lot of intelligent creation and summarization with Copilot experiences, so naturally there are fears around organizational risk from overexposure, hallucinations, or misdirections that lead to user frustration.”

Swapna Malekar, principal product manager, Microsoft 365 Copilot

Beyond SharePoint sites, employees can easily share agents via email or within Microsoft Teams chats, granting colleagues access to the same accurate and relevant information through natural language queries. Not only are coworkers able to use each other’s agents, but @mentioning the agent in a group chat setting gives the team a digital subject matter expert, ready to assist and facilitate collaboration. 

Building these capabilities and implementing them securely required extensive collaboration between the SharePoint product group and Microsoft Digital, the company’s IT organization. As the first business to implement this technology at scale, we had to be confident that it met our standards for trustworthy administration, governance, security, and responsible AI.

“With any AI-specific experience, there needs to be guardrails and governance to manage its behaviors,” says Swapna Malekar, principal product manager for Information Discovery and Experiences in Microsoft 365 Copilot. “There’s a lot of intelligent creation and summarization with Copilot experiences, so naturally there are fears around organizational risk from overexposure, hallucinations, or misdirections that lead to user frustration.”

In the simplest terms, SharePoint agents are scoped versions of Copilot Chat. As a facet of agents in Microsoft 365 apps, SharePoint agents benefit from all of the same governance controls that protect our tenants in any other Copilot-enabled context.

That alignment with pre-existing tooling and policy means that SharePoint agents respect permission-trimming when they provide responses. Because the content itself honors permissions according to Microsoft 365 Copilot governance policies, users who don’t have access to that content won’t receive it as part of the agent’s outputs.

These capabilities arose from our iterative development process and experience as an enterprise, but it’s just the beginning. In our early experiments with SharePoint agents, we’ve also developed some helpful scenarios and best practices our customers can use.

Creating agent-friendly content ecosystems in SharePoint

Early adopters here at Microsoft have already created some highly useful SharePoint agents. In the Microsoft Customer and Partner Solutions (MCAPS) business group, the Worldwide Learning team has used the following agents to support employees in specific contexts:

Ask MCAPS Academy

This agent makes it easy for learners to query the Microsoft learning catalog to find specific answers contained in our course content. For example, before a salesperson demonstrates Microsoft Fabric, they could ask for best practices without having to take an hour-long course.

Ask MCAPS Tech Connect

MCAPS Tech Connect is a strategic training event for technical field roles, designed to help them uplevel their expertise and build confidence through collaborative learning and hands-on skilling. The Ask MCAPS Tech Connect agent gives employees easy access to content from more than 70 sessions. Users ask questions about the material, and the agent retrieves Microsoft PowerPoint decks and summarizes sessions so they can determine if they want to watch full videos.

During the process of creating these agents and others, our internal site editors and administrators have developed best practices to make sure employees get the most value out of their new digital assistants. The following techniques can help you create your own agents:

  • Understand agent instructions. It’s helpful to think about creating agents with two sets of parameters: sources and behaviors. Sources are how you define the sites, folders, and content your agent will encompass. A more expansive scope will be more likely to return an answer, but that answer might be too broad. A more limited scope will provide better accuracy, but it might not have access to answers from a wider content base and therefore not return results at all. Meanwhile, behaviors are the explicit instructions and guidance you provide your agent, for example fine-tuning the structure of the summaries it delivers or specifying the technical level of responses the agent should provide.
  • Optimize your libraries for AI. Just like it’s important to structure web content for search engine optimization (SEO), it’s helpful to structure your SharePoint sites for AI optimization—what some super-users are calling “AIO.” We recommend using all available metadata to ensure content is highly available to SharePoint agents; for example, headers, meta-tags, and alt-text. File names are particularly impactful. We recommend naming a file according to the way a user is most likely to search for it, like “Q3 AI impact executive summary.” It’s also helpful to name files associated with each other in similar ways. For example, the PowerPoint presentation and recording transcript for the same conference session should have similar titles.
  • Recognize human behaviors. If site administrators and editors want to enable their users, they need to think about how to accommodate the ways they work. Plenty of employees will know to access SharePoint agents through the built-in chat, but why not provide even easier onramps? Our insiders have learned that it’s extra helpful to share agents through Microsoft Teams chats, in communications, and anywhere else people might need content support. It’s also helpful to use the UX design capabilities in SharePoint to create explicit call-to-action buttons that direct users to particular agents.
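The split between sources and behaviors described above can be pictured as a simple configuration. This is an illustrative sketch only: SharePoint agent builder is a no-code experience, and every name, URL, and field below is hypothetical rather than the product's actual schema.

```python
# Hypothetical representation of a SharePoint agent's two parameter sets:
# "sources" (what the agent may read) and "behaviors" (how it should answer).
agent_config = {
    "name": "Ask Project Atlas",
    "sources": [  # sites, folders, and files in the agent's scope
        "https://contoso.sharepoint.com/sites/ProjectAtlas",
        "https://contoso.sharepoint.com/sites/ProjectAtlas/Shared Documents/Specs",
    ],
    "behaviors": {  # explicit guidance for the agent's responses
        "summary_style": "bulleted, under 150 words",
        "technical_level": "non-specialist",
        "out_of_scope_reply": "I can only answer questions about Project Atlas.",
    },
}

def in_scope(url: str, config: dict) -> bool:
    """Return True if a content URL falls under one of the agent's sources."""
    return any(url.startswith(source) for source in config["sources"])

print(in_scope("https://contoso.sharepoint.com/sites/ProjectAtlas/Pages/FAQ.aspx", agent_config))
```

The design trade-off the bullets describe shows up directly in the `sources` list: a broader list raises the chance of getting an answer, while a narrower list tends to make the answers more accurate.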

“SharePoint agents unlock and scale knowledge,” Flanigan says. “If there’s an answer locked somewhere in a content library, agents essentially turn that library into a responsive assistant, and people can ask it questions to get the information that empowers their work.”

The agentic future of enterprise knowledge

As our teams continue to experiment with SharePoint agents, they continue to find value in more accessible and authoritative knowledge. Site editors and administrators across Microsoft are eagerly seeking out advice and opportunities for more and more agents to support their organizations.

A photo of Teper.

“SharePoint revolutionized enterprise content management and collaboration once before. Now, we have an incredible opportunity to use the power of AI to help people get the information and insights they need.”

Jeff Teper, president, Microsoft 365 Collaborative Apps and Platforms

Our product teams are also extending SharePoint agents’ capabilities to amplify their impact even further. In addition to linking to agents in Microsoft Teams chats, they’ll soon be available in channels to provide AI assistants as digital liaisons for specific projects or teams.

Other, more complex features are on the way as well. These improvements will lead to even greater value, all stemming from the combination of enterprise content and AI assistance.

“SharePoint revolutionized enterprise content management and collaboration once before,” says Jeff Teper, president of Microsoft 365 Collaborative Apps and Platforms. “Now, we have an incredible opportunity to use the power of AI to help people get the information and insights they need, driving more informed decision-making, better collaboration, and more streamlined business processes.”

Key takeaways

Here are some things to think about as you consider getting started with SharePoint agents at your company:

  • Experiment with different scopes and behaviors by iterating your SharePoint agents over time.
  • Pay special attention to the metadata in your SharePoint sites and files to ensure they’re optimized for AI discoverability. This resource shares best practices for managing metadata.
  • Tailor your SharePoint agents and how you disseminate them to human needs and behaviors to encourage uptake.

Try it out

Want to start streamlining access to content for your employees? Get started with SharePoint agents here.

The post Unlocking knowledge through intelligence: Lessons learned using SharePoint agents at Microsoft appeared first on Inside Track Blog.

]]>
18766
Boosting efficiency with SharePoint agents: How our Microsoft legal team is helping clients find answers faster http://approjects.co.za/?big=insidetrack/blog/boosting-efficiency-with-sharepoint-agents-how-our-microsoft-legal-team-is-helping-clients-find-answers-faster/ Thu, 27 Feb 2025 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=18540 We all know the frustration of searching for answers we can’t find, and legal professionals often spend too much time answering the same questions repeatedly. Engage with our experts! Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic with experts from our Microsoft Digital […]

The post Boosting efficiency with SharePoint agents: How our Microsoft legal team is helping clients find answers faster appeared first on Inside Track Blog.

]]>
We all know the frustration of searching for answers we can’t find, and legal professionals often spend too much time answering the same questions repeatedly.

To address these challenges, knowledge must be captured, presented, and made accessible so that individuals can quickly find answers on their own. Our legal team supporting marketing at Microsoft developed a SharePoint agent to help achieve just that.

A photo of Nowbar smiling.
Hossein Nowbar spearheads the Microsoft AI integration and works on enhancing our legal team’s efficiency.

Over the years, our Microsoft legal team, Corporate, External, and Legal Affairs, has developed rich, comprehensive, and curated content accessible through SharePoint. This includes guidelines, policies, summaries of laws, self-service tools, and more; all presented in a way that’s understandable for a non-legal audience. The marketing section of this SharePoint site alone drives approximately 8,000 page views per month, resulting in significant cost savings.

When Microsoft released SharePoint agents, it created an opportunity to do even more. Now, the marketing legal team’s newly developed SharePoint agent sits on top of its robust SharePoint site, adding the power of AI to answer legal questions and further unlocking the value of the existing resources in an elegant and streamlined way.

SharePoint agents are natural language AI assistants tailored to specific tasks and subject matter, providing trusted and precise answers and insights to support informed decision-making. Each SharePoint site includes an agent based on the site’s content. Or, with a single click users can create and share a custom agent that accesses only the information they select. 

“At Microsoft, AI is transforming how our legal teams operate, creating new opportunities to enhance workflow efficiency,” says Hossein Nowbar, chief legal officer and corporate vice president for Microsoft. “We’ve used SharePoint agents to improve the discoverability and delivery of legal resources, scale our legal advice, and gain critical insights into content usage. This saves considerable time for teams that need advice and those that provide it, all the while driving greater legal compliance and consistency.”

Watch this demo of the SharePoint agent we built to supply the legal team’s internal clients with answers faster and more efficiently.  
A photo of Tan smiling.
CJ Tan and her team build easily customizable agents that enable the legal team and others at Microsoft to do routine work much faster and more efficiently.

To determine whether using the SharePoint agent shown in the demo was better than using search and navigation alone, the legal team ran a test consisting of six legal questions for which five participants were asked to find answers. For each question, the participants were timed using search and navigation alone, and then using the new SharePoint agent.

In timing each participant, we stopped the clock either when they were satisfied that they had found the correct answer, or at five minutes if they did not find the correct answer. In the first test, using search and navigation, participants only found the answer 83.3% of the time, leaving 16.7% of the questions unanswered. Using the SharePoint agent, participants found the correct answer 100% of the time.

Not only were participants more successful at finding correct answers, but they also found them much more quickly using the SharePoint agent. Participants found and confirmed the answer in under one minute 46.7% of the time and in under two minutes 100% of the time. On average, participants found the correct answers 2.97 times faster using the SharePoint agent than with site search and navigation alone.
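The average speedup figure comes from comparing paired timings for the same questions. The sketch below shows the arithmetic with hypothetical numbers (the study's raw timings aren't published here), not the team's actual data.

```python
from statistics import mean

# Hypothetical paired timings (seconds) for six questions, answered first
# with site search/navigation and then with the SharePoint agent.
search_times = [240, 300, 180, 270, 210, 300]
agent_times = [70, 95, 55, 110, 80, 100]

# Per-question speedup ratio, then the average across all questions.
speedups = [s / a for s, a in zip(search_times, agent_times)]
avg_speedup = mean(speedups)
print(f"Average speedup: {avg_speedup:.2f}x")  # prints: Average speedup: 2.99x
```

With the real timing data, the same calculation yields the 2.97x figure reported above.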

We know from experience and feedback that when people can find answers to their legal questions quickly and easily using self-service resources, the legal department can focus on more complex issues. A SharePoint agent is an essential tool for any organization seeking to harness the power of AI to make answers readily available, reduce the need for live support, and bring their existing content to life.

“The Microsoft Legal team was an ideal early adopter of SharePoint agents due to their well-curated content,” says CJ Tan, principal group product manager for SharePoint agents. “They recognized the value of an agent in scaling support and handling easily addressable questions, allowing the team to focus on more complex, unique business scenarios. Instead of learning how to build an agent, they could concentrate on helping marketers surface and use the right content for their business needs. As subject matter experts, they were also well-positioned to validate and test their agent before publishing it on their SharePoint site.”

Watch to see our legal team walk you through how you can create your own SharePoint agent.
A photo of Spataro smiling.
Jared Spataro empowers employees to swiftly access a vast knowledge base by integrating agents into SharePoint sites.

As we build out our array of Microsoft 365 agents, we continue to look to our internal experiences to guide the product’s evolution for our customers. We are exploring new ways for SharePoint agents to be shared and extensible across a variety of content sources. Lastly, we know that governance controls and analytics are critical as organizations introduce new features into their workflows, and we’re excited about the roadmap for additional insights available now and coming soon from Copilot Analytics, SharePoint Advanced Management, and Microsoft Purview.

“Organizations rely on SharePoint, creating more than two million sites and uploading more than two billion files daily,” says Jared Spataro, chief marketing officer of AI at Work @ Microsoft. “By giving every SharePoint site an agent, employees can quickly tap into this massive knowledge base with a single click.”

As with any new product and technology innovation, we’re focused on education and customer learnings. At the Microsoft 365 Community Conference, we will host a variety of sessions on SharePoint agents, going deeper into business use cases and best practices for creation and usage.

Connect with author Brent Sanders on LinkedIn.

Key takeaways

Here are some of our top tips for getting started with SharePoint agents at your company:

  • Prepare your content: Ensure your SharePoint content is highly curated, accurate, complete, and unique. This helps agents provide more accurate and relevant responses.​ Organize content into smaller, manageable sets to improve response accuracy (e.g., using smaller document libraries with fewer files and minimal graphics).
  • Maintain your content: Updates made to content sources are reflected in the SharePoint agent responses, so make sure that content sources are maintained. Also, be sure to regularly check that file permissions are accurate, based on the agent audience.
  • Use ready-made agents: Each SharePoint site comes with a ready-made agent scoped to the content of the site. SharePoint admins can approve this agent to help jump-start usage. Use our communication kit to help announce SharePoint agent availability and increase awareness.
  • Identify where custom SharePoint agents can add value: SharePoint agents can be grounded in specific sites, folders, or files. Collaborate with business stakeholders to identify business objectives and priorities to create specialized expert and informational agents.
  • Target no more than 20 content sources: If you are selecting a site or folder, you can have any number of files underneath. However, when selecting items individually, we recommend capping it at 20 sites, folders, or files for best results.
  • Encourage users to provide feedback: Your employees can use “thumbs up or thumbs down” to give feedback on the SharePoint agent’s response. This feedback can be used to continuously improve content and enhance response accuracy over time.
  • Measure the impact: We have a variety of analytics resources to help measure adoption and usage of SharePoint agents, including the SharePoint document library, SharePoint Advanced Management, Microsoft Purview, and additional reports coming to Copilot Analytics.

Try it out

For organizations with at least 50 Microsoft 365 Copilot licenses, any employee in the organization will be able to create, share, and interact with SharePoint agents. Learn more about SharePoint agents.

The post Boosting efficiency with SharePoint agents: How our Microsoft legal team is helping clients find answers faster appeared first on Inside Track Blog.

]]>
18540
Keeping our network infrastructure healthy at Microsoft with an employee-built AI agent http://approjects.co.za/?big=insidetrack/blog/keeping-our-network-infrastructure-healthy-at-microsoft-with-an-employee-built-ai-agent/ Thu, 30 Jan 2025 17:00:00 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=18083 Like many global companies, our network engineering environment here at Microsoft is gigantic. It spans 88 countries, more than 700 buildings, 64,000 devices, 7,500 Microsoft Azure Virtual Networks, and nearly 150 lab sites. It’s a system that serves more than 220,000 employees and generates its fair share of service tickets, more than 170,000 per year. […]

The post Keeping our network infrastructure healthy at Microsoft with an employee-built AI agent appeared first on Inside Track Blog.

]]>
Like many global companies, our network engineering environment here at Microsoft is gigantic.

It spans 88 countries, more than 700 buildings, 64,000 devices, 7,500 Microsoft Azure Virtual Networks, and nearly 150 lab sites. It’s a system that serves more than 220,000 employees and generates its fair share of service tickets, more than 170,000 per year.

How do you keep something of that size healthy?

Joshua Green and Soundarya Tekkalakota wondered if their team in Microsoft Digital, the company’s IT organization, could build an AI agent with Microsoft 365 Copilot to help accomplish this goal. Green, an Infrastructure and Engineering Services (IES) principal software engineering manager, and Tekkalakota, an IES product manager, quickly realized that the answer was a resounding yes, if they sprinkled in a helping of artificial intelligence and machine learning.

“We essentially put an AI lens on the network engineering challenges that already existed, and that our engineering teams have been dealing with for years,” says Tekkalakota, who served as lead product manager on the effort. “We decided to use AI to enable faster gathering of information and data insights, and to identify network problems more quickly and efficiently—this would give our network engineers more time to take the human actions needed to resolve issues.”

That kicked off their AI journey, in which they and their team built a custom engine agent before eventually using the extensibility capabilities of Microsoft 365 Copilot to create a declarative AI agent. The result is Network Copilot (also known as Network Infrastructure Copilot, or “NiC”), a powerful tool that provides support for various networking and infrastructure-management tasks and helps us work toward our goal of operating the industry’s most secure and reliable enterprise network.  

Importantly, Network Copilot is another proof point in our ongoing journey to show how we’re benefitting from Microsoft 365 Copilot internally here at Microsoft.

[Learn how we’re thinking about AI agents internally at Microsoft and how to get started with them at your company.]

The spark of inspiration

Network Copilot originated in a hackathon project in early 2023, inspired by the excitement at that time around generative AI and ChatGPT. Tekkalakota pulled together a small group of AI enthusiasts and launched the effort to develop a tool that would be able to simplify network management tasks.

“These were network engineers who were at that intersection of new tech enthusiasts and experts in their particular job,” Tekkalakota says. “We leaned on them heavily in the first few iterations of the project, collecting their feedback manually on what the right queries were. And as time went on, we kept adding more and more of these enthusiastic users to help us build the community, and to test the tool and gather feedback.”

On the engineering side, the project started out with a custom-agent approach, reflecting the available technology at that point in time.

“We went with a conversational agent built on Semantic Kernel and Azure OpenAI, because that was the only option at the time,” Green says. “Over time, we switched to a declarative-agent model based on the Microsoft 365 Copilot capabilities that were being released. In a sense, Network Copilot is the story of how fast AI technology is progressing, and how it’s becoming faster and easier to develop these kinds of tools.”

Improving network services with Network Copilot

Generative AI tools excel at one of the biggest challenges that network engineers face in their day-to-day work: how to quickly track down the specific information needed to resolve a network issue.

A photo of Tekkalakota.

“It’s a great solution because it keeps them in the context of their current work. They don’t have to step out of the network lifecycle management task that they’re currently in to find answers.”

Soundarya Tekkalakota, product manager, Infrastructure and Engineering Services, Microsoft Digital

“There’s something like five to eight different steps in the network management workflow, and many of them have a manual component,” Tekkalakota says. “Network engineers drill through siloed documents like wikis and troubleshooting guides, data sources such as the infrastructure data lake and incident management (IcM), and more to find the data insights and documentation they need. We wanted to make this search faster and easier for these engineers.”

The answer was Network Copilot, an AI chat interface in which engineers can use natural-language queries to gain insights and determine recommended actions without leaving the flow of their work process.

“It’s a great solution because it keeps them in the context of their current work,” Tekkalakota says. “They don’t have to step out of the network lifecycle management task that they’re currently in to find answers. It gives them the next step in a concise, summarized manner—something that they would have to spend multiple hours tracking down outside of their context.”

The use of natural language to access network telemetry in real time is one example that Green cites when talking about how Network Copilot is transforming the way that engineers do their job.

“I can ask NiC, ‘What’s the network health of Building 32?’ and it will run a query against the network telemetry data,” he says. “Then it summarizes the results in a nice, clean report for the user, including details on risks and recommendations for that building’s network. Then the engineer can take the appropriate action.”

Transforming network engineering with a Copilot agent

Network Copilot provides the ability to summarize network health, analyze data, allow for plug-ins, summarize documentation and wikis, and generate incident ops reports.
Network Copilot was created with the flexibility to access different data sources and handle a variety of network engineering workflow tasks.

Network Copilot development journey

The initial development of Network Copilot as a custom agent meant it relied on plug-ins to give it more flexibility.

“We first built NiC in a very modular way, and all its capabilities were done with plug-ins and APIs,” Green says. “For example, we provided a library of more than 1,000 queries, which were written by the teams that know the data best (like the wireless team, which wrote queries to check the health of wireless access points). So, when Copilot is able to access that data, it can stand toe-to-toe with the network engineers because it’s able to draw on that same knowledge base.”
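The modular, plug-in style query library Green describes can be sketched as a simple registry: domain teams register named queries, and the agent dispatches parsed user requests to them. This is an illustrative stdlib sketch only; the names, queries, and canned responses here are hypothetical and are not NiC's actual code.

```python
from typing import Callable, Dict

# Shared library of named queries contributed by domain teams.
QUERY_LIBRARY: Dict[str, Callable[..., str]] = {}

def register_query(name: str):
    """Decorator that adds a query function to the shared library."""
    def wrapper(fn: Callable[..., str]) -> Callable[..., str]:
        QUERY_LIBRARY[name] = fn
        return fn
    return wrapper

@register_query("wireless_ap_health")
def wireless_ap_health(building: str) -> str:
    # In the real system this would run a telemetry query (e.g., Kusto);
    # here we return a canned summary for illustration.
    return f"All access points in {building} are reporting healthy."

def handle_prompt(query_name: str, **kwargs) -> str:
    """Dispatch a parsed user request to the matching plug-in query."""
    if query_name not in QUERY_LIBRARY:
        return f"No query named '{query_name}' is registered."
    return QUERY_LIBRARY[query_name](**kwargs)

print(handle_prompt("wireless_ap_health", building="Building 32"))
```

The advantage of this shape, as the article notes, is that each team contributes queries for the data it knows best, and the agent's capabilities grow by registration rather than by rewriting the core.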

Then, when declarative agents were released in 2024, the development strategy shifted to take advantage of these faster, less code-heavy solutions.

“One of the things we’re always trying to do at Microsoft is provide low-code and no-code options,” Green says. “That’s what Microsoft 365 Copilot is focused on. Or you can go with full-code development, do it all yourself and have ultimate control and customization. Our journey with NiC was kind of a hybrid approach. We’re still on the journey from full code to low code; we’re not there yet.”

Overcoming the challenges of AI tool adoption

As Green, Tekkalakota, and the team began rolling out Network Copilot to larger and larger groups of network engineers, they began running into some of the challenges inherent in widespread AI tool adoption.

“The first thing was just the cultural change of our engineers building the daily habit of using the tool, because it’s not always top of mind for them,” Tekkalakota says. “It’s the stickiness factor, and that’s something we’re still working on. The other challenge was what we came to call ‘prompter’s block,’ where the engineers weren’t sure what to ask in the NiC chat, or they wouldn’t keep querying to get better results. So, we put out newsletters and did road shows to educate them on the tool and how to use it. It’s more about a larger cultural shift.”

One major takeaway from this process was that users wanted more integrated and one-click solutions for interacting with Network Copilot.

“Some of it might be contextual, where we’re able to integrate NiC on a specific tab or page or in a specific web application,” Green says. “In some cases, it could be in the form of a button they click that sends a pre-created prompt to the back end. It’s a more simplified approach, rather than just giving people a free-range chat interface where they can ask anything.”

A photo of Hughes.

“I can spend five minutes questioning [Network Copilot] like a human and get a response that includes specific data points from the actual databases. We get a huge amount of value from Network Copilot on a day-to-day basis.”

Brandon Hughes, senior service engineer, Microsoft Digital

The impact of Network Copilot

Today, Network Copilot is available to our company’s network professionals through an internal preview and is used by more than 200 network engineers. By surveying users, Tekkalakota has already been able to show that NiC has made a significant difference in terms of employee time and effort.

“We’ve found that NiC can cut the amount of time engineers take searching for documentation and insights by 20 to 25 minutes for each successful prompt,” she says. “It also drastically reduces documentation time and has cut live incidents down by 10%.”

This finding is backed up by employees such as Brandon Hughes, a senior service engineer who played an important role in developing Network Copilot.

“Being able to extract data through natural-language questions is a huge departure from having to manually write a Kusto query, which could take you a few hours to refine in order to get the exact output that you want,” Hughes says. “Whereas in NiC, I can spend five minutes questioning it like a human and get a response that includes specific data points from the actual databases. We get a huge amount of value from Network Copilot on a day-to-day basis.”

A photo of Kumar.

“We’re always working to reduce complexity, and agents take that a step further—decreasing complexity where it’s needed but allowing the full breadth of complexity when required. They’re enabling users to do things they normally wouldn’t be able to do.”

Abhishek Kumar, software engineer, Infrastructure and Engineering Services, Microsoft Digital

Hughes and others are also working on extending the capabilities of Network Copilot to handle tasks such as generating customer update emails, troubleshooting suggestions based on service ticket details, and postmortem report generation. They even hope to add the ability for NiC to analyze images of network environments and provide feedback and optimization suggestions.

Taking a wider view, agents like Network Copilot offer the ability to manage complexity and empower users to accomplish more, no matter their role.

“In general, these agents are going to make our lives easier,” says Abhishek Kumar, a software engineer who also assisted in the development of Network Copilot. “We’re always working to reduce complexity, and agents take that a step further—decreasing complexity where it’s needed but allowing the full breadth of complexity when required. They’re enabling users to do things they normally wouldn’t be able to do.”

Network Copilot and AI agents: The journey continues

Tekkalakota and Green know that, for as much as Network Copilot can do now, the team has only just scratched the surface of the full potential that AI agents have to change the way IT—and the world—works.

A photo of Green.

“It’s still early days for AI agents, and things are moving and changing extremely quickly…. The potential is great—we’re just seeing the tip of the iceberg.”

Joshua Green, principal software engineering manager, Infrastructure and Engineering Services, Microsoft Digital

“I think we’re one of the earlier efforts at Microsoft to build an AI agent, figuring out what skills it needs to have and then building them,” Tekkalakota says. “The next steps are to build on the agent capabilities that it already has, adding things like monitoring or predictive alerting. Then, eventually be able to connect to other agents; having a connected experience between Copilot agents is the uber goal.”

Green emphasizes that when it comes to AI, the pace of change is remarkable.

“It’s still early days for AI agents, and things are moving and changing extremely quickly,” Green says. “What we did with Network Copilot was kind of like building a foundation. Now we’re working on adding more capabilities. The potential is great—we’re just seeing the tip of the iceberg.”

Key takeaways

We learned some important lessons while developing Network Copilot that you can draw on when creating your own AI agent solutions, including:

  • The team found it most effective to slowly build a community of enthusiastic users, continually soliciting feedback and ideas for improvements from these early adopters.
  • Users expect an AI agent to “just work” with one prompt. Query debugging features (“Help me with this error”) and contextual prompts encourage users to engage in a conversation to generate the information they need.
  • Users want the AI agent to know everything that their team knows. The Network Copilot team continues to expand the tool’s knowledge base with additional troubleshooting documents, network config files, and data sources.
  • It’s helpful if the agent is accessible from the UI the users are already in, so the team is working on an embedded Network Copilot experience in their custom web apps that offers buttons for commonly used functions.
  • Frequently requested use cases for Network Copilot include network device deployment failure remediation, network health and inventory, troubleshooting, and log monitoring for anomalies.
  • Technology moves fast. The team built Network Copilot in a modularized way (using plug-ins and APIs) so that they could adjust to the latest AI capabilities as they were released.
  • Follow best practices for accessing data from external sources, ensuring that your data is secure and sensitive information isn’t exposed.
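
The modular plug-in approach described above can be sketched as a simple capability registry. This is an illustrative sketch only: names like `PluginRegistry` and `device_health` are hypothetical and not part of the actual Network Copilot implementation.

```python
# Hypothetical sketch of a plug-in registry pattern, not the actual
# Network Copilot code: each capability registers a handler behind a
# stable interface, so newer AI capabilities can be swapped in without
# rewriting the agent's core loop.
from typing import Callable, Dict


class PluginRegistry:
    """Maps capability names to handlers."""

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, handler: Callable[[str], str]) -> None:
        self._plugins[name] = handler

    def dispatch(self, name: str, query: str) -> str:
        if name not in self._plugins:
            raise KeyError(f"No plug-in registered for '{name}'")
        return self._plugins[name](query)


registry = PluginRegistry()
registry.register("device_health", lambda q: f"Checking device health for: {q}")

# Upgrading to a newer model or API means re-registering one handler,
# not rewriting the agent.
print(registry.dispatch("device_health", "router-042"))
```

The design choice here mirrors the takeaway: because each capability sits behind its own registration, the team can adopt a newly released AI feature by replacing a single handler.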

The post Keeping our network infrastructure healthy at Microsoft with an employee-built AI agent appeared first on Inside Track Blog.

]]>
Empowerment with good governance: How our citizen developers get the most out of the Microsoft Power Platform http://approjects.co.za/?big=insidetrack/blog/empowerment-with-good-governance-how-our-citizen-developers-get-the-most-out-of-the-microsoft-power-platform/ Thu, 16 Nov 2023 01:12:18 +0000 http://approjects.co.za/?big=insidetrack/blog/?p=12576 What if every employee, no matter their technical expertise or job description, had the power to use software development to create their own solutions? Imagine the kind of collective creativity that could arise if any of your employees could be citizen developers. That’s exactly the promise of citizen development through low-code/no-code solutions augmented by AI. […]

The post Empowerment with good governance: How our citizen developers get the most out of the Microsoft Power Platform appeared first on Inside Track Blog.

]]>
What if every employee, no matter their technical expertise or job description, had the power to use software development to create their own solutions? Imagine the kind of collective creativity that could arise if any of your employees could be citizen developers.

That’s exactly the promise of citizen development through low-code/no-code solutions augmented by AI. Throughout our organization, we’re empowering all kinds of employees—not just developers—to create their own business solutions and services using our citizen development toolkit, the Microsoft Power Platform.

Low-code/no-code puts development tools in the hands of people who aren’t technical developers or don’t have well-resourced software engineering teams.

—Lianne Zelsman, product manager, Power Platform governance

At Microsoft Digital, the company’s IT organization, we’re enabling citizen development internally by encouraging and enabling our employees to become citizen developers while also making sure we put guardrails in place to protect the company.

[Unpack how a revamped Microsoft business intelligence platform boosts data handling and builds trust. Discover powering decision making at Microsoft by analyzing data with Microsoft Power BI. Explore building a content management system at Microsoft with Microsoft Power Platform.]

The promise of citizen development

There are plenty of circumstances when someone needs a process, tool, or service to support their work but can’t access the formal engineering resources to create it. In those cases, it makes sense for our employees to build something for themselves. Historically, software developers or engineers would code their own solutions, while people without those skills were out of luck.

“Low-code/no-code puts development tools in the hands of people who aren’t technical developers or don’t have well-resourced software engineering teams,” says Lianne Zelsman, product manager in charge of Power Platform governance within Microsoft Digital. “For us, this means enabling our people in HR, Finance, and other teams to build solutions with the Power Platform. They can use it to do a whole range of things—from implementing automations to building their own apps—with very little ramp-up time or expertise.”

If you work in a large organization, you know that much of any employee’s day gets eaten up by mundane or menial tasks. Those are just the kinds of things that simple hand-made automations or apps can handle.

“We’ve seen a lot of Power Platform usage for project management teams that need to streamline their workflows or email communications,” says Bert Byerly, solution manager for Power Platform and Microsoft Fabric. “Even our developers who write code as their regular job are now using our low-code/no-code platform to spin things up quickly, like automations and alerts.”

Lianne Zelsman, Zohar Raz, David Johnson, and Bert Byerly are part of a cross-disciplinary team helping to unlock citizen development and ensure proper governance.

More about the Microsoft Power Platform

At Microsoft, we believe in empowering our teams with tools that make their lives easier and their work more innovative. The Microsoft Power Platform is our low-code/no-code development solution that helps everyday employees turn great ideas into impactful tools. And because the technical professionals within Microsoft Digital have put the platform through its paces internally as Customer Zero, we’ve been able to add all kinds of features and functionality that better serve our customers.

“Power Platform comes with around 1,100 out-of-the-box connectors,” says Zohar Raz, group product manager for Power Platform governance. “This gives you the ability to build custom connectors that you can use to link up with any data source on the planet.”

We’re infusing Microsoft 365 Copilot into the platform so you can use AI to convert your natural language queries into solutions.

—Zohar Raz, group product manager, Power Platform governance

It also gives you access to a business layer called Dataverse that makes it easy for you to create and run thousands of solutions on top of your data layer.

And we’re also adding AI into the mix.

“We’re infusing Microsoft 365 Copilot into the platform so you can use AI to convert your natural language queries into solutions,” Raz says.

Giving your employees all this new richness and power is great, but we also recognize that you’ll want to govern and guide this usage. In response, we’ve made a lot of investments to give customers that kind of flexibility.

“The product gives organizations a lot of visibility and control and empowers them to determine which connectors and functionality to enable where,” Raz says.

As part of our overall technology stack, Power Platform plays very well with other Microsoft tools and platforms. That means organizations that use the Microsoft ecosystem benefit even more.

“For me, the greatest value comes from integration because Microsoft has such a wide-ranging product suite,” Zelsman says. “Building Power BI reports off of Power Platform assets, pulling SharePoint information into your Outlook, or automating reminders in Teams is really simple.”

Understanding the risks of low-code/no-code development

Along with the benefits, opening the development process to non-technical professionals presents certain risks.

“Enablement can be a double-edged sword,” says David Johnson, tenant and compliance architect with Microsoft Digital. “You’re empowering employees to be successful, but at the same time, you’re effectively creating applications that get used for business purposes without IT oversight, without security oversight, without privacy oversight—just an employee putting something together on their own.”

In deeply connected environments that have the power to extend company data outwards, that can be a dangerous situation if we don’t properly control it.

“The risk is mostly around data leaks,” Raz says. “This kind of technology works really well for good actors, but it also opens up opportunities for bad actors to find nuggets they can use to hurt the company.”

Addressing the risks through technology and governance policy

Two factors help us limit the risks associated with citizen development at Microsoft: the technology of Power Platform itself and the governance efforts of our Microsoft Digital team.

As a well-integrated piece of Microsoft technology, Power Platform gives IT a lot of control at the platform level to govern what people do in their individual apps.

—David Johnson, tenant and compliance architect, Microsoft Digital

“What Power Platform has done well for us is give us the control to lock things down tight,” Byerly says. “And then, as we look at different features or connectors and their interactions, we can start to loosen things up and create policies so they’re safe to use.”

That control is part of the core functionality of Power Platform.

”As a well-integrated piece of Microsoft technology, Power Platform gives IT a lot of control at the platform level to govern what people do in their individual apps,” Johnson says.

As a result, Power Platform enables a robust compliance strategy. Building and deploying that strategy has been a collaborative effort between Microsoft Digital’s governance professionals, the Power Platform product team, and the Microsoft Data-Loss Prevention team.

Our overall governance strategy breaks down into three sets of activities: Protect, measure, and enforce. Within this strategy, we divide our efforts between the macro-level, which sets policies for the overall tenant, and the micro-level, where individual groups within Microsoft can apply governance policies that complement our all-up guardrails.

Our approach to citizen development governance hinges on a “Protect, measure, enforce” model that provides both guardrails and agency for our employees.

“We’re forever finding the right balance between empowerment and safety,” Zelsman says. “So a lot of what we do is risk-based, essentially giving everything an internal risk rating, and that’s going to generate the scope of the compliance requirements any employee-developed solution will have to go through.”

That means we have to break our governance efforts down into tiers where we apply policies to employee-created solutions based on their risk profile and then channel them through permission reviews. For example, simple connectors associated with Microsoft Teams or SharePoint that operate in the Microsoft Personal Productivity environment need no permissions before pushing to production. On the other hand, a Dataverse connector built in the Microsoft Pro Dev environment requires an employee to request permission to access that environment or to change their environment before going live.
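
The tiered, risk-based review model described above can be sketched in a few lines. This is a hypothetical illustration: the tier names, connector classifications, and thresholds below are invented for clarity and are not Microsoft's actual governance rules.

```python
# Hypothetical sketch of risk-based governance tiering, not Microsoft's
# actual policy engine: an environment's baseline risk plus the connector's
# data-exfiltration potential determine whether a permission review is
# required before a citizen-developed solution goes live.
RISK_TIERS = {
    "personal_productivity": {"risk": "low", "review_required": False},
    "pro_dev": {"risk": "high", "review_required": True},
}


def compliance_scope(environment: str, connector: str) -> dict:
    """Return the compliance requirements for a solution based on where
    it runs and what it connects to."""
    # Unknown environments default to the strictest treatment.
    tier = RISK_TIERS.get(environment, {"risk": "high", "review_required": True})
    # Connectors that can move data beyond the tenant escalate the rating.
    sensitive = connector in {"dataverse", "custom_http"}
    return {
        "risk": "high" if sensitive else tier["risk"],
        "review_required": tier["review_required"] or sensitive,
    }


# A Teams connector in a personal productivity environment needs no review,
# while a Dataverse connector in the pro-dev environment requires permission.
print(compliance_scope("personal_productivity", "teams"))
print(compliance_scope("pro_dev", "dataverse"))
```

The key design point is that the risk rating is computed, not hand-assigned per app, which is what lets the same guardrails scale across hundreds of thousands of employee-built solutions.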

And of course, you can’t govern what you can’t see, so our teams have set up a thorough oversight apparatus to support these efforts. There’s a comprehensive tenant inventory, a reporting suite, cost and utilization monitoring, and compliance telemetry.

These governance policies aren’t meant to hinder citizen development; they’re meant to help it move forward safely and quickly.

“Ultimately, good governance is employee empowerment with guardrails,” Johnson says.

Power Platform success stories

Microsoft Power Platform logos with their titles, including Power BI, Power Apps, Power Pages, Power Automate, and Power Virtual Agents.
The Microsoft Power Platform connects its customizable tools to Microsoft 365, Microsoft Dynamics 365, Microsoft Azure, and hundreds of other apps to help citizen developers build end-to-end business solutions.

Thanks to our enablement and governance activities, low-code/no-code development is spreading rapidly across Microsoft. Recently, we crossed the threshold of 1 million Power Platform citizen development assets within the internal ecosystem at Microsoft—and that number continues to rise. All told, our employees have built more than 18,000 environments, 170,000 Power Apps, 50,000 Power Automate flows, and 1,200 chatbots.

But the numbers aren’t the whole story. The variety and creativity our employees have developed continues to increase.

We’re definitely looking to push this technology further. There are so many different data sources for us to analyze and so many workflows we can support.

—Bert Byerly, solution manager, Power Platform and Microsoft Fabric

Here are a couple of examples of important experiences our citizen developers have built internally at Microsoft:

  • Cosmic: A revenue processing tool featuring data capture via optical character recognition (OCR), data validation through a business rules engine, and data entry using robotic process automation (RPA) that has yielded around $14.2 million annually in savings.
  • AV design standards: A Real Estate and Facilities team app that helps configure AV equipment across 16,000 Microsoft conference rooms worldwide by simplifying the equipment ordering process.

And we’re only just beginning this journey. With so many connectors and the portfolio expanding every day, the possibilities are endless.

”We’re definitely looking to push this technology further,” Byerly says. “There are so many different data sources for us to analyze and so many workflows we can support.”

As this journey unfolds, we’ll continue to see Microsoft employees flexing their creativity and innovation through accessible citizen development with Power Platform.

Key takeaways

Here are some tips for getting started with citizen development and the Power Platform at your company:

  • Start with simple wins like automating approval flows to build up your users’ confidence.
  • Establish some very secure baseline defaults to act as controls, then expand from there.
  • Understand that your lines of business will adopt this technology on their own, so it’s best to channel that adoption through guided enablement.
  • Take advantage of training material from Microsoft, especially the Microsoft Power Platform Center of Excellence Starter Kit.
  • Put thought into your environment and tenant architecture, key personas, and scenarios before adoption.
  • Identify the security needs and regulatory compliance that are specific to your organization and use built-in governance controls available for Dataverse for Teams and Personal Developer environments.
  • Don’t reinvent the wheel: Use the open APIs and connectors that Microsoft already offers.

Try it out

Try the Microsoft Power Platform at your company.

We'd like to hear from you!

Want more information? Email us and include a link to this story and we’ll get back to you.

Please share your feedback with us—take our survey and let us know what kind of content is most useful to you.

The post Empowerment with good governance: How our citizen developers get the most out of the Microsoft Power Platform appeared first on Inside Track Blog.

]]>