Extensibility Archives | Microsoft Copilot Blog
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/cs-topic/extensibility/

New and improved: Multi-agent orchestration, connected experiences, and faster prompt iteration
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/new-and-improved-multi-agent-orchestration-connected-experiences-and-faster-prompt-iteration/
Wed, 01 Apr 2026 16:00:00 +0000

Learn what's new in Copilot Studio: Multi-agent systems are now generally available, plus recent updates to the Prompt Editor and governance controls.

The post New and improved: Multi-agent orchestration, connected experiences, and faster prompt iteration appeared first on Microsoft Copilot Blog.

Microsoft Copilot Studio helps organizations move beyond isolated AI experiences and build connected systems of agents that can scale, adapt, and deliver real business value. Recent enhancements focus on making it easier for agents to work together across tools and data sources, while giving makers more control over how those agents behave in production.

What you’ll see this month: New generally available capabilities for multi-agent coordination across Microsoft Fabric, the Microsoft 365 Agents SDK, and open Agent-to-Agent (A2A) protocols—all of which help agents collaborate across your ecosystem and perform more valuable work. Plus, you’ll find updates to prompt authoring, model choice, and governance controls that can help make it faster to build and refine high-quality agent experiences with confidence.

Agents that work together across your entire ecosystem

The challenge in scaling AI inside an organization isn’t creating a useful agent. It’s getting many agents—across teams and tools—to work together reliably and repeatably.

In many organizations, data teams might build one kind of agent, app teams another, and productivity teams yet another. Each agent can be valuable on its own, but once a workflow needs knowledge from one system, reasoning from another, and action in a third—teams often run into brittle handoffs and custom integration work. This slows agent adoption and makes it harder to move from promising pilots to real business impact.

This month, Copilot Studio takes a meaningful step forward: several multi-agent capabilities are rolling out to general availability over the next few weeks, giving your teams new ways to connect and orchestrate agents across your ecosystem. These updates include Microsoft Fabric integration, Microsoft 365 Agents SDK orchestration, and Agent-to-Agent (A2A) communication—all designed to help your agents operate together as a coordinated system rather than in isolated silos.

Multi-agent support for Microsoft Fabric

With multi-agent support, your Copilot Studio agents can work with Fabric agents to reason over enterprise data and analytics at scale. That means you can connect business-facing agent experiences more directly to the data estate they already rely on, without treating every data-intensive scenario like a one-off engineering project. Instead of working with limited or disconnected data, these agents will be able to operate with full business context—helping make their outputs more accurate, relevant, and actionable.

Multi-agent support for the Microsoft 365 Agents SDK

Using the Microsoft 365 Agents SDK, teams can now orchestrate Copilot Studio agents alongside agents built for Microsoft 365 experiences. Instead of recreating the same logic across multiple agents (think retrieving data, applying business rules, or completing common tasks), you’ll be able to reuse and combine existing capabilities. This makes it easier to compose cross-app workflows from what’s already been built, reducing duplication and keeping experiences more efficient and consistent.

Agent-to-Agent (A2A) support

With A2A support, Copilot Studio agents can directly communicate with and delegate work to other agents—first-party, second-party, or third-party—using an open protocol designed for cross-platform interoperability. This matters because the future of enterprise AI will not belong to a single stack. Organizations need to build agents on platforms that can participate in a broader ecosystem, not just operate within one product boundary. Copilot Studio’s A2A support provides that interoperability.

The impact of multi-agent systems

We’ve already seen the power of this approach with the Ask Microsoft web agent, one of our early “customer zero” implementations. As site traffic and knowledge sources grew, the single-agent architecture began to strain, creating slower response times. Using Copilot Studio, the team upgraded the agent to a modern architecture with generative orchestration and multi-agent coordination.

Now, multiple sub-agents handle different parts of the site—Microsoft Azure, Microsoft 365, pricing, trials, and more—while the main agent orchestrates them to provide fast, coherent, multi-turn responses. This setup allows Ask Microsoft to answer complex questions involving multiple products or services, and to tailor responses based on where the customer is on the site.

“Building a more advanced assistant with Copilot Studio has meaningfully raised the bar for our customer experience and enabled us to scale faster across products to deliver real business impact,” says Alyse Muttera, Director of eCommerce Programs at Microsoft.

To show how this approach works in other organizations, consider a common scenario at a bank. The loan department has one agent handling mortgage applications, while the banking department runs a separate agent for account inquiries. A customer, however, expects a single seamless experience.

Multi-agent orchestration lets each specialized agent manage its area of expertise while coordinating responses behind the scenes. For instance, if a customer asks about a mortgage payment and their account balance in the same interaction, the system delivers a cohesive, context-aware answer that combines insights from both agents—no juggling multiple interfaces required.
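To make the bank scenario concrete, here is a minimal sketch of the orchestration idea: a main agent routes a question to each specialized sub-agent whose topic it mentions, then combines the answers. The agent names, routing rule, and canned responses are all illustrative assumptions, not Copilot Studio APIs.

```python
# Minimal sketch of multi-agent orchestration for the bank example.
# All agent names, responses, and the keyword routing rule are illustrative.

def mortgage_agent(question: str) -> str:
    # Stand-in for the loan department's specialized agent.
    return "Your next mortgage payment is due on the 1st."

def account_agent(question: str) -> str:
    # Stand-in for the banking department's account-inquiry agent.
    return "Your checking balance is $2,340."

SUB_AGENTS = {
    "mortgage": mortgage_agent,
    "account": account_agent,
}

def orchestrate(question: str) -> str:
    """Route the question to every sub-agent whose topic it mentions,
    then merge the answers into one cohesive response."""
    answers = [agent(question)
               for topic, agent in SUB_AGENTS.items()
               if topic in question.lower()]
    return " ".join(answers) if answers else "No matching agent found."
```

A single question spanning both topics gets one combined answer, which is the "no juggling multiple interfaces" effect described above.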

When specialized agents work together behind the scenes, customers can get a unified experience and employees can get time back.

That’s exactly the kind of impact Coca‑Cola Beverages Africa is realizing today by using Copilot Studio agents and Microsoft Dynamics 365 to autonomously run planning cycles and automate workflows end to end, saving planners 1 to 1.5 hours every day.

These features will be fully available to all eligible customers as of April 2026. Three capabilities, one outcome: agents that can operate more like a system and less like a collection of disconnected point solutions.

Build prompts faster while maintaining control

As agent experiences grow more sophisticated, the quality of the prompt an agent maker uses matters more. A great prompt yields noticeably more powerful results than a merely good one, and fine-tuning prompts is key to unlocking those results.

But in practice, prompt iteration has historically felt disjointed and slow. Makers previously had to leave their flow of work, jump into a separate editor, make a small change, test it, and then repeat the cycle. That friction can add up quickly, especially when teams are tuning prompts for specialized business scenarios.

The new immersive Prompt Builder, now generally available, helps reduce that friction by bringing prompt editing directly into each agent’s Tools tab. You can update instructions, switch models, add inputs or knowledge, and test changes—all in one place. Instead of breaking context every time you want to refine an agent’s behavior, you can iterate while staying grounded in the agent you’re building.

This matters most in real-world scenarios where prompt behavior is tied to domain knowledge and policy nuance. For example, a team building an agent to support clinical documentation might need to refine instructions, swap in a better knowledge source, and test outputs against terminology that is common in healthcare but more likely to trigger default safeguards. Doing that from one workspace can make iteration faster and help lower the effort required to get a production-ready result.

More options for prompts: Content moderation and model choice

Speaking of triggering default safeguards, Copilot Studio has also added content moderation settings for prompts, now generally available in supported regions. This gives makers more control over harmful content sensitivity on managed models, including turning down that sensitivity to help unblock legitimate scenarios in industries like healthcare, insurance, and law enforcement, where default settings may be overly restrictive for the content being processed.

For even more control over prompts, the Prompt Tool now supports Anthropic Claude Opus 4.6 and Claude Sonnet 4.5 in paid experimental preview in the United States. That gives makers more choice in matching the right model to the right prompt, rather than forcing every scenario into the same tradeoff profile. This feature is great for teams that want more flexibility in how they balance performance, reasoning depth, and cost.

All together, these improvements help teams move faster on prompt iteration while maintaining the control and flexibility required in production scenarios.

What else is new and improved in Copilot Studio

We have also recently released several additional updates across automation, meetings, retrieval quality, and model support.

  • ServiceNow and Azure DevOps connector quality improvements are now generally available. These help agents better understand operational questions, retrieve the right ticket or work item data, and return more complete, actionable answers automatically.
  • Evaluation automation APIs are now generally available through Microsoft Power Platform APIs and connectors. These APIs help make it easier to run evaluations programmatically and integrate quality checks into continuous integration and continuous delivery (CI/CD) workflows.
  • Agents for Microsoft Teams meetings can now access real-time meeting transcripts and group chat. This supports scenarios like answering questions during the meeting, surfacing relevant information, or helping track decisions and follow-ups as they happen.
  • Model context protocol (MCP) apps and Apps SDK support have expanded how agents connect to your external work apps, helping to make it easier to integrate business systems and enable agents to take action across your broader ecosystem—not just respond with information.
  • Additional model support, including Grok 4.1 Fast, GPT-5.3 Thinking, and GPT-5.4 Instant in paid experimental preview, gives makers more options as they tune experiences for speed, cost, and capability.
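The evaluation automation APIs in the list above are the kind of thing you would wire into a CI/CD quality gate. The sketch below shows only the gating logic, assuming eval scores have already been fetched programmatically; the metric names, score shapes, and regression threshold are all assumptions for illustration.

```python
# Toy CI/CD quality gate over agent eval scores.
# Assumes scores were already retrieved via the evaluation APIs;
# metric names and the 0.05 regression threshold are illustrative.

def gate(current_scores: dict, baseline_scores: dict, max_drop: float = 0.05) -> list:
    """Return the metrics that regressed by more than max_drop versus
    the baseline. An empty list means the pipeline may proceed."""
    return [metric for metric, base in baseline_scores.items()
            if current_scores.get(metric, 0.0) < base - max_drop]
```

A pipeline step could fail the build whenever the returned list is non-empty, applying the "consistent quality bar" idea before an agent reaches broader use.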

Overall, these updates reflect a continuing broader shift in Copilot Studio: moving from building individual AI experiences to building connected, governed systems that can fit more naturally into how work already happens. As you scale up your organization’s use of multi-agent ecosystems, these will help your teams reach further across channels and knowledge sources to more accurately fulfill your business needs.

Stay up to date on all things Copilot Studio

More is coming in April 2026 across voice channels, workflows, and the building experience. Check out all the updates as we ship them, as well as new features releasing in the next few months here: What’s new in Microsoft Copilot Studio.

To learn more about Microsoft Copilot Studio and how it can transform productivity within your organization, visit the Copilot Studio website or sign up for our free trial today.

Enable agents to bring apps into the flow of work—while keeping IT in control
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/enable-agents-to-bring-apps-into-the-flow-of-work-while-keeping-it-in-control/
Mon, 09 Mar 2026 13:00:00 +0000

Stop switching tabs: agents now let you act inside approved apps from chat in Copilot, with controls that help IT teams manage risk and usage.

A seller needs to log a new opportunity. A manager wants to approve a request. A marketer has to update a campaign asset. Until today, these actions often meant taking insights from Microsoft 365 Copilot and switching tabs. Agents can now change that: helping people take action in their go-to work apps, without needing to leave chat in Copilot.

But enabling this kind of capability raises real questions for IT: What risks do these agents introduce? Are they actually being used? And are they behaving as expected?

The more agents you launch and the more powerful these agents are, the more these answers matter. That’s why we’re introducing three new capabilities across Copilot and Microsoft Copilot Studio that help people move work forward faster—while keeping IT firmly in control:

  1. Enhanced agents that bring apps directly into chat in Copilot
  2. New ways for employees to find the right agent, fast
  3. Tools to continuously evaluate agent quality over time

With these capabilities, employees can use their go-to business apps directly in Copilot and get a simpler way to discover the right agents for their tasks. Meanwhile, IT gains objective signals that help validate agent behavior as usage expands. Here’s what you need to know.

Interacting with apps through chat in Copilot

Today, the gap between AI insight and in-app execution starts to close—without IT needing to relax standards or introduce new risk vectors.

When an employee prompts Copilot and calls an agent connected to an approved app, that agent can bring that app’s interactive experience directly into the conversation. From there, the employee stays in the driver’s seat, using chat in Copilot to take real, in‑app actions such as:

  • Scheduling a new event in Outlook
  • Adding a new sales opportunity to Dynamics 365 Sales
  • Creating or editing a flyer in Adobe Express
  • Completing an approval form via Microsoft Power Apps

All of this happens without needing to leave Copilot. Employees interact with the app directly in chat or use follow-up prompts to carry out work in the app.

Get started quickly with pre-built app experiences

This month, we’re launching support for a focused set of early experiences, including:

  • Microsoft apps, such as Outlook, Dynamics 365 Customer Service (public preview by early April), and Dynamics 365 Sales (public preview by early April)
  • Custom line-of-business apps built with Power Apps (public preview this March)

Take Outlook, for example. You can now tell Copilot who you want to meet with, and it’ll find time slots that work. Simply select one, and an agent will schedule the meeting for you. This experience is currently generally available (GA). Similarly, you can ask Copilot to draft an email on your behalf, edit it, and hit send—without leaving the chat (currently in Frontier).

We will also introduce in-chat experiences for a handful of Microsoft partner apps, including Adobe Express, Adobe Acrobat, Base44, Box, Canva, Coursera, Figma, Miro, Monday.com, Optimizely, and Wix. All pre-built partner app experiences will be available via the Microsoft 365 Agent Store by mid-April.

“With the Figma app in Copilot, you can turn conversations into AI-generated FigJam diagrams to take ideas further,” says Brendan O’Driscoll, Figma’s VP of Product. “By connecting Figma with your favorite tools, it’s easier than ever to visualize, iterate, and collaborate with your entire team.”

Build the app experiences your team needs

You’re not limited to the apps we ship out of the box. Your team can build agents in Copilot that work with the mission-critical apps that your systems, processes, and workflows depend on.

Under the hood, two open extensibility standards make this possible: MCP Apps and the OpenAI Apps SDK. Both give development teams a structured way to connect the apps your organization relies on to agents in Copilot—so those apps can surface interactive experiences directly in chat. Agents built with either standard use familiar development patterns, so your team can build and iterate without requiring a steep learning curve.

MCP Apps and Apps SDK will roll out to GA on web and desktop later this month, with mobile following this spring. Share the Apps SDK and MCP Apps technical documentation with your development team to get started.
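For teams new to MCP, it may help to see the wire shape involved: MCP runs over JSON-RPC 2.0, and a host invokes a server-side tool with the `tools/call` method. The sketch below builds such a payload with the standard library; the tool name and arguments are hypothetical, and a real integration would use an MCP SDK rather than hand-rolled JSON.

```python
import json

# Sketch of the JSON-RPC 2.0 payload behind an MCP tool call.
# The tool name and arguments are illustrative, not a real connector.

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

The "familiar development patterns" point above is visible here: an MCP tool call is ordinary JSON-RPC, so existing request/response tooling carries over.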

Get to know the IT controls

Even as agents become more powerful, we’ve designed this experience with governance in mind. Agents with interactive app experiences use the same governance and admin patterns you already trust for agents in Copilot, keeping IT control the top priority.

You decide which agents are available in your tenant, and who can use them—globally, per agent, or for specific departments. Each agent operates strictly within existing app permissions and identity boundaries, so you can enable richer experiences in Copilot without opening new, unmanaged entry points into your environment.

All agents can be monitored end‑to‑end using Agent 365—a unified control plane that gives IT a single place to see which agents are live, where they can act, and how they’re being used. With it, you can control how agents are provisioned and scoped before rolling out this new experience broadly. Learn how to provision your organization’s agents at scale.

Empowering employees to find the right agent fast

As agents in Microsoft 365 Copilot become more capable, employees need a reliable way to find the right agent for the task at hand. But when dozens of agents are available, employees shouldn’t have to know which one to use when. Agent Recommendations (generally available) surfaces the right agent at the right moment, directly in the flow of work.

When users prompt Microsoft 365 Copilot, the system analyzes their intent and suggests an agent that’s already installed and approved by IT. No special syntax or prompt engineering required.

These recommendations are assistive, meaning employees can choose to start a new conversation with the suggested agent or continue in their current chat. All the while, discoverability happens only within known, governed boundaries—helping avoid the introduction of new risks. This helps employees quickly find agents purpose-built for the scenario at hand, while IT maintains a consistent governance model as usage expands.

Holding agents to your organization’s standards

As organizations rely on more agents for more impactful work, quality and reliability stop being nice‑to‑haves—they’re essential. Small changes to prompts, models, or data can introduce drift that can be hard to detect, especially as agent usage expands across teams and scenarios.

Agent Evaluations in Microsoft Copilot Studio (currently in public preview) gives you a structured way to answer the question: Is this agent actually doing what it’s supposed to do?

Evals work by running agents against authentic questions and scenarios, then generating objective scores for accuracy and intent alignment—so quality isn’t just assumed; it’s measured. By comparing results over time, teams can help catch regressions earlier, validate improvements, and apply a consistent quality bar before agents reach broader use.
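As a rough intuition for what "objective scores" means, here is a toy stand-in for an accuracy metric: the fraction of test cases whose answer contains the expected content. Real Agent Evals scoring (accuracy, intent alignment) is richer than this sketch, and the result shape below is an assumption for illustration.

```python
# Toy accuracy score over eval runs; the real Agent Evals scoring is richer.
# Each result pairs an expected answer with what the agent actually said.

def score_accuracy(results: list) -> float:
    """Fraction of test cases where the agent's answer contains the
    expected answer (case-insensitive substring match)."""
    hits = sum(1 for r in results
               if r["expected"].lower() in r["answer"].lower())
    return hits / len(results)
```

Comparing this number across runs over time is the regression-catching idea described above: a drop signals drift worth investigating before broader rollout.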

These signals reinforce that agents aren’t set‑and‑forget automation; they’re managed enterprise workloads. With objective evidence in hand, IT and makers can make informed rollout decisions and scale agent usage more confidently, knowing behavior is monitored, and reliability can be improved as usage grows.

Learn how to set up Agent Evals in Microsoft Copilot Studio, so you can assess agent quality and readiness before expanding usage.

Make agents more capable while staying in control

Support for apps in agents, Agent Recommendations, and Agent Evals are designed to work together as a system, helping organizations move faster—without compromising trust. By treating agents as first‑class, governed workloads, IT teams can enable more capable agents while maintaining the control their organizations expect.

To get started:

  • Learn how dev teams build with Apps SDK and MCP Apps
  • Control agents from end-to-end with Agent 365
  • Discover how to configure Agent Evals

Computer-using agents now deliver more secure UI automation at scale
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/computer-using-agents-now-deliver-more-secure-ui-automation-at-scale/
Tue, 24 Feb 2026 17:00:00 +0000

See how new updates to computer‑using agents improve UI automation with secure credentials, detailed monitoring, and scalable Cloud PC capacity.

When we first introduced computer-using agents (CUAs)—AI systems that can see, understand, and act across web and desktop apps—we showed what was possible: AI that works across applications, just like a person would. Early adopters quickly put CUAs to work automating brittle processes, navigating legacy systems, and stitching together workflows where APIs don’t exist.

Then, customers like you pushed us further.

You told us where agents didn’t scale, where authentication slowed runs, and where it was hard to understand why something failed—or to prove it behaved correctly. You also told us where your organization needed more control, visibility, and flexibility before rolling out computer‑using agents at scale.

Today’s updates are a direct response to that feedback.

Computer‑using agents in Microsoft Copilot Studio now offer more model choice, stronger security and governance, and easier scale—so you can automate more of your work across web and desktop apps with confidence.

Here’s what’s new with computer use—and why it matters.

Choose the right model to navigate dynamic interfaces

Computer-using agents now support multiple foundation models, including Anthropic’s Claude Sonnet 4.5 alongside OpenAI’s Computer-Using Agent. This gives you the flexibility to choose the best fit for each agent, based on the interface and the task.

  • Use OpenAI Computer-Using Agent to orchestrate multi‑step web and desktop flows.
  • Opt for Anthropic Claude Sonnet 4.5 when you need high-performance reasoning on dynamic user interfaces (UIs) and interpretation of dense, changing dashboards.

Secure authentication with built-in credentials and Azure Key Vault

Authentication shouldn’t be the reason automations stall. Computer use now offers built‑in credentials so agents can:

  • Securely perform website and desktop app logins.
  • Reuse stored credentials across multiple agents and automations.
  • Eliminate manual login prompts during runs, enabling unattended execution.

For example, if an agent needs to log into a vendor portal and update a desktop ERP every night, built-in credentials now let the agent authenticate to both the web portal and the desktop app automatically. This removes manual interruptions and makes overnight processing dependable while maintaining governance controls. No need to babysit “unattended” runs.

You can choose between two storage options aligned to your governance needs: internal storage (encrypted in Microsoft Power Platform) for low-friction setup, or Azure Key Vault for enterprise-grade secret management.

Credentials are encrypted and are never exposed to the AI model, so only authorized agents can access them. This way, your security and compliance team can feel confident scaling CUAs to more scenarios.
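The "never exposed to the AI model" guarantee can be pictured as a reference-passing pattern: the model-visible plan carries only an opaque credential name, and the runtime resolves it outside the model's context at execution time. The sketch below is a simplification under that assumption; the in-memory store stands in for Azure Key Vault or Power Platform storage, and all names are hypothetical.

```python
import json

# Sketch of the "credentials never reach the model" pattern.
# SECRET_STORE stands in for Azure Key Vault / Power Platform storage.

SECRET_STORE = {"vendor-portal-login": "s3cret-password"}

def plan_step(action: str) -> dict:
    # Model-visible plan: carries only an opaque credential reference,
    # never the secret itself.
    return {"action": action, "credential_ref": "vendor-portal-login"}

def execute_step(step: dict) -> str:
    # Runtime-side execution: the secret is fetched here, outside
    # anything serialized into the model's context.
    secret = SECRET_STORE[step["credential_ref"]]
    return f"{step['action']}: authenticated" if secret else f"{step['action']}: failed"
```

Serializing the plan shows the point directly: the secret value never appears in what the model sees.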

See every computer-using agent action with session replay and audit logs

As agents touch more business‑critical systems, teams need to know what happened, why it happened, and where.

Computer use now has advanced monitoring and richer observability, so operations, security, and compliance teams can inspect behavior step‑by‑step. This includes:

  • Session replay with screenshots.
  • Step‑by‑step action logs with action types, coordinates, timestamps, and context.
  • Run summaries, including instruction text, duration, action counts, average time per action, and human escalation counts.
  • Resource tracking, including websites, desktop apps, and credentials used.
  • Export options for offline review.

But what does this look like in practice? Imagine an agent run produces an unexpected update, and your team can’t tell whether the agent misread the UI, clicked the wrong control, or encountered a hidden pop‑up.

Session replay and action logs now show exactly what the agent saw and did, pinpoint the step where the UI changed, and produce an exportable record for audit review. That way, you can fix issues faster and retain a defensible compliance trail.
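A small sketch of how a team might triage such a run from an exported log: scan the step-by-step actions for the first one that did not complete as expected. The log schema below (field names, the `ok` flag) is an assumption for illustration; the real export format may differ.

```python
# Illustrative shape of an exported step-by-step action log.
# The field names and the boolean "ok" flag are assumptions, not
# the actual export schema.

actions = [
    {"step": 1, "type": "click", "target": "Login button", "ok": True},
    {"step": 2, "type": "type", "target": "Invoice ID field", "ok": True},
    {"step": 3, "type": "click", "target": "Submit", "ok": False},  # hidden pop-up
]

def first_failed_step(log: list):
    """Return the first action that did not complete as expected, or None."""
    return next((a for a in log if not a["ok"]), None)
```

Pairing the pinpointed step with its session-replay screenshot is what turns "something went wrong" into "the UI changed at step 3."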

Beyond the monitoring pane, compliance is further strengthened through:

  • Microsoft Purview integration, sending audit logs to Purview.
  • Dataverse logging with configurable verbosity—choose All data, Data without screenshots, or Minimal.
  • Retention options from 7 days to indefinite, to match regulatory and governance requirements.

Simplify infrastructure with managed Cloud PCs for computer-using agents

Scaling UI automation shouldn’t require managing fleets of desktops or fragile virtual machines. The new Cloud PC pool, powered by Windows 365 for Agents, provides fully managed cloud‑hosted machines that are Microsoft Entra joined and Intune enrolled, designed for computer use runs and built to scale with demand.

In other words, these Cloud PC pools provide managed capacity for high-volume runs when demand spikes—without the overhead of keeping dedicated hardware patched, available, and idle the rest of the time. This way, your team can handle spikes without over-provisioning hardware.

Note: For evaluation, you can create up to two Cloud PC pools per tenant with 50 hours of free usage for published autonomous agents—making it easier to pilot CUAs at scale before broader rollout.

Extend—don’t replace—your automation

If you’ve built automations with Microsoft Power Automate and RPA, computer use expands what you can automate—especially when:

  • Interfaces change frequently
  • APIs aren’t available
  • Decision logic becomes more complex

Thankfully, you can keep classic RPA for deterministic scenarios with stable interfaces. CUAs then add flexibility and adaptive reasoning where RPA falls short (such as dynamic web apps, shifting layouts, or complex decisioning). After all, the goal isn’t to start over—it’s to modernize and extend what you already have.

For example, say you have an RPA bot that depends on fixed selectors. Historically, it broke each time a web form changed, forcing constant script updates.

Now, the RPA stays the same, while a CUA handles the variable UI portions—navigating changing layouts, interpreting dialogs, and escalating edge cases. The result? Reduced maintenance and improved reliability.
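The division of labor described above can be summarized as a routing rule. The sketch below is a toy decision function, not product logic: prefer an API integration when one exists, keep classic RPA for stable deterministic interfaces, and hand dynamic UIs to a computer-using agent.

```python
# Toy routing rule for the hybrid automation approach described above.
# The decision criteria are a simplification for illustration.

def choose_automation(api_available: bool, ui_is_stable: bool) -> str:
    """Pick an automation style: API first, classic RPA for stable
    deterministic UIs, and a computer-using agent (CUA) otherwise."""
    if api_available:
        return "api"
    if ui_is_stable:
        return "rpa"
    return "cua"
```

The point of the rule is the "extend, don't replace" framing: existing RPA keeps its lane, and the CUA only takes the variable portions.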

Get started and help shape what comes next

Ready to try computer‑using agents in a US‑based Copilot Studio environment?

  1. Create or open an agent in Microsoft Copilot Studio.
  2. Go to Tools → Add tool → New tool and select computer use.
  3. Describe the task you want the agent to perform in natural language.
  4. (Optional) Choose a model, configure built‑in credentials, and set up a Cloud PC pool for secure, scalable runs.

For deeper guidance, configuration details, and best practices, see the computer use documentation.

Before you go: We’re actively investing in advanced governance, operations, and scale for CUAs—and customer feedback directly informs the roadmap. Tell us what you think of the latest CUA updates today.

More choice, more flexibility: xAI Grok 4.1 Fast now available in Microsoft Copilot Studio
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/more-choice-more-flexibility-xai-grok-4-1-fast-now-available-in-microsoft-copilot-studio/
Thu, 19 Feb 2026 17:30:00 +0000

xAI models are now available in Copilot Studio, expanding your multi‑model lineup with a new option for fast reasoning and flexible agent design.

Starting today, xAI joins Microsoft Copilot Studio’s growing model provider lineup. Once enabled by organization administrators, United States-based makers can build with Grok 4.1 Fast and tap into deeper model choice, with readiness evaluations underway for other regions.

Grok 4.1 Fast is a fast‑reasoning, text‑generation model (generation of images and other media types is not supported) designed for large context windows and deep tool use, and built to handle complex workflows. This addition reflects our ongoing commitment to give you more flexibility when designing and optimizing agents—so you can choose the right model for every business scenario.

Expanding our model line-up

Copilot Studio aims to give makers the ability to evaluate and use the model best suited to transform their business. With the addition of xAI Grok 4.1 Fast, we’re building on that commitment.

Alongside OpenAI and Anthropic models, xAI adds even more depth to your multi‑model lineup—while still keeping responsible AI principles at the center. Before rollout, every model in your Copilot Studio lineup goes through security, safety, and quality evaluations.

When using Grok 4.1 Fast in Copilot Studio, customer data is not retained or used to train xAI’s models. xAI’s models are hosted outside Microsoft-managed environments, and when you use Grok 4.1 Fast in Copilot Studio, your relationship with xAI will be independent of Microsoft and governed by xAI’s Enterprise Terms of Service and Data Protection Addendum.

Unlocking the power of model choice

Starting today, Grok 4.1 Fast is available in preview in early access environments, and is off by default. Your organization’s admin must explicitly opt in to use the model before US-based makers can build with it.

If an admin doesn’t opt in, nothing changes and makers keep their current model options. Existing agents continue running exactly as they do today.

Learn more about admin opt-in controls.

6 core capabilities to scale agent adoption in 2026
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/6-core-capabilities-to-scale-agent-adoption-in-2026/
Mon, 26 Jan 2026 17:00:00 +0000

Learn six core capabilities organizations need to support agent adoption at scale in 2026, from governance and security to empowerment and operations.

Before 2025, most AI agents were still experimental: narrow in scope, manually triggered, and siloed to individuals or teams. Over the past 12 months, that’s changed dramatically. Organizations have moved from exploring AI to expecting measurable impact from their agents.

This shift marks the moment AI moved from helping people do work faster to helping organizations optimize their workflows.

Microsoft Copilot Studio has played a central role in this transition. It gives you more flexibility to evaluate and use the models best suited to your business as agent adoption scales.

In 2025, we laid the groundwork for what scalable, impactful agentic work should look like. In 2026, we believe the organizations that benefit most will be the ones that build on that foundation. These six trends define what organizations need to make agent adoption stick in 2026 and beyond:

  1. Ability for anyone to turn intent into agents
  2. Agents that can own workflows from end to end
  3. Power to coordinate agents for real outcomes
  4. Flexibility to control your agent models
  5. Agents that can act across your systems
  6. Capability to scale agents without sacrificing control

Organizations that have all six aren’t just experimenting with agents. They’re operationalizing them, turning curiosity into confidence and innovation into sustained business value.

1. Ability for anyone to turn intent into agents

Historically, building an agent meant translating business intent into technical instructions. This process slowed adoption and limited who could participate. In 2025, that barrier fell away. Conversation became the agent-making interface in both Copilot Studio and the Agent Builder in Microsoft 365 Copilot Chat. Now, people can describe what they want done using natural language and create an agent to do it. These agents can interpret intent, context, and goals thanks to their underlying model and knowledge, not specially built code.

That shift is designed to empower everyone on your team to build agents. Sales leaders, operations managers, and human resources (HR) professionals no longer need to wait for technical assistance to automate everyday work. Meanwhile, IT teams retain clarity and structure under the hood, with agents grounded in logic that can be reviewed, refined, and governed—all in Copilot Studio.

The result? Faster agent creation, broader participation, and fewer translation gaps between business needs and technical execution.

For example, a sales operations manager can now describe and publish an agent that:

  • Monitors pipeline changes, such as changed estimated close dates.
  • Flags deals that may be at risk, based on predefined criteria (e.g., no activity with stakeholders for over a month).
  • Notifies account owners with recommended next steps based on the type of flag.
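The flagging rules above can be expressed in a few lines of code. A minimal Python sketch, assuming a hypothetical deal record with `last_activity`, `close_date`, and `original_close_date` fields (not a real Copilot Studio API):

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=30)  # "no activity with stakeholders for over a month"

def flag_deal(deal: dict, today: date) -> list[str]:
    """Return the flags raised for one pipeline record (hypothetical schema)."""
    flags = []
    if today - deal["last_activity"] > STALE_AFTER:
        flags.append("stale: no stakeholder activity for over a month")
    if deal["close_date"] != deal["original_close_date"]:
        flags.append("slipped: estimated close date changed")
    return flags

# Each flag type maps to a recommended next step for the account owner.
NEXT_STEPS = {
    "stale": "Schedule a check-in with the stakeholder.",
    "slipped": "Confirm the new timeline with the account owner.",
}
```

In Copilot Studio, a maker would describe these criteria in natural language rather than write them out; the sketch just makes the underlying logic concrete.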

The payoff: More people can build knowledgeable, context-aware, and helpful agents, which can translate to less bottlenecking on centralized teams and faster time to value.

2. Agents that can own workflows from end to end

For many teams, early adoption wins came from AI assistance: drafting content, summarizing meetings, answering questions. Useful, but incremental. In 2025, agents crossed an important threshold; they evolved from helping with work to handling it on your behalf. With agent flows and the Workflows Agent, agents can now own repeatable processes from end to end, automatically advancing work when required.

In other words, agents unlock new opportunities to streamline and scale how work gets done. An onboarding process no longer stalls due to a missed handoff. A request doesn’t linger in a queue waiting for manual follow-up. Agents move work along reliably with automated approvals, escalating to humans only when judgment is required. For leaders, that can mean faster cycle times and fewer hidden bottlenecks. For teams, it can translate to more time spent on decisions—not coordination.

For example, a company could use Copilot Studio to automate a multi-step process for expense submission, validation, and reimbursement. The process:

  • Triggers when an employee submits a wellness or reimbursement request.
  • Guides the employee through required forms and documentation in a single, user-friendly flow.
  • Validates submissions against global wellness policy rules and regional guidelines.
  • Routes requests across the appropriate software as a service (SaaS) tools and internal HR systems.
  • Escalates exceptions to a human only when needed.
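The validation-and-routing step above can be sketched as a simple rule check. The policy limits, field names, and routing labels here are invented for illustration:

```python
# Hypothetical policy values for a wellness reimbursement program.
GLOBAL_LIMIT = 500                       # global cap in USD (assumed)
REGIONAL_LIMITS = {"EU": 400, "US": 500}  # regional guidelines (assumed)

def route_request(request: dict) -> str:
    """Decide where a submission goes next; escalate to a human only on exceptions."""
    limit = min(GLOBAL_LIMIT, REGIONAL_LIMITS.get(request["region"], GLOBAL_LIMIT))
    if not request.get("receipt"):
        return "escalate: missing documentation"
    if request["amount"] > limit:
        return "escalate: over policy limit"
    return "auto-approve: send to HR system for reimbursement"
```

The real agent flow would call connectors for the SaaS tools and HR systems involved; the sketch only captures the branching logic.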

The payoff: Faster resolutions using consistent criteria, less potential for human error, and a daily pain point made smoother with an agent.

3. Power to coordinate agents for real outcomes

Often, meaningful business outcomes don’t happen in a single step or system. As soon as agents move beyond simple tasks, coordination becomes increasingly challenging. Multi-agent systems addressed this complexity head-on in 2025, allowing agents to specialize, delegate, and collaborate toward shared goals.

Instead of designing one agent to handle every step, organizations can now compose agents that mirror how teams already work. One agent might monitor signals, while another gathers or validates information, and a third prepares recommendations or takes action.

Together, these agents are designed to deliver outcomes that would be difficult for any single agent to manage alone. More importantly, they remove a layer of decision-making from the stakeholder. Instead of figuring out which system or agent holds the right answer, you can simply ask your question and let the agentic system coordinate the rest. Complex workflows become easier to reason about, evolve, and scale—without adding mental overhead for the people involved.

For example, a manufacturing company might use:

  • One agent grounded in internal policy and safety documentation.
  • Another agent trained on equipment manuals and training materials.
  • A third agent connected to supplier-provided expertise.
  • A coordinating agent that evaluates each question and routes it to the right source automatically.
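In spirit, the coordinating agent is a router. The keyword scoring below is a deliberately simple stand-in for the model-driven routing Copilot Studio performs; all agent names and keywords are hypothetical:

```python
# Each specialist agent is described by the topics it covers (illustrative only).
SPECIALISTS = {
    "policy_agent": ["policy", "safety", "compliance"],
    "equipment_agent": ["manual", "calibration", "machine"],
    "supplier_agent": ["supplier", "vendor", "warranty"],
}

def route_question(question: str) -> str:
    """Pick the specialist whose topics best match the question."""
    text = question.lower()
    scores = {
        agent: sum(word in text for word in keywords)
        for agent, keywords in SPECIALISTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "policy_agent"  # safe default
```

A real multi-agent system routes on meaning rather than keywords, but the shape is the same: one coordinator, several specialists, and a default when nothing matches.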

The payoff: More clarity around which system or agent to use—just ask, and the right expertise can come together behind the scenes. This can help keep complex work cohesive, not cobbled together.

4. Flexibility to control your agent models

As agents moved into real business workflows, one reality became clear: not every task has the same requirements or permissions. Some scenarios call for deeper reasoning. Others prioritize repeatability and efficiency at scale. Still others must meet strict regulatory, security, or data residency standards.

In 2025, Copilot Studio expanded model choice to meet those needs. It now supports Anthropic models, chat and reasoning-specific models, access to thousands of models through Microsoft Foundry, and bring-your-own-model options. You can select the right model for each workload while IT teams maintain policy alignment and oversight. This gives your organization flexibility in how agents behave and perform, without fragmenting the experience.

For example, an organization in a regulated field might use:

  • One model optimized for policy interpretation and complex reasoning.
  • Another tuned for cost efficiency in high-volume, repeatable requests.
  • Central governance to ensure each model is applied appropriately.
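One way to picture central governance of model choice is a policy table that makers read from and admins control. The model names and workload labels below are illustrative, not real model IDs:

```python
# Hypothetical per-workload model policy, maintained centrally by IT.
MODEL_POLICY = {
    "policy_interpretation": "deep-reasoning-model",  # complex reasoning
    "faq_triage": "fast-chat-model",                  # high-volume, repeatable
}
ALLOWED_MODELS = {"deep-reasoning-model", "fast-chat-model"}  # approved by governance

def model_for(workload: str) -> str:
    """Resolve the model for a workload, enforcing the approved list."""
    choice = MODEL_POLICY[workload]
    if choice not in ALLOWED_MODELS:
        raise PermissionError(f"{choice} is not approved for this tenant")
    return choice
```

In Copilot Studio this split is handled by admin controls rather than a lookup table, but the idea is the same: makers pick per workload, governance bounds the choices.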

The payoff: Instead of compromising between performance and compliance, agents can be configured to match the realities of the work they support—and evolve as those requirements change.

5. Agents that can act across your systems

For years, AI has been good at suggesting what people should do, but it hasn’t been equipped to help make it happen. In 2025, capabilities like Model Context Protocol (MCP) and computer use began to close that gap. Agents can now connect to systems, navigate interfaces, and take action across tools—not just give recommendations.

This addresses one of the biggest gaps in early AI adoption by reducing the handoffs that drastically slow work. When agents can act across environments to update records, trigger workflows, and interact with real systems (like clicking around a website and filling out form fields), work moves forward automatically, at any time of day. This can help reduce delays, manual errors, and the risk that important follow-ups get lost between tools or teams.

For example, an operations agent could autonomously:

  • Identify a supply issue based on predefined signals.
  • Update the system of record with the latest status.
  • Fill out and file a ticket to initiate remediation.
  • Notify relevant stakeholders with context and next steps.
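The four steps above can be sketched as one handler, with every external system replaced by an in-memory stand-in:

```python
def handle_supply_issue(signal: dict, system_of_record: dict,
                        tickets: list, notify) -> None:
    """Run the four steps in order (all systems here are stand-ins)."""
    if signal["type"] != "supply_issue":   # 1. identify by predefined signal
        return
    system_of_record[signal["item"]] = "at risk"          # 2. update status
    tickets.append({"item": signal["item"], "action": "remediate"})  # 3. file ticket
    notify(f"Supply issue on {signal['item']}: ticket filed, status updated.")  # 4. notify
```

In practice, each step would go through a connector, an MCP tool, or computer use; the point is that the agent carries the work across all of them without a human handoff.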

The payoff: Faster response times, fewer handoffs, and agents that operate across real-world systems, not just chat windows.

6. Capability to scale agents without sacrificing control

Widespread agent adoption raises a familiar concern: How do you prevent innovation from outpacing governance? Leaders want to move quickly, but not at the expense of visibility, security, or cost control. In 2025, Copilot Studio addressed that gap by bringing lifecycle management, agent evaluations, and enterprise controls directly into the agent experience.

Organizations can now understand which agents are in use, how they’re performing, and what they cost across environments. Admin controls are designed to align agent behavior with intended use, while agent evaluations support ongoing quality and improvement. Paired with Microsoft Agent 365, organizations get a unified view of agents across Microsoft 365 Copilot and Copilot Studio, giving business and IT leaders the clarity needed to scale with confidence.

For example, IT leaders can:

  • See which agents are used, by whom, and at what cost.
  • Evaluate agent quality and performance over time.
  • Communicate performance insights to business leaders to help increase buy-in, investment, and adoption.
  • Apply consistent governance without slowing innovation.

The payoff: Agents can move from pilots to production faster, with fewer surprises and clearer business impact.

How to turn agentic momentum into results

The question for 2026 isn’t whether agents will be used—it’s how deliberately they’ll be put to work. Over the past year, the foundations for scalable agent adoption came together. The opportunity now is to move from experimentation to widespread execution.

We believe organizations that’ll get the most value in the year ahead will do three things consistently:

  1. Broaden who builds by empowering business teams to create and refine agents in partnership with IT teams, who provide guardrails without stifling creativity.
  2. Standardize how agents are shared and reused, so successful patterns move beyond individual productivity into team and enterprise workflows.
  3. Measure what matters as a matter of course, using visibility into usage, quality, and cost to guide where agents are expanded, improved, or retired.

When business and IT teams operate from the same foundation, agents stop being side projects and start becoming part of how work happens. That’s how teams move faster, reduce rework, and work together with AI and automation to create true business transformation.

Where to start—and how to go further

Your best agentic year isn’t defined by how many agents you build, but by how many people rely on them to get work done. Copilot Studio gives you the foundation to do exactly that. Now, 2026 is about building out, driving adoption, and scaling up.

Try this three-step plan for building and scaling your agent strategy with Copilot Studio:

  1. Get quick wins. Start by focusing on business-to-employee (B2E) assistive agents. Try downloading the Employee Self-Service Agent from the Agent Store.
  2. Create a Center of Excellence (COE). Set up a central team that can help triage cross-team needs and get the broader organization comfortable with agents. This team could include a representative from every department or be made up of agent champions, regardless of where they sit in the org. A great COE can help reduce geographic silos and bring consistency to an AI strategy.
  3. Measure and reward adoption. What gets measured gets focus and investment. Compare the situation today with the situation post-agent adoption. Did the agent provide value? Has it improved what you set out to change? Prove the progress, and then you can move on to the next process.

Get started today and turn agent curiosity into capability, confidence, and commitment this year.

The post 6 core capabilities to scale agent adoption in 2026 appeared first on Microsoft Copilot Blog.

]]>
Copilot Studio extension for Visual Studio Code Is now generally available http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/copilot-studio-extension-for-visual-studio-code-is-now-generally-available/ Wed, 14 Jan 2026 16:00:00 +0000 The Microsoft Copilot Studio extension for Visual Studio Code is generally available, so you can build and manage Copilot Studio agents from the IDE you already use.

The post Copilot Studio extension for Visual Studio Code Is now generally available appeared first on Microsoft Copilot Blog.

]]>
If you build agents with Copilot Studio, you already know the fastest way to iterate is to treat your agent like software: version it, review changes, and promote it through environments with confidence. Today, the Microsoft Copilot Studio extension for Visual Studio Code is generally available, so you can build and manage Copilot Studio agents from the IDE you already use.

What you can do with the Copilot Studio extension for Visual Studio Code

As agents grow beyond a few topics and prompts, teams need the same development hygiene they use for apps: source control, pull requests, change history, and repeatable deployments. The VS Code extension brings that workflow to Copilot Studio so makers and developers can collaborate without losing governance or velocity.

The extension supports a simple loop that fits naturally into your SDLC:

1) Clone an agent to your local workspace

Pull the full agent definition from Copilot Studio into a folder on your machine, so you can work locally with the full context of your agent.

2) Edit confidently in VS Code

Make changes to your agent components (topics, tools, triggers, settings, knowledge references) using a structured agent definition format and your existing VS Code workflow. The extension also provides IDE help like syntax highlighting and IntelliSense-style completion so edits are faster and less error-prone.

3) Review changes before they land

Preview what changed, compare cloud vs local, and resolve conflicts before you apply updates. This helps teams avoid overwriting each other’s work and makes collaboration practical at scale.

4) Apply changes back to Copilot Studio

Sync your updates to the cloud to test behavior and create evals as part of your normal iteration loop.

5) Deploy with the processes your team already uses

Use standard Git workflows and integrate agent definitions into automated deployment processes. This is the missing piece for teams that want agents to move through environments with the same rigor as code.

Built for development teams

The extension is designed for the way engineering teams actually work:

  • Standard Git integration for versioning and collaboration
  • Pull request-based reviews so changes are discussed and approved
  • Auditability over time, with a clear history of modifications
  • VS Code ergonomics: keyboard shortcuts, search, navigation, and a local dev loop

This extension is especially helpful if you:

  • Manage complex agents with many topics and tools and need fast search and navigation
  • Collaborate with multiple people and need PR workflows for safe changes
  • Want agent definitions in source control and environment sync through DevOps pipelines
  • Prefer building with your IDE plus an AI assistant for faster iteration

Develop Copilot Studio Agents using GitHub Copilot  

The Copilot Studio extension for Visual Studio Code lets you build and refine your Copilot Studio agent with AI help in the same place you write code. Use GitHub Copilot, Claude Code, or any VS Code AI assistant to draft new topics, update tools, and quickly fix issues in your agent definition, then sync changes back to Copilot Studio to test and iterate. The result is a faster inner loop with fewer context switches and a workflow that fits how development teams already work.

Get started

  1. Install the extension from the Visual Studio Marketplace
  2. Clone your first agent from Copilot Studio
  3. Make a small change locally
  4. Use Apply Changes to sync back to Copilot Studio and test

Learn more and share feedback

We built this extension so agent development can feel like the way software teams already work: in your editor, with source control, and with AI help when you want it. Try it in your next agent update and let us know what you want to see next!

The post Copilot Studio extension for Visual Studio Code Is now generally available appeared first on Microsoft Copilot Blog.

]]>
Anthropic joins the multi-model lineup in Microsoft Copilot Studio http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/anthropic-joins-the-multi-model-lineup-in-microsoft-copilot-studio/ Wed, 24 Sep 2025 15:00:00 +0000 Anthropic models are now available in Copilot Studio, providing more flexibility to design smarter agents, speed up workflows, and improve outcomes.

The post Anthropic joins the multi-model lineup in Microsoft Copilot Studio appeared first on Microsoft Copilot Blog.

]]>
Starting today, Anthropic models are rolling out alongside OpenAI models in Microsoft Copilot Studio. With the choice of Anthropic and OpenAI models for orchestration, chat, and deep reasoning scenarios in Copilot Studio, you have greater flexibility in how you design and optimize agents and workflows to transform business processes.

Anthropic models are available to customers in early release cycle environments worldwide today. They will roll out to preview in all environments within two weeks, and be ready for use in production by the end of the year.

Unlocking the power of choice

Copilot Studio will continue to use OpenAI as the default model for new agents, and now you also have the flexibility to choose from Anthropic models: Claude Sonnet 4 and Claude Opus 4.1. You can use these models in Copilot Studio in two ways:

  • Orchestration: Build, orchestrate, and manage agents powered by Anthropic models for advanced reasoning, workflow automation, flexible agentic tasks, and tool use.
  • Prompt builder: The prompt builder drop-down menu makes it simple to choose your optimal model for each scenario.

With more options and control over model selection, we empower you to choose the best model for your use case – no matter the industry, function, or process.

How to get started with Anthropic models in Copilot Studio

Admins control how Anthropic models are used within their environment, with clear options for enabling, managing, and restricting access.

  • Opt-in at launch: Anthropic models must first be enabled by your admin in the Microsoft 365 Admin Center (MAC) before they can be used across your tenant.
  • Environment management: Once enabled in MAC, Anthropic will be on by default in the Power Platform Admin Center (PPAC). There, admins have additional controls to manage how Anthropic models are accessed by makers in Copilot Studio environments.
  • Automatic fallback: If Anthropic models are disabled, agents built with them will automatically switch to the default model, OpenAI GPT-4o. No additional configuration is required.
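The fallback behavior can be pictured as a check at model-resolution time. This is only a sketch of the documented behavior; the model identifiers are illustrative, and the real enforcement happens inside Copilot Studio:

```python
DEFAULT_MODEL = "gpt-4o"  # the stated fallback model

def resolve_model(agent_model: str, anthropic_enabled: bool) -> str:
    """If the admin has disabled Anthropic models, fall back to the default."""
    if agent_model.startswith("claude") and not anthropic_enabled:
        return DEFAULT_MODEL  # automatic fallback, no reconfiguration needed
    return agent_model
```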

Step by step: Building an HR onboarding agent in Copilot Studio


Let’s say you’re an HR professional building an agent to automate employee onboarding. Here’s how you’d get started:

  1. Create a new agent: Open Copilot Studio and begin creating your HR onboarding agent.
  2. Connect knowledge sources: Link relevant HR documents, FAQs, and policy resources to your agent.
  3. Design prompts and flows: Set up conversational logic and workflows your agent will use to interact with new hires.
  4. Select your model: In the settings panel, you’ll find the option to select your agent’s primary model for reasoning and responding. You can choose from available options, including OpenAI GPT-5 or the newly added Claude Sonnet 4 and Claude Opus 4.1 models.
  5. Configure AI tools: Model choice extends to custom prompts as well, allowing you to tailor your agent’s behavior to the strengths of each model for different tasks. For example, you might select one model for conversational employee FAQs and communications but use a different model for compliance checks and policy interpretation.
  6. Deploy and test: Launch your agent, test onboarding scenarios, and iterate based on feedback.

When it comes to determining the right model for the right job, the answer depends on your unique goals. When building agents, there is no universal solution. The right model is the one that fits your use case. And with multi-agent orchestration in Copilot Studio, you can even coordinate multiple agents with different primary models, and they will work seamlessly together.

Experiment, share, and shape what’s next

We’re just getting started with model choice in Copilot Studio. Whether you’re using OpenAI, Anthropic, or both, you have the flexibility to choose the right model for your agent. We encourage you to explore, test different models for your use cases, and share your feedback. Your insights help us improve every day and shape what’s next for agentic AI at Microsoft.

Looking for more information to help you get started?

The post Anthropic joins the multi-model lineup in Microsoft Copilot Studio appeared first on Microsoft Copilot Blog.

]]>
Computer use is now in public preview in Microsoft Copilot Studio http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/computer-use-is-now-in-public-preview-in-microsoft-copilot-studio/ Mon, 15 Sep 2025 16:39:40 +0000 Computer use is now available in public preview in US-based environments, expanding how you can design agents that work across websites and apps.

The post Computer use is now in public preview in Microsoft Copilot Studio appeared first on Microsoft Copilot Blog.

]]>
Computer use, now in public preview, gives your Microsoft Copilot Studio agents the ability to work with websites and desktop applications. Want an agent to extract data from a dashboard or fill out forms in an app that doesn’t have an API? With computer use, you simply describe the task, and your agent uses its own computer to complete it. Built-in reasoning lets it understand and adapt in real time to changes in apps and websites.

Starting today, customers with U.S.-based environments can add computer use to their agents directly as a tool.

A screenshot of an agent filling out an invoice using the computer use tool

New enhancements to computer use

With this public preview release, we’re introducing several enhancements:

  • Hosted browser (powered by Windows 365) – Start automating web tasks right away, no machine setup required. Need to use custom desktop applications or access internal sites? You can always register your own machine.
  • Getting started templates – Quickly explore pre-built templates for common scenarios to spark inspiration.
  • Credentials – Securely store and use login credentials for websites and desktop applications that computer use needs to access.
  • Allow-list – Define exactly which websites and applications computer use is permitted to operate on. If there is an attempt to go outside this list, the run will automatically stop.
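Conceptually, the allow-list acts as a gate on every navigation. A Python sketch with a hypothetical allowed-sites list; the real check is enforced by the computer use runtime, not by maker-written code:

```python
from urllib.parse import urlparse

# Hypothetical allow-list an admin might configure for an invoicing agent.
ALLOW_LIST = {"portal.contoso.com", "suppliers.contoso.com"}

def check_navigation(url: str) -> None:
    """Stop the run if the agent tries to leave the allow-list (illustrative only)."""
    host = urlparse(url).hostname or ""
    if host not in ALLOW_LIST:
        raise RuntimeError(f"Run stopped: {host} is not on the allow-list")
```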

Unlock new automation scenarios

With computer use, you can enable your agents to perform even more tasks, such as:

  • Market research and data gathering – Build agents that can navigate across different websites, filter and read through dashboards, and collect the insights you need for analysis.
  • Inventory tracking – Let agents check supplier portals and e-commerce sites to monitor product availability, delivery estimates, or stock levels.
  • Automated data entry – Many processes require transferring information between systems that don’t expose APIs. Now your agents can navigate to those forms and enter the data for you.

Already using desktop flows in Microsoft Power Automate?

If you already use machines to run desktop flows, you can also use them for computer use in Copilot Studio. This allows you to build UI automations that go beyond the traditional limitations of robotic process automation (RPA), handling complex and dynamic interfaces with ease.

Computer use is especially valuable when:

  • UIs shift frequently – Apps and websites are dynamic by nature, with layouts that can vary across versions.
  • Vision matters – The task depends on what’s visible on screen, such as interpreting charts and images.
  • Ease of setup matters – Simply describe what you want in natural language, no coding required. You can test and refine with a side-by-side view of the computer and the reasoning chain.

Get started

You can try the public preview of computer use today in Copilot Studio:

  1. In any US-based environment, create or open an existing agent.
  2. Go to Tools → Add tool → New tool.
  3. Select computer use and start building by simply describing the task you’d like done.
A screenshot showing how to choose computer use in the Tools tab in Copilot Studio

We can’t wait to see what you’ll create with computer use! To learn more, check out the documentation. If you have questions or feedback, we’d love to hear from you at computeruse-feedback@microsoft.com.

The post Computer use is now in public preview in Microsoft Copilot Studio appeared first on Microsoft Copilot Blog.

]]>
Announcing new computer use in Microsoft Copilot Studio for UI automation http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/announcing-computer-use-microsoft-copilot-studio-ui-automation/ Tue, 15 Apr 2025 20:32:26 +0000 Announcing computer use in Copilot Studio! This new feature allows your Copilot Studio agents to interact directly with websites and desktop applications. Want to join the limited preview? Read on.

The post Announcing new computer use in Microsoft Copilot Studio for UI automation appeared first on Microsoft Copilot Blog.

]]>
AI innovation is accelerating at an unprecedented pace, and Microsoft Copilot Studio is at the forefront—integrating the best AI advancements into a platform built to solve business challenges at scale. Last month, we introduced deep reasoning capabilities for agents, support for model context protocol (MCP), and the general availability of agent flows in Copilot Studio.

Today, we are excited to announce that computer use is coming to Copilot Studio through an early access research preview. This new capability allows your Copilot Studio agents to treat websites and desktop applications as tools. With computer use, agents can now interact with any system that has a graphical user interface!

A screenshot of Copilot Studio, now showing an icon for the computer use feature preview

Achieve new efficiencies with computer use 

Computer use enables agents to interact with websites and desktop apps by clicking buttons, selecting menus, and typing into fields on the screen. This allows agents to handle tasks even when there is no API available to connect to the system directly. If a person can use the app, the agent can too.

Computer use adapts to changes in apps and websites automatically. It adjusts in real time using built-in reasoning to fix issues on its own, so work continues without interruption. It is also built on Copilot Studio’s robust security measures and governance frameworks, to help ensure compliance with organizational and industry standards.

With computer use in Copilot Studio, makers can build agents that automate tasks on user interfaces across both desktop and browser applications, including Edge, Chrome, and Firefox. Additionally, computer use runs on Microsoft-hosted infrastructure, meaning organizations don’t need to manage their own servers. Enterprise data stays within Microsoft Cloud boundaries and is not used to train the underlying frontier model. This helps your organization accelerate deployment, reduce maintenance, and lower infrastructure costs.

Unlock new value with agentic and automation scenarios

To bring this technology to life, consider the following high value use cases:

  • Automated data entry: Imagine a scenario where an enterprise needs to input large volumes of data from various sources into a centralized system. Computer use can automate this process, reducing manual effort and minimizing errors.
  • Market research: Marketing teams can leverage the tool to automate the collection of market data from various online sources for analysis, providing valuable insights without the need for manual intervention.
  • Invoice processing: For finance departments, the tool can automate the extraction of data from invoices and input it into accounting systems, streamlining the entire invoicing process and reducing manual errors.
A screenshot of computer use in Copilot Studio in action, adding a new invoice to a dashboard automatically

Reimagining robotic process automation (RPA)

Computer use agents are transforming robotic process automation (RPA). They overcome traditional limitations, like the fragility of UI elements, and can handle complex dynamic interfaces. This makes automation accessible to people beyond professional RPA developers.

In Copilot Studio, computer use addresses common RPA challenges by making automation smarter and more intuitive:

  • It responds to changes in real time: When buttons or screens change, the tool keeps working without breaking your flow.
  • It is easy to use: You can describe what you want in natural language, no coding needed, and test and refine the prompt with real-time side-by-side video of the computer use reasoning chain and the planned UI automation.
  • It is built with intelligence: The agent sees what is on the screen and makes smart decisions in real time, even in complex or constantly changing environments.
  • It comes with full visibility: Makers can view a history of computer use activity at will, including captured screenshots and reasoning steps.

The future of innovation with Copilot Studio

Copilot Studio is the end-to-end agent platform designed to help organizations achieve their AI and operational goals. We want to empower you to streamline processes, enhance productivity, and drive innovation.

If you are interested in exploring the new computer use capability, we would love to hear from you! Please fill out this form to let us know you would like to participate.

We will also share more about this new announcement at Microsoft Build in May 2025—register here to join us.

The post Announcing new computer use in Microsoft Copilot Studio for UI automation appeared first on Microsoft Copilot Blog.

]]>
Introducing Model Context Protocol (MCP) in Copilot Studio: Simplified integration with AI apps and agents http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/introducing-model-context-protocol-mcp-in-copilot-studio-simplified-integration-with-ai-apps-and-agents/ Wed, 19 Mar 2025 17:59:59 +0000 Model Context Protocol (MCP) is now available in Copilot Studio, bringing external data and APIs into agents to work across your workflows.

The post Introducing Model Context Protocol (MCP) in Copilot Studio: Simplified integration with AI apps and agents appeared first on Microsoft Copilot Blog.

]]>
At Microsoft, we believe in creating tools that empower you to work smarter and more efficiently. That’s why we’re thrilled to announce the first release of Model Context Protocol (MCP) support in Microsoft Copilot Studio. With MCP, you can easily add AI apps and agents into Copilot Studio with just a few clicks.

What’s new: Model Context Protocol (MCP) integration in Copilot Studio

Model Context Protocol enables makers to connect to existing knowledge servers and APIs directly from Copilot Studio. When connecting to an MCP server, actions and knowledge are automatically added to the agent and updated as functionality evolves. This simplifies the process of building agents and reduces the time spent maintaining them.

MCP servers are made available to Copilot Studio using connector infrastructure. This means they can employ enterprise security and governance controls such as Virtual Network integration, Data Loss Prevention controls, multiple authentication methods—all of which are available in this release—while supporting real-time data access for AI-powered agents.

MCP enables our customers to:

  • Easily connect to data sources: Whether you have a custom internal API or an external data provider, MCP enables smooth and reliable integration with Copilot Studio.
  • Access the marketplace of existing servers: In addition to custom connectors and integrations, you can now tap into a growing library of pre-built, MCP-enabled connectors available in the marketplace, giving you more ways to connect with other tools—faster and with less effort.
  • Use flexible actions: MCP servers can dynamically provide tools and data to agents, which reduces maintenance and integration costs.

You can start today by extending an agent with MCP or building a custom connector using existing SDKs.

To get started, access your agent in Copilot Studio, select ‘Add an action,’ and search for your MCP server! (Note: generative orchestration must be enabled to use MCP.)

Each tool published by the MCP server is automatically added as an action in Copilot Studio and inherits the name, description, inputs, and outputs. As tools are updated or removed on the MCP server, Copilot Studio automatically reflects these changes, ensuring users always have the latest versions and that obsolete tools are removed. A single MCP server can integrate and manage multiple tools, each accessible as an action within Copilot Studio. This streamlined process not only reduces manual effort but also lowers the risk of errors from outdated tools.
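The update behavior described above amounts to treating the server as the source of truth and rebuilding the action list from its current tool list on each sync: new tools appear, removed tools disappear, and edited descriptions flow through. A rough stdlib-only sketch of those semantics—function and tool names are ours, not a Copilot Studio API:

```python
def sync_actions(server_tools):
    """Rebuild the agent's action list from the server's current tool list.

    The server is the source of truth: tools added on the server appear,
    removed tools disappear, and edited descriptions come through.
    Illustrative of the sync semantics only.
    """
    return {
        t["name"]: {"name": t["name"], "description": t["description"]}
        for t in server_tools
    }

# Day 1: the server publishes two tools.
day1 = sync_actions([
    {"name": "get_order", "description": "Look up an order"},
    {"name": "cancel_order", "description": "Cancel an order"},
])

# Day 2: one tool was removed on the server, one was added,
# and an existing description was edited.
day2 = sync_actions([
    {"name": "get_order", "description": "Look up an order by ID"},
    {"name": "refund_order", "description": "Refund an order"},
])
```

After the second sync, `cancel_order` is gone, `refund_order` is present, and `get_order` carries the updated description—no manual cleanup required.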

This release also includes Software Development Kit (SDK) support, enabling further customization and flexibility for your integrations. Creating your own Model Context Protocol server breaks down into three key steps:

  1. Create the server: Using one of the SDKs, create a server that will serve as the foundation for handling your data, models, and interactions. You can tailor the server to your specific needs, such as handling custom model types and data formats or supporting specific workflows.
  2. Publish through a connector: Once the server is in place, the next step involves creating a custom connector that links your Copilot Studio environment to the model or data source.
  3. Consume the data via Copilot Studio: Finally, once the server is set up and the connector is defined, you can begin consuming the data and interacting with the models via Copilot Studio.

By following these three steps, you create a streamlined, adaptable integration with Copilot Studio that not only connects systems but also enhances your ability to maintain and scale that integration according to your needs.
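To make step 1 concrete: at its core, an MCP server is a dispatcher that answers `tools/list` and `tools/call` requests. The official SDKs handle this plumbing for you; the stdlib-only sketch below, with a made-up `echo` tool, only shows the shape of those two methods rather than any SDK’s actual API.

```python
# A registry of tools this illustrative server exposes. The "echo" tool
# and its schema are invented for the example.
TOOLS = {
    "echo": {
        "description": "Echo the input text back",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
        "handler": lambda args: args["text"],
    }
}

def handle(request):
    """Dispatch a JSON-RPC request to the matching MCP method."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [
            {"name": n, "description": t["description"],
             "inputSchema": t["inputSchema"]}
            for n, t in TOOLS.items()
        ]}
    elif method == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        output = tool["handler"](request["params"]["arguments"])
        result = {"content": [{"type": "text", "text": output}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Invoke the invented "echo" tool the way a client would.
reply = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": "echo", "arguments": {"text": "hi"}}})
```

Steps 2 and 3 then wrap this server in a custom connector and consume its tools from Copilot Studio, as described above.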

We support Server-Sent Events (SSE) as the transport mechanism; this feature is currently available in environments in preview regions and will roll out to all environments shortly.

Learn more about these new capabilities here: Extend your agent with Model Context Protocol (preview) – Microsoft Copilot Studio | Microsoft Learn.

What’s next?

We’re excited about the potential of Model Context Protocol and its ability to transform the way users interact with Copilot Studio. But this is just the beginning. Our team is actively working on additional features and improvements to further enhance the integration experience. Stay tuned for more updates, as we plan to introduce even more ways to connect your data and tools effortlessly into Copilot Studio.

We look forward to your feedback and to hearing how this new capability enhances your experience and helps you unlock the full power of Copilot Studio.

The post Introducing Model Context Protocol (MCP) in Copilot Studio: Simplified integration with AI apps and agents appeared first on Microsoft Copilot Blog.

]]>