Developer tools Archives | Microsoft Copilot Blog
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/cs-topic/developer-tools/

New and improved: Multi-agent orchestration, connected experiences, and faster prompt iteration
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/new-and-improved-multi-agent-orchestration-connected-experiences-and-faster-prompt-iteration/
Wed, 01 Apr 2026 16:00:00 +0000

Learn what's new in Copilot Studio: Multi-agent systems are now generally available, plus recent updates to the Prompt Editor and governance controls.

The post New and improved: Multi-agent orchestration, connected experiences, and faster prompt iteration appeared first on Microsoft Copilot Blog.

Microsoft Copilot Studio helps organizations move beyond isolated AI experiences and build connected systems of agents that can scale, adapt, and deliver real business value. Recent enhancements focus on making it easier for agents to work together across tools and data sources, while giving makers more control over how those agents behave in production.

What you’ll see this month: New generally available capabilities for multi-agent coordination across Microsoft Fabric, the Microsoft 365 Agents SDK, and open Agent-to-Agent (A2A) protocols—all of which help agents collaborate across your ecosystem and perform more valuable work. Plus, you’ll find updates to prompt authoring, model choice, and governance controls that can help make it faster to build and refine high-quality agent experiences with confidence.

Agents that work together across your entire ecosystem

The challenge in scaling AI inside an organization isn’t creating a useful agent. It’s getting many agents—across teams and tools—to work together in a way that’s reliable and repeatable.

In many organizations, data teams might build one kind of agent, app teams another, and productivity teams yet another. Each agent can be valuable on its own, but once a workflow needs knowledge from one system, reasoning from another, and action in a third—teams often run into brittle handoffs and custom integration work. This slows agent adoption and makes it harder to move from promising pilots to real business impact.

This month, Copilot Studio takes a meaningful step forward: several multi-agent capabilities are rolling out to general availability over the next few weeks, giving your teams new ways to connect and orchestrate agents across your ecosystem. These updates include Microsoft Fabric integration, Microsoft 365 Agents SDK orchestration, and Agent-to-Agent (A2A) communication—all designed to help your agents operate together as a coordinated system rather than in isolated silos.

Multi-agent support for Microsoft Fabric

With multi-agent support, your Copilot Studio agents can work with Fabric agents to reason over enterprise data and analytics at scale. That means you can connect business-facing agent experiences more directly to the data estate they already rely on, without treating every data-intensive scenario like a one-off engineering project. Instead of working with limited or disconnected data, these agents will be able to operate with full business context—helping make their outputs more accurate, relevant, and actionable.

Multi-agent support for the Microsoft 365 Agents SDK

Using the Microsoft 365 Agents SDK, teams can now orchestrate Copilot Studio agents alongside agents built for Microsoft 365 experiences. Instead of recreating the same logic across multiple agents (think retrieving data, applying business rules, or completing common tasks), you’ll be able to reuse and combine existing capabilities. This makes it easier to compose cross-app workflows from what’s already been built, reducing duplication and keeping experiences more efficient and consistent.

Agent-to-Agent (A2A) support

With A2A support, Copilot Studio agents can directly communicate with and delegate work to other agents—first-party, second-party, or third-party—using an open protocol designed for universal access. This matters because the future of enterprise AI will not belong to a single stack. Organizations need to build agents on platforms that can participate in a broader ecosystem, not just operate within one product boundary. A2A support gives Copilot Studio that interoperability.
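As a rough illustration of how A2A-style interoperability works, the sketch below builds an "agent card," the small JSON document an agent publishes so other agents can discover its skills before delegating work to it. The field names, agent name, and endpoint here are assumptions for illustration only; consult the A2A specification for the actual schema.

```python
import json

# Illustrative A2A-style "agent card" -- the document an agent publishes so
# other agents can discover what it does and how to reach it. Treat the exact
# shape here as an assumption, not the real spec.
agent_card = {
    "name": "invoice-lookup-agent",                   # hypothetical agent
    "description": "Answers questions about supplier invoices.",
    "url": "https://agents.contoso.example/invoice",  # hypothetical endpoint
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "lookup-invoice", "description": "Fetch an invoice by its number."}
    ],
}

def advertised_skills(card: dict) -> list[str]:
    """Return the skill ids a remote agent advertises in its card."""
    return [skill["id"] for skill in card.get("skills", [])]

# A delegating agent would fetch this card, inspect the advertised skills,
# and route the task to the agent that matches.
print(json.dumps(advertised_skills(agent_card)))
```

The key design point is discovery: because the card is a standard document rather than a product-specific registration, any agent that speaks the protocol can find and delegate to any other.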

The impact of multi-agent systems

We’ve already seen the power of this approach with the Ask Microsoft web agent, one of our early “customer zero” implementations. As site traffic and knowledge sources grew, the single-agent architecture began to strain, creating slower response times. Using Copilot Studio, the team upgraded the agent to a modern architecture with generative orchestration and multi-agent coordination.

Now, multiple sub-agents handle different parts of the site—Microsoft Azure, Microsoft 365, pricing, trials, and more—while the main agent orchestrates them to provide fast, coherent, multi-turn responses. This setup allows Ask Microsoft to answer complex questions involving multiple products or services, and to tailor responses based on where the customer is on the site.

“Building a more advanced assistant with Copilot Studio has meaningfully raised the bar for our customer experience and enabled us to scale faster across products to deliver real business impact.”

Alyse Muttera, Director of eCommerce Programs at Microsoft

To show how this approach works in other organizations, consider a common scenario at a bank. The loan department has one agent handling mortgage applications, while the banking department runs a separate agent for account inquiries. A customer, however, expects a single seamless experience.

Multi-agent orchestration lets each specialized agent manage its area of expertise while coordinating responses behind the scenes. For instance, if a customer asks about a mortgage payment and their account balance in the same interaction, the system delivers a cohesive, context-aware answer that combines insights from both agents—no juggling multiple interfaces required.
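The banking scenario above can be pictured as a small routing sketch: one orchestrator, several specialists, one combined answer. The agents, keyword routing, and canned responses below are hypothetical stand-ins for the generative orchestration Copilot Studio performs; the point is the shape of the pattern, not the implementation.

```python
# Hypothetical specialist agents -- in practice these would call mortgage
# and core-banking systems rather than return canned strings.
def mortgage_agent(question: str) -> str:
    return "Your next mortgage payment of $1,250 is due May 1."

def accounts_agent(question: str) -> str:
    return "Your checking balance is $4,310."

# Simple keyword routing stands in for intent analysis.
SPECIALISTS = {
    "mortgage": mortgage_agent,
    "balance": accounts_agent,
}

def orchestrate(question: str) -> str:
    """Route one question to every relevant specialist, then merge answers."""
    answers = [
        agent(question)
        for keyword, agent in SPECIALISTS.items()
        if keyword in question.lower()
    ]
    return " ".join(answers) if answers else "Let me connect you with support."

print(orchestrate("What is my mortgage payment, and what's my balance?"))
```

A single question that spans both domains produces one cohesive reply, while each specialist stays focused on its own systems and permissions.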

When specialized agents work together behind the scenes, customers can get a unified experience and employees can get time back.

That’s exactly the kind of impact Coca‑Cola Beverages Africa is realizing today by using Copilot Studio agents and Microsoft Dynamics 365 to autonomously run planning cycles and automate workflows end to end, saving planners 1 to 1.5 hours every day.

These features will be fully available to all eligible customers as of April 2026. Three capabilities, one outcome: agents that can operate more like a system and less like a collection of disconnected point solutions.

Build prompts faster while maintaining control

As agent experiences grow more sophisticated, the quality of the prompt an agent maker writes matters more. A great prompt yields noticeably better results than a merely good one, and refining prompts is key to unlocking that quality.

But in practice, prompt iteration has historically felt disjointed and slow. Makers had to leave their flow of work, jump into a separate editor, make a small change, test it, and then repeat the process. That friction adds up quickly, especially when teams are tuning prompts for specialized business scenarios.

The new immersive Prompt Builder, now generally available, helps reduce that friction by bringing prompt editing directly into each agent’s Tools tab. You can update instructions, switch models, add inputs or knowledge, and test changes—all in one place. Instead of breaking context every time you want to refine an agent’s behavior, you can iterate while staying grounded in the agent you’re building.

This matters most in real-world scenarios where prompt behavior is tied to domain knowledge and policy nuance. For example, a team building an agent to support clinical documentation might need to refine instructions, swap in a better knowledge source, and test outputs against terminology that is common in healthcare but more likely to trigger default safeguards. Doing that from one workspace can make iteration faster and help lower the effort required to get a production-ready result.

More options for prompts: Content moderation and model choice

Speaking of triggering default safeguards, Copilot Studio has also added content moderation settings for prompts, now generally available in supported regions. This gives makers more control over harmful content sensitivity on managed models, including turning down that sensitivity to help unblock legitimate scenarios in industries like healthcare, insurance, and law enforcement, where default settings may be overly restrictive for the content being processed.

For even more control over prompts, the Prompt Tool now supports Anthropic Claude Opus 4.6 and Claude Sonnet 4.5 in paid experimental preview in the United States. That gives makers more choice in matching the right model to the right prompt, rather than forcing every scenario into the same tradeoff profile. This feature is great for teams that want more flexibility in how they balance performance, reasoning depth, and cost.

Altogether, these improvements help teams move faster on prompt iteration while maintaining the control and flexibility required in production scenarios.

What else is new and improved in Copilot Studio

We have also recently released several additional updates across automation, meetings, retrieval quality, and model support.

  • ServiceNow and Azure DevOps connector quality improvements are now generally available. These help agents better understand operational questions, retrieve the right ticket or work item data, and return more complete, actionable answers automatically.
  • Evaluation automation APIs are now generally available through Microsoft Power Platform APIs and connectors. These APIs help make it easier to run evaluations programmatically and integrate quality checks into continuous integration and continuous delivery (CI/CD) workflows.
  • Agents for Microsoft Teams meetings can now access real-time meeting transcripts and group chat. This supports scenarios like answering questions during the meeting, surfacing relevant information, or helping track decisions and follow-ups as they happen.
  • Model context protocol (MCP) apps and Apps SDK support have expanded how agents connect to your external work apps, helping to make it easier to integrate business systems and enable agents to take action across your broader ecosystem—not just respond with information.
  • Additional model support, including Grok 4.1 Fast, GPT-5.3 Thinking, and GPT-5.4 Instant in paid experimental preview, gives makers more options as they tune experiences for speed, cost, and capability.
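The evaluation automation APIs mentioned above are designed for exactly the kind of CI/CD gating a pipeline needs. Below is a hedged sketch of the pipeline-side logic; the response shape, grader names, and scores are hypothetical placeholders, not the real Power Platform API contract, which has its own documented schema.

```python
import json

def passes_quality_gate(eval_results: list[dict], threshold: float = 0.8) -> bool:
    """Fail the build if any grader's score dips below the threshold."""
    return all(r["score"] >= threshold for r in eval_results)

# In a real pipeline these results would come back from the evaluation API;
# here they are hard-coded sample data in an assumed shape.
sample_results = json.loads(
    '[{"grader": "accuracy", "score": 0.92}, {"grader": "intent", "score": 0.85}]'
)

if passes_quality_gate(sample_results):
    print("quality gate passed: safe to promote agent")
else:
    # Exiting non-zero blocks the deployment stage of the pipeline.
    raise SystemExit("quality gate failed: blocking deployment")
```

Running a check like this on every change to an agent's prompts, knowledge, or tools turns evaluation from a manual step into a standing guardrail.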

Overall, these updates reflect a continuing broader shift in Copilot Studio: moving from building individual AI experiences to building connected, governed systems that can fit more naturally into how work already happens. As you scale up your organization’s use of multi-agent ecosystems, these will help your teams reach further across channels and knowledge sources to more accurately fulfill your business needs.

Stay up to date on all things Copilot Studio

More is coming in April 2026 across voice channels, workflows, and the building experience. Check out all the updates as we ship them, as well as new features releasing in the next few months here: What’s new in Microsoft Copilot Studio.

To learn more about Microsoft Copilot Studio and how it can transform productivity within your organization, visit the Copilot Studio website or sign up for our free trial today.

Enable agents to bring apps into the flow of work—while keeping IT in control
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/enable-agents-to-bring-apps-into-the-flow-of-work-while-keeping-it-in-control/
Mon, 09 Mar 2026 13:00:00 +0000

Stop switching tabs: agents now let you act inside approved apps from chat in Copilot, with controls that help IT teams manage risk and usage.

The post Enable agents to bring apps into the flow of work—while keeping IT in control appeared first on Microsoft Copilot Blog.

A seller needs to log a new opportunity. A manager wants to approve a request. A marketer has to update a campaign asset. Until today, these actions often meant taking insights from Microsoft 365 Copilot and switching tabs. Agents can now change that: helping people take action in their go-to work apps, without needing to leave chat in Copilot.

But enabling this kind of capability raises real questions for IT: What risks do these agents introduce? Are they actually being used? And are they behaving as expected?

The more agents you launch and the more powerful these agents are, the more these answers matter. That’s why we’re introducing three new capabilities across Copilot and Microsoft Copilot Studio that help people move work forward faster—while keeping IT firmly in control:

  1. Enhanced agents that bring apps directly into chat in Copilot
  2. New ways for employees to find the right agent, fast
  3. Tools to continuously evaluate agent quality over time

With these capabilities, employees can use their go-to business apps directly in Copilot and get a simpler way to discover the right agents for their tasks. Meanwhile, IT gains objective signals that help validate agent behavior as usage expands. Here’s what you need to know.

Interacting with apps through chat in Copilot

Today, the gap between AI insight and in-app execution starts to close—without IT needing to relax standards or introduce new risk vectors.

When an employee prompts Copilot and calls an agent connected to an approved app, that agent can bring that app’s interactive experience directly into the conversation. From there, the employee stays in the driver’s seat, using chat in Copilot to take real, in‑app actions such as:

  • Scheduling a new event in Outlook
  • Adding a new sales opportunity to Dynamics 365 Sales
  • Creating or editing a flyer in Adobe Express
  • Completing an approval form via Microsoft Power Apps

All of this happens without needing to leave Copilot. Employees interact with the app directly in chat or use follow-up prompts to carry out work in the app.

Get started quickly with pre-built app experiences

This month, we’re launching support for a focused set of early experiences, including:

  • Microsoft apps, such as Outlook, Dynamics 365 Customer Service (public preview by early April), and Dynamics 365 Sales (public preview by early April)
  • Custom line-of-business apps built with Power Apps (public preview this March)

Take Outlook, for example. You can now tell Copilot who you want to meet with, and it’ll find time slots that work. Simply select one, and an agent will schedule the meeting. This experience is currently generally available (GA). Similarly, you can ask Copilot to draft an email on your behalf, edit it, and hit send—without leaving the chat (currently in Frontier).

We will also introduce in-chat experiences for a handful of Microsoft partner apps, including Adobe Express, Adobe Acrobat, Base44, Box, Canva, Coursera, Figma, Miro, Monday.com, Optimizely, and Wix. All pre-built partner app experiences will be available via the Microsoft 365 Agent Store by mid-April.

“With the Figma app in Copilot, you can turn conversations into AI-generated FigJam diagrams to take ideas further,” says Brendan O’Driscoll, Figma’s VP of Product. “By connecting Figma with your favorite tools, it’s easier than ever to visualize, iterate, and collaborate with your entire team.”

Build the app experiences your team needs

You’re not limited to the apps we ship out of the box. Your team can build agents in Copilot that work with the mission-critical apps that your systems, processes, and workflows depend on.

Under the hood, two open extensibility standards make this possible: MCP Apps and the OpenAI Apps SDK. Both give development teams a structured way to connect the apps your organization relies on to agents in Copilot—so those apps can surface interactive experiences directly in chat. Agents built with either standard use familiar development patterns, so your team can build and iterate without a steep learning curve.

MCP Apps and Apps SDK will roll out to GA on web and desktop later this month, with mobile following this spring. Share the Apps SDK and MCP Apps technical documentation with your development team to get started.

Get to know the IT controls

Even as agents become more powerful, we’ve designed this experience with governance in mind. Agents with interactive app experiences use the same governance and admin patterns you already trust for agents in Copilot, keeping IT control the top priority.

You decide which agents are available in your tenant, and who can use them—globally, per agent, or for specific departments. Each agent operates strictly within existing app permissions and identity boundaries, so you can enable richer experiences in Copilot without opening new, unmanaged entry points into your environment.

All agents can be monitored end‑to‑end using Agent 365—a unified control plane that gives IT a single place to see which agents are live, where they can act, and how they’re being used. With it, you can control how agents are provisioned and scoped before rolling out this new experience broadly. Learn how to provision your organization’s agents at scale.

Empowering employees to find the right agent fast

As agents in Microsoft 365 Copilot become more capable, employees need a reliable way to find the right agent for the task at hand. But when dozens of agents are available, employees shouldn’t have to know which one to use when. Agent Recommendations (generally available) surfaces the right agent at the right moment, directly in the flow of work.

When users prompt Microsoft 365 Copilot, the system analyzes their intent and suggests an agent that’s already installed and approved by IT. No special syntax or prompt engineering required.

These recommendations are assistive, meaning employees can choose to start a new conversation with the suggested agent or continue in their current chat. All the while, discoverability only happens within known, governed boundaries—without introducing new risks. This helps employees quickly find agents purpose-built for the scenario at hand, while IT maintains a consistent governance model as usage expands.

Holding agents to your organization’s standards

As organizations rely on more agents for more impactful work, quality and reliability stop being nice‑to‑haves—they’re essential. Small changes to prompts, models, or data can introduce drift that can be hard to detect, especially as agent usage expands across teams and scenarios.

Agent Evaluations in Microsoft Copilot Studio (currently in public preview) gives you a structured way to answer the question: Is this agent actually doing what it’s supposed to do?

Evals work by running agents against authentic questions and scenarios, then generating objective scores for accuracy and intent alignment—so quality isn’t just assumed; it’s measured. By comparing results over time, teams can help catch regressions earlier, validate improvements, and apply a consistent quality bar before agents reach broader use.
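The "compare results over time" idea can be sketched simply: diff two evaluation runs and flag any grader whose score regressed beyond a tolerance. The grader names and scores below are invented sample data; Copilot Studio's evals produce the real measurements.

```python
def find_regressions(baseline: dict, current: dict, tolerance: float = 0.05) -> list[str]:
    """Return graders whose score dropped more than `tolerance` vs. baseline."""
    return [
        grader
        for grader, score in current.items()
        if baseline.get(grader, 0.0) - score > tolerance
    ]

# Hypothetical scores from two evaluation runs of the same agent.
baseline_run = {"accuracy": 0.91, "intent_alignment": 0.88}
current_run = {"accuracy": 0.90, "intent_alignment": 0.74}  # intent drifted

regressed = find_regressions(baseline_run, current_run)
print(regressed)  # a rollout process would pause when this list is non-empty
```

This is the mechanical core of "catch regressions earlier": small per-grader drops that no one would notice in spot checks become visible the moment runs are compared side by side.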

These signals reinforce that agents aren’t set‑and‑forget automation; they’re managed enterprise workloads. With objective evidence in hand, IT and makers can make informed rollout decisions and scale agent usage more confidently, knowing behavior is monitored, and reliability can be improved as usage grows.

Learn how to set up Agent Evals in Microsoft Copilot Studio, so you can assess agent quality and readiness before expanding usage.

Make agents more capable while staying in control

Support for apps in agents, Agent Recommendations, and Agent Evals are designed to work together as a system, helping organizations move faster—without compromising trust. By treating agents as first‑class, governed workloads, IT teams can enable more capable agents while maintaining the control their organizations expect.

To get started:

  • Learn how dev teams build with Apps SDK and MCP Apps
  • Control agents from end-to-end with Agent 365
  • Discover how to configure Agent Evals

Computer-using agents now deliver more secure UI automation at scale
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/computer-using-agents-now-deliver-more-secure-ui-automation-at-scale/
Tue, 24 Feb 2026 17:00:00 +0000

See how new updates to computer‑using agents improve UI automation with secure credentials, detailed monitoring, and scalable Cloud PC capacity.

The post Computer-using agents now deliver more secure UI automation at scale appeared first on Microsoft Copilot Blog.

When we first introduced computer-using agents (CUAs)—AI systems that can see, understand, and act across web and desktop apps—we showed what was possible: AI that works across applications, just like a person would. Early adopters quickly put CUAs to work automating brittle processes, navigating legacy systems, and stitching together workflows where APIs don’t exist.

Then, customers like you pushed us further.

You told us where agents didn’t scale, where authentication slowed runs, and where it was hard to understand why something failed—or to prove it behaved correctly. You also told us where your organization needed more control, visibility, and flexibility before rolling out computer‑using agents at scale.

Today’s updates are a direct response to that feedback.

Computer‑using agents in Microsoft Copilot Studio now offer more model choice, stronger security and governance, and easier scale—so you can automate more of your work across web and desktop apps with confidence.

Here’s what’s new with computer use—and why it matters.

Choose the right model to navigate dynamic interfaces

Computer-using agents now support multiple foundation models, including Anthropic’s Claude Sonnet 4.5 alongside OpenAI’s Computer-Using Agent. This gives you the flexibility to choose the best fit for each agent, based on the interface and the task.

  • Use OpenAI Computer-Using Agent to orchestrate multi‑step web and desktop flows.
  • Opt for Anthropic Claude Sonnet 4.5 when you need high-performance reasoning on dynamic user interfaces (UIs) and interpretation of dense, changing dashboards.

Secure authentication with built-in credentials and Azure Key Vault

Authentication shouldn’t be the reason automations stall. Computer use now offers built‑in credentials so agents can:

  • Securely perform website and desktop app logins.
  • Reuse credentials across multiple agents and automations.
  • Eliminate manual login prompts during runs, enabling unattended execution.

For example, if an agent needs to log into a vendor portal and update a desktop ERP every night, built-in credentials now let the agent authenticate to both the web portal and the desktop app automatically. This removes manual interruptions and makes overnight processing dependable while maintaining governance controls. No need to babysit “unattended” runs.

You can choose between two storage options aligned to your governance needs: internal storage (encrypted in Microsoft Power Platform) for low-friction setup, or Azure Key Vault for enterprise-grade secret management.

Credentials are encrypted and are never exposed to the AI model, so only authorized agents can access them. This way, your security and compliance team can feel confident scaling CUAs to more scenarios.

See every computer-using agent action with session replay and audit logs

As agents touch more business‑critical systems, teams need to know what happened, why it happened, and where.

Computer use now has advanced monitoring and richer observability, so operations, security, and compliance teams can inspect behavior step‑by‑step. This includes:

  • Session replay with screenshots.
  • Step‑by‑step action logs with action types, coordinates, timestamps, and context.
  • Run summaries, including instruction text, duration, action counts, average time per action, and human escalation counts.
  • Resource tracking, including websites, desktop apps, and credentials used.
  • Export options for offline review.
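The fields listed above suggest what a single step in an action log might look like. The schema below is a hypothetical sketch for illustration, not the product's actual log format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ActionLogEntry:
    """One step in a computer-use run -- illustrative fields only."""
    step: int
    action_type: str          # e.g. "click", "type", "scroll"
    coordinates: tuple[int, int]
    timestamp: str            # ISO 8601
    context: str              # what the agent believed it was doing

entry = ActionLogEntry(
    step=12,
    action_type="click",
    coordinates=(640, 412),
    timestamp="2026-02-24T03:15:07Z",
    context="Submit button on vendor portal login form",
)

# Export for offline review -- one JSON line per action.
print(json.dumps(asdict(entry)))
```

A structured record like this is what makes step-by-step replay and audit export possible: each action carries enough context to reconstruct what the agent saw and did.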

But what does this look like in practice? Imagine an agent run produces an unexpected update, and your team can’t tell whether the agent misread the UI, clicked the wrong control, or encountered a hidden pop‑up.

Session replay and action logs now show exactly what the agent saw and did, pinpoint the step where the UI changed, and produce an exportable record for audit review. That way, you can fix issues faster and retain a defensible compliance trail.

Beyond the monitoring pane, compliance is further strengthened through:

  • Microsoft Purview integration, sending audit logs to Purview.
  • Dataverse logging with configurable verbosity—choose All data, Data without screenshots, or Minimal.
  • Retention options from 7 days to indefinite, to match regulatory and governance requirements.

Simplify infrastructure with managed Cloud PCs for computer-using agents

Scaling UI automation shouldn’t require managing fleets of desktops or fragile virtual machines. The new Cloud PC pool, powered by Windows 365 for Agents, provides fully managed cloud‑hosted machines that are Microsoft Entra joined and Intune enrolled, designed for computer use runs and built to scale with demand.

In other words, these Cloud PC pools provide managed capacity for high-volume runs when demand spikes—without the overhead of keeping dedicated hardware patched, available, and idle the rest of the time. This way, your team can handle spikes without over-provisioning hardware.

Note: For evaluation, you can create up to two Cloud PC pools per tenant with 50 hours of free usage for published autonomous agents—making it easier to pilot CUAs at scale before broader rollout.

Extend—don’t replace—your automation

If you’ve built automations with Microsoft Power Automate and RPA, computer use expands what you can automate—especially when:

  • Interfaces change frequently
  • APIs aren’t available
  • Decision logic becomes more complex

Thankfully, you can keep classic RPA for deterministic scenarios with stable interfaces. CUAs then add flexibility and adaptive reasoning where RPA falls short (such as dynamic web apps, shifting layouts, or complex decisioning). After all, the goal isn’t to start over—it’s to modernize and extend what you already have.

For example, say you have an RPA bot that depends on fixed selectors. Historically, it broke each time a web form changed, forcing constant script updates.

Now, the RPA stays the same, while a CUA handles the variable UI portions—navigating changing layouts, interpreting dialogs, and escalating edge cases. The result? Reduced maintenance and improved reliability.
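The hybrid pattern described above can be sketched as a fallback chain: try the deterministic, selector-based step first, and hand off to an adaptive handler only when the UI no longer matches. All function names, selectors, and page structures below are hypothetical stand-ins for real RPA and CUA behavior.

```python
def rpa_click(page: dict, selector: str) -> bool:
    """Deterministic step: succeeds only if the fixed selector still exists."""
    return selector in page["elements"]

def cua_fallback(page: dict, goal: str) -> bool:
    """Adaptive step: a CUA would visually locate the control; stubbed here."""
    return any(goal in label for label in page["elements"])

def submit_form(page: dict) -> str:
    if rpa_click(page, "#submit-btn"):
        return "handled by RPA"
    if cua_fallback(page, "Submit"):
        return "handled by CUA fallback"
    return "escalated to a human"

old_layout = {"elements": ["#submit-btn", "#name-field"]}
new_layout = {"elements": ["Submit order", "Full name"]}  # selectors changed

print(submit_form(old_layout))  # deterministic RPA path still works
print(submit_form(new_layout))  # CUA absorbs the UI change
```

The design choice worth noting: the cheap, predictable path runs first, the adaptive path absorbs change, and a human remains the final escalation, so existing automations keep their reliability while gaining resilience.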

Get started and help shape what comes next

Ready to try computer‑using agents in a US‑based Copilot Studio environment?

  1. Create or open an agent in Microsoft Copilot Studio.
  2. Go to Tools → Add tool → New tool and select computer use.
  3. Describe the task you want the agent to perform in natural language.
  4. (Optional) Choose a model, configure built‑in credentials, and set up a Cloud PC pool for secure, scalable runs.

For deeper guidance, configuration details, and best practices, see the computer use documentation.

Before you go: We’re actively investing in advanced governance, operations, and scale for CUAs—and customer feedback directly informs the roadmap. Tell us what you think of the latest CUA updates today.

How to evaluate AI agents in Microsoft Copilot Studio
http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/how-to-evaluate-ai-agents/
Tue, 03 Feb 2026 17:00:00 +0000

Agent Evaluation in Copilot Studio helps makers move from early optimism to grounded confidence as agents grow in complexity and impact.

The post How to evaluate AI agents in Microsoft Copilot Studio appeared first on Microsoft Copilot Blog.

When makers first build an agent, their confidence increases as that agent takes shape. A few test prompts. Some promising answers. A sense that things are working. So, they share that agent with their team.

Then, reality arrives. 

The people who use the agent phrase questions differently. Conversations stretch across multiple turns. Context accumulates. Permissions become table stakes. The right tools need to be invoked. Edge cases appear. Suddenly, the question becomes “can I actually trust how the agent behaves?”

Agent evaluations exist for this exact moment. AI agents do not behave the same way twice. Their responses shift with model updates, data changes, prompts, tools, and context. What works today may drift tomorrow.

Thankfully, agent evaluations reinforce confidence in the agents you build. Let’s walk through how you can make the most of this capability.

What exactly are agent evaluations?

Agent evaluations (or “evals”) are the standardized mechanism that makes agent variability visible and manageable. Unlike debugging, evals are not a one-time check or a manual review. They are a consistent process that helps you stay ahead of what could go wrong and improve agent performance over time.

By running evaluations, makers can launch agents into production knowing how they’ll behave, not how they hope they’ll behave. They can also ensure that an agent’s behavior remains stable over time.

As such, every maker should be evaluating all their agents. But this initiative can start with a few quick evaluations that require minimal setup, using default data and default grading to unlock quick signals.

However, as your agents mature, you’ll likely need to evolve this strategy, configuring additional evaluations that test behaviors in specialized scenarios.

Agent evaluation in 8 simple steps

Imagine you’re a maker who just built an internal human resources (HR) agent that helps employees understand leave policies, benefits, and when to escalate to HR systems.

Here’s how you’d evaluate this agent in Microsoft Copilot Studio, from deciding what to evaluate to understanding real-world behaviors and confidently iterating:

Step 1: Decide what you’re evaluating

Before you can run an evaluation, you need to be clear about what you’re trying to validate. 

This starts with defining the scenario. What kind of behavior are we testing? What assumptions are we making about the user’s intent, the context, and the information the agent has available? A well-defined scenario sets the foundation for meaningful results.

With this information, you’ll need to define your scope. Some evaluations focus on a narrow behavior to get a precise signal. Others cover a wider range of interactions to reflect real usage. A narrower scope makes results easier to interpret, while a broader scope helps surface risks that only appear at scale. 

Make these choices deliberately: by explicitly defining the scenario and scope, evaluations produce signals that are relevant, reliable, and aligned with how you expect people to use the agent in practice. These decisions directly shape the success of your evaluation.

Step 2: Ground evaluation in real user behavior 

Once you’ve defined the scope, the next question emerges: “What are we evaluating against?” 

Strong evaluations start with realistic data. Not idealized prompts, but the messy, imperfect ways people actually ask questions. For your HR agent, this includes vague phrasing, partial information, and mixed intents like asking about leave while referencing a personal situation. 

You can bring data from multiple sources, including manually authored scenarios, AI-assisted generation to broaden coverage, imported datasets, and even historical or production conversations.

Add data from multiple sources to ensure agent evaluations capture nuance in their assessments

We recommend starting with a small but meaningful test set, focusing on the high-value scenarios that matter most to your business.
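A starter test set like this can be pictured as a plain list of cases. The sketch below is illustrative only, not the Copilot Studio interface, and every question is invented for the HR-agent example:

```python
# Illustrative sketch: assembling a small but meaningful test set for a
# hypothetical HR agent from the source types described above.

manually_authored = [
    "How many vacation days do I get per year?",
    "can i take leave my mom is sick??",  # vague, messy phrasing on purpose
]

ai_generated = [
    "What is the parental leave policy for part-time staff?",
]

imported_dataset = [
    "Who do I contact about disability benefits?",
]

production_transcripts = [
    "I'm relocating next month, what happens to my health plan and my PTO?",  # mixed intent
]

evaluation_set = (
    manually_authored + ai_generated + imported_dataset + production_transcripts
)
print(len(evaluation_set))  # 5 cases in the starter set
```

Notice that the cases deliberately include imperfect phrasing and mixed intents, mirroring how people actually ask questions.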

This data ensures that the evaluation inputs reflect real behavior, not the maker’s assumptions. But even with this data in place, you’ll likely ask: “How will this help me judge whether the agent behaved as expected?” This brings us to step three.

Step 3: Define your evaluation logic

Sometimes makers start with default grading to understand baseline behavior, before deciding what they want to measure more precisely. 

Meanwhile, others define more specific grading logic upfront based on what they already know and what they want to validate. 

Evaluation logic does not require full certainty at the start. It provides a structured way to observe outcomes and refine what matters over time.

Makers can choose from a collection of ready-to-use graders and even combine multiple graders within a single evaluation to get a richer, multi-dimensional view of agent behavior. 

Graders provide a richer, multi-dimensional view of agent behavior

For example, your HR agent configuration might include three separate graders:

  1. General quality grader to assess whether the response is complete and addresses the full question.
  2. Classification grader, where you describe the expected behavior using natural language prompts.
  3. Capability grader to confirm the agent uses the right topic or tool at the right time.

Even better, you can make these expectations explicit: what matters, what does not, and what “good behavior” looks like in this scenario. By defining evaluation logic upfront, you’ll reduce ambiguity, make success observable and explainable, and shift quality from subjective judgment to measurable signal. 
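A rough way to picture the three-grader setup is plain Python rather than the product's grader configuration. Function names, the sample response, and the tool name below are all hypothetical:

```python
# Illustrative sketch: three graders combined into one multi-dimensional
# verdict for a single HR-agent response.

def quality_grader(response: str) -> bool:
    # Crude completeness proxy: the answer is non-trivial in length.
    return len(response.split()) >= 8

def classification_grader(response: str) -> bool:
    # Expected behavior described in plain language: the agent should
    # ground its answer in the leave policy.
    return "leave policy" in response.lower()

def capability_grader(tools_used: list) -> bool:
    # The right tool was invoked at the right time.
    return "hr_policy_lookup" in tools_used

response = "Per the leave policy, full-time employees accrue 20 days of paid leave each year."
tools_used = ["hr_policy_lookup"]

verdict = {
    "quality": quality_grader(response),
    "classification": classification_grader(response),
    "capability": capability_grader(tools_used),
}
print(verdict)  # each dimension is judged independently
```

Because each grader reports separately, a failure tells you which dimension broke rather than just that something did.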

Step 4: Set the right identity context 

Once you’ve outlined what you’re testing, you need to define the context in which the evaluation should run. Specifically, which user profile should the agent treat as the sender of the questions while it’s being evaluated?

The user context you select determines the agent’s behavior, including what data it can retrieve and reason over. It also ensures evaluations catch permission‑related risks early, such as inappropriate data access.

So, making this choice explicit helps avoid a common source of false confidence. When results are reviewed later, makers can trust that successes and failures are grounded in the same access boundaries their users will experience.

For example, an HR agent that references internal policy articles may behave very differently if it’s responding to a full-time employee or a contractor.

Running the evaluation under only the intended user identity ensures evaluation results reflect real conditions rather than an idealized setup. This can help you identify and mitigate unexpected behavior, such as sharing your company’s healthcare options with a contractor.
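The effect of identity context can be sketched as a simple access filter. This is illustrative logic with hypothetical document names, not how Copilot Studio actually enforces permissions:

```python
# Illustrative sketch: the identity context bounds which knowledge
# sources the agent can retrieve and reason over during evaluation.

KNOWLEDGE = {
    "leave_policy": {"employee", "contractor"},   # visible to both roles
    "healthcare_options": {"employee"},           # full-time employees only
}

def retrievable_sources(role: str) -> set:
    # Only documents whose audience includes this role are retrievable.
    return {doc for doc, audience in KNOWLEDGE.items() if role in audience}

print(retrievable_sources("employee"))    # both documents
print(retrievable_sources("contractor"))  # leave_policy only
```

Running the same test case under both roles would surface the contractor/healthcare leak described above before it reaches production.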

Step 5: Evaluate the agent’s responses

Now, it’s time to run your evaluation. Based on the data you provided, Copilot Studio simulates real user prompts and the agent generates responses, scoped to the user context you prescribed. Each configured grader then evaluates a different aspect of the response, such as quality, correctness, or capability.

This evaluation process turns individual answers into structured signals. Together, these signals make agent behavior observable, repeatable, and explainable at scale. 

The maker is no longer relying on intuition or spot checks to assess their agent’s quality. They’ve created a disciplined feedback loop that replaces assumptions with evidence and transforms agent quality from a subjective impression into a measurable outcome. 

Step 6: Step back to see the bigger picture

Once your evals gather sufficient signals, your focus shifts outward: “What does this tell me overall?” 

Aggregated results provide a high-level view of quality, consistency, and trends across scenarios and graders. For the HR agent, this might reveal strong performance on common policy questions, but weaknesses around edge cases or escalation behavior. 

Aggregated results provide a high-level view of agent quality and behavior trends

With these signals, you can better prioritize. Not every failure matters equally. Patterns matter more than anomalies. And evaluation becomes a decision-support tool, not just a reporting surface. 
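The aggregation itself is simple arithmetic over per-case results. A minimal sketch, using invented data for the HR-agent example:

```python
# Illustrative sketch: rolling per-case grader results up into the
# high-level view described above. All results are hypothetical.
from collections import defaultdict

results = [
    {"scenario": "common policy question", "grader": "quality",    "passed": True},
    {"scenario": "common policy question", "grader": "quality",    "passed": True},
    {"scenario": "escalation edge case",   "grader": "capability", "passed": False},
    {"scenario": "escalation edge case",   "grader": "capability", "passed": True},
]

totals = defaultdict(lambda: [0, 0])  # scenario -> [passed, total]
for r in results:
    totals[r["scenario"]][1] += 1
    totals[r["scenario"]][0] += int(r["passed"])

for scenario, (passed, total) in totals.items():
    print(f"{scenario}: {passed}/{total} passed")
# common policy question: 2/2 passed
# escalation edge case: 1/2 passed
```

Even this tiny roll-up shows the pattern the step describes: strong performance on common questions, weakness around escalation.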

Step 7: Investigate why single cases pass or fail

High-level signals are useful, but confidence is sturdiest when it’s grounded in the details. 

When a maker drills into a specific test case, explainability comes to the foreground. They can see which grader triggered a failure, how the agent responded across turns, which knowledge sources it used, and whether it invoked the expected tool or topic. 

This is often the turning point. Instead of guessing why something went wrong, you can finally understand what actually happened. Were the agent’s instructions unclear? Was the data incomplete? Did the agent confidently answer the prompt when it should have escalated it? 

With this newfound understanding, you can make informed changes to your agent, adjusting instructions, data, or behavior based on what the evaluation revealed. 

Makers can drill down into a single use case using Microsoft Copilot Studio's agent evaluations

Step 8: Validate progress through comparison 

Evaluation doesn’t end with a single run and a few gathered signals. Agents change over time. Instructions get updated. Data grows. Tools are added. 

With evaluations as an always-on motion, you can compare runs. You can check whether things are improving and catch regressions early. This ongoing view helps your team answer a simple but critical question: “Are we actually getting better?” 

For your HR agent, evaluations might confirm that an update made to the instructions reduced hallucinations without harming coverage. Confidence is no longer anecdotal. It is earned through evidence. 
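The run-over-run comparison boils down to checking each metric against its baseline. A sketch with hypothetical scores and a hypothetical noise tolerance:

```python
# Illustrative sketch: comparing two evaluation runs to confirm an
# improvement and catch regressions early. All numbers are invented.

baseline     = {"groundedness": 0.78, "completeness": 0.91, "capability": 0.85}
after_update = {"groundedness": 0.88, "completeness": 0.90, "capability": 0.71}

regressions = {
    metric: (baseline[metric], after_update[metric])
    for metric in baseline
    if after_update[metric] < baseline[metric] - 0.05  # tolerance for run-to-run noise
}
print(regressions)  # {'capability': (0.85, 0.71)}
```

Here the instruction update improved groundedness (fewer hallucinations) without hurting completeness, but the capability drop is flagged for investigation before release.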

Make agent evaluations your confidence loop

Evaluations don’t slow you down. They accelerate progress. Each iteration builds understanding and offers clarity. Each run reduces uncertainty. And each comparison strengthens trust, empowering you to build with confidence.

That confidence is what encourages teams to move from test to production, and from promising prototypes to agents that can be relied on in real business scenarios at scale. 

Ready to run your first agent evaluation? Get tactical guidance for configuring evals in Copilot Studio—complete with best practice evaluation methodologies.

New to Copilot Studio? Discover how you can transform your business by building, evaluating, managing, and scaling custom AI agents—all in one place.

The post How to evaluate AI agents in Microsoft Copilot Studio appeared first on Microsoft Copilot Blog.

Copilot Studio extension for Visual Studio Code is now generally available http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/copilot-studio-extension-for-visual-studio-code-is-now-generally-available/ Wed, 14 Jan 2026 16:00:00 +0000 The Microsoft Copilot Studio extension for Visual Studio Code is generally available, so you can build and manage Copilot Studio agents from the IDE you already use.

The post Copilot Studio extension for Visual Studio Code is now generally available appeared first on Microsoft Copilot Blog.

If you build agents with the Copilot Studio extension for Visual Studio Code, you already know the fastest way to iterate is to treat your agent like software: version it, review changes, and promote it through environments with confidence. Today, the Microsoft Copilot Studio extension for Visual Studio Code is generally available, so you can build and manage Copilot Studio agents from the IDE you already use.

What you can do with the Copilot Studio extension for Visual Studio Code

As agents grow beyond a few topics and prompts, teams need the same development hygiene they use for apps: source control, pull requests, change history, and repeatable deployments. The VS Code extension brings that workflow to Copilot Studio so makers and developers can collaborate without losing governance or velocity.

The extension supports a simple loop that fits naturally into your SDLC:

1) Clone an agent to your local workspace

Pull the full agent definition from Copilot Studio into a folder on your machine, so you can work locally with the full context of your agent.

2) Edit confidently in VS Code

Make changes to your agent components (topics, tools, triggers, settings, knowledge references) using a structured agent definition format and your existing VS Code workflow. The extension also provides IDE help like syntax highlighting and IntelliSense-style completion so edits are faster and less error-prone.

3) Review changes before they land

Preview what changed, compare cloud vs local, and resolve conflicts before you apply updates. This helps teams avoid overwriting each other’s work and makes collaboration practical at scale.

4) Apply changes back to Copilot Studio

Sync your updates to the cloud to test behavior and create evals as part of your normal iteration loop.

5) Deploy with the processes your team already uses

Use standard Git workflows and integrate agent definitions into automated deployment processes. This is the missing piece for teams that want agents to move through environments with the same rigor as code.

Built for development teams

The extension is designed for the way engineering teams actually work:

  • Standard Git integration for versioning and collaboration
  • Pull request-based reviews so changes are discussed and approved
  • Auditability over time, with a clear history of modifications
  • VS Code ergonomics: keyboard shortcuts, search, navigation, and a local dev loop

This extension is especially helpful if you:

  • Manage complex agents with many topics and tools and need fast search and navigation
  • Collaborate with multiple people and need PR workflows for safe changes
  • Want agent definitions in source control and environment sync through DevOps pipelines
  • Prefer building with your IDE plus an AI assistant for faster iteration

Develop Copilot Studio Agents using GitHub Copilot  

The Copilot Studio extension for Visual Studio Code lets you build and refine your Copilot Studio agent with AI help in the same place you write code. Use GitHub Copilot, Claude Code, or any VS Code AI assistant to draft new topics, update tools, and quickly fix issues in your agent definition, then sync changes back to Copilot Studio to test and iterate. The result is a faster inner loop with fewer context switches and a workflow that fits how development teams already work.

Get started

  1. Install the extension from the Visual Studio Marketplace
  2. Clone your first agent from Copilot Studio
  3. Make a small change locally
  4. Use Apply Changes to sync back to Copilot Studio and test

Learn more and share feedback

We built this extension so agent development can feel like the way software teams already work: in your editor, with source control, and with AI help when you want it. Try it in your next agent update and let us know what you want to see next!


Why Microsoft Copilot Studio is the foundation for agentic business transformation http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/why-microsoft-copilot-studio-is-the-foundation-for-agentic-business-transformation/ Tue, 18 Nov 2025 16:00:00 +0000 Explore new Microsoft Copilot Studio updates to shape agent behavior, enforce organizational standards, and support agentic business transformation.

The post Why Microsoft Copilot Studio is the foundation for agentic business transformation appeared first on Microsoft Copilot Blog.


Today’s leading organizations are going through an agentic business transformation. This change takes AI from concept to measurable impact by automating existing workflows and using agents to enhance productivity and reinvent entire functions. Copilot Studio, Copilot’s agent platform, provides a fully managed solution for accomplishing this.

Using Copilot Studio, organizations around the world can quickly bring the benefits of AI to their business. Copilot Studio empowers companies to streamline and automate their processes with agentic workflows, create single-purpose agents to solve specific problems, and develop multi-agent solutions that drive measurable business outcomes at scale. The result: a scalable, secure, and governable foundation that supports the needs of IT administrators and business owners measuring return on investment (ROI). This system accelerates agentic transformation by delivering speed-to-value without sacrificing quality or control.

At the same time, with Microsoft 365 Copilot, users can easily use AI to improve their personal and team productivity. This tailored experience for Microsoft 365 Copilot users offers a fast, guided way to set up agents to support your work and automate everyday tasks, removing them from your plate.

Today, we’re excited to share new capabilities in Copilot Studio that support all of these scenarios and groups that use our product, making it easier for makers and administrators to shape agent behavior, enforce organizational standards, and extend functionality with AI.

End-user improvements

Our Copilot Studio experience for building agents and workflows, as well as our agent building capabilities in Microsoft 365 Copilot, continue to support agent creation for all users, from professional makers and IT administrators doing enterprise AI transformation, to employees building agents and workflows for their personal use. Recent updates focus on making the process simpler and more efficient.

What’s new in Microsoft 365 Copilot

  • Redesigned creation experience: Build and refine agents through an improved conversational interface that guides users and taps into an expanded set of work-related knowledge sources.
  • File generation with natural language: Agents built in Microsoft 365 Copilot can now create Word, Excel, and PowerPoint files in seconds using natural language commands.
  • Seamless upgrade path: Copy agents from Microsoft 365 Copilot to Copilot Studio in one click, unlocking advanced AI agent customization.
  • Workflows agent in Microsoft 365 Copilot: Create, build, and manage workflows using natural language in chat. Boost productivity with quick scenarios like daily triage, weekly digests, and lightweight approvals—all directly within Copilot.
Microsoft Copilot Studio shows a user creating an agent named ‘Project Horizon Tracker’ with options to add tools, sources, and configure capabilities while uploading work content for the agent to access.

Maker improvements

IT application developers and other professional makers in the business can already build sophisticated agents in Copilot Studio without needing to code. Copilot Studio includes capabilities such as connecting and acting across more than 1,400 systems of record via Model Context Protocol (MCP), Power Platform connectors, and the Microsoft Graph. It also includes broad and deep tooling like autonomously writing and executing code, delivering rich out-of-the-box agent analytics and ROI measurement, and more, all built on the Microsoft governance and security platform. We’re excited to share new capabilities that give makers even more flexibility and control to design enterprise agents tailored to their unique organizational needs.

  • Choose your own model: Select from leading options like OpenAI’s GPT‑5, Anthropic’s Sonnet 4.5, and Opus 4.1 to power your agents. This empowers you to tailor agent intelligence to fit your specific business scenario, optimize performance, experiment with new capabilities, and deliver agents that meet your organization’s unique needs.
  • Ensure agents are ready for launch, and don’t regress over time, with Evaluations: Built-in evaluation tools help you test agents against real-world scenarios, compare versions, and track performance with clear metrics. Evaluations can give teams greater confidence that their investments are performing as expected.
  • Computer use: Agents can now automate tasks across apps and websites, using secure Windows 365 experiences—from hosted browsers for quick web automation to IT-managed Cloud PC pools for rapid scalability.

Admin improvements

As agents become central to automating work and transforming workflows, Copilot Studio is introducing new governance and protection capabilities designed to help organizations maintain strong oversight.

  • Expanded agent analytics: Clear insights into connected and child agent performance, detailed visibility into Copilot Credits consumption and limits, AI-generated summaries of top analytics insights, and the ability to interrogate analytics using natural language.
  • Real-time protection: Copilot Studio integrates with Microsoft Defender and other trusted security platforms, providing continuous monitoring and protection against threats like prompt injection—helping every agent run more safely.
  • Microsoft Entra Agent ID: Every agent made in Copilot Studio now gets a unique Microsoft Entra Agent ID, making it simple to register, manage, and govern your entire agent fleet.

Agent 365 and Copilot Studio: Unified control for agents

Agents are handling more responsibilities across enterprise operations, and Copilot Studio is your launchpad for building them. With the introduction of Agent 365, the control plane for agents, the rich governance and management capabilities we offer today (including sharing controls, advanced connector policies, agent inventory, zoned environment management, and more) will also be surfaced in the Agent 365 platform for agents built in Copilot Studio.

Additionally, in Copilot Studio, makers can now build agents that use the new Agent 365 MCP servers. These servers allow agents to schedule meetings in Microsoft Teams, draft documents in Word, send emails in Outlook, and update customer relationship management (CRM) records in Microsoft Dynamics 365. This supports delivery of intelligent, compliant workflows and agents with built-in audit trails and granular policy enforcement—all from one platform.

Agent 365 is available starting today in Microsoft 365 Admin Center with Frontier, Microsoft’s early access program for the latest AI innovations.

Scale to the Frontier Firm with control

True transformation happens when agents are built for scale, governed for compliance, and measured for impact. Copilot Studio delivers that foundation, so organizations can build enterprise multi-agent systems, automate workflows with precision, and reimagine processes while minimizing risk.

EY’s results show what’s possible when you invest in a comprehensive agent platform, built on Microsoft. They are just one of many enterprise organizations implementing agents with Copilot Studio. In this case, their PowerPost Agent built on Copilot Studio led to major improvements in journal processing:

  • 95% reduction in lead time
  • 37% cost savings1

That’s the difference between cobbling together siloed agent platforms and investing in a managed, scalable agent platform like Copilot Studio: agents and agentic process design that are repeatable, auditable, and scalable.

Get started today

To learn more about Copilot Studio and how it can transform your organization’s productivity, visit the Copilot Studio website and sign up for a free trial today. Take the Agent Readiness Assessment to benchmark your organization’s agent maturity across five critical areas—strategy, data, process, culture, and security—and get a personalized report to accelerate scalable agent adoption and drive agentic business transformation.

Want to explore all of Copilot Studio’s adoption content? Visit the Copilot Studio adoption page.


1 EY redesigns its global finance process with Microsoft Power Platform


What’s new in Copilot Studio: October 2025 http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/whats-new-in-copilot-studio-october-2025/ Mon, 10 Nov 2025 18:00:00 +0000 In this edition of our monthly roundup, we’re recapping new features released in Microsoft Copilot Studio in October 2025.

The post What’s new in Copilot Studio: October 2025 appeared first on Microsoft Copilot Blog.

]]>

In this edition of our monthly roundup, we’re recapping the most exciting new features Microsoft Copilot Studio released in October 2025.

Build and optimize agents

Validate agents at scale with evaluations for automated testing

Agent quality just became significantly easier to measure and improve. With the automated agent evaluation experience, now available in public preview, makers can systematically test and validate their Copilot Studio agents at scale. Instead of running scenarios one by one, they can build and execute evaluation sets directly from the agent or the Test Pane, delivering structured, repeatable insights both before and after publishing.

This new experience offers flexibility in how evaluation sets are created. Makers can upload files with predefined questions and answers, reuse recent Test Pane queries, add cases manually, or instantly generate queries using AI. This approach ensures that test coverage spans organization-specific scenarios while also incorporating AI-suggested questions based on agent metadata and topics, providing a comprehensive view of performance.

Evaluations are powered by a robust grader framework that gives makers control over how accuracy is measured. Options range from strict checks such as Exact Match and Partial/Contains, to semantic comparisons like Similarity and Intent Match, and even AI-powered metrics including relevance, completeness, and groundedness. Each test delivers clear pass/fail results, detailed scores, and drill-down views into the knowledge and topics used.
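To make the grader styles concrete, here is a plain-Python approximation. Copilot Studio's actual graders, especially the semantic and AI-powered ones, are far more sophisticated than this word-overlap stand-in:

```python
# Illustrative sketch of three grader styles: strict exact match,
# partial/contains, and a crude word-overlap proxy for similarity.

def exact_match(response: str, expected: str) -> bool:
    return response.strip().lower() == expected.strip().lower()

def contains(response: str, expected: str) -> bool:
    return expected.strip().lower() in response.lower()

def similarity(response: str, expected: str, threshold: float = 0.5) -> bool:
    # Jaccard overlap of word sets, standing in for semantic comparison.
    a, b = set(response.lower().split()), set(expected.lower().split())
    return len(a & b) / len(a | b) >= threshold

expected = "Employees accrue 20 days of paid leave per year."
response = "Full-time employees accrue 20 days of paid leave per year."

print(exact_match(response, expected))              # False: wording differs
print(contains(response, "20 days of paid leave"))  # True
print(similarity(response, expected))               # True: high word overlap
```

The example shows why grader choice matters: a strict exact-match check fails a perfectly good answer that a contains or similarity grader passes.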

For cases where reference answers are critical, makers can define expected responses manually or upload them in bulk, ensuring evaluations remain precise, transparent, and aligned with business expectations. AI in the Analytics tab in Copilot Studio further accelerates validation by automatically generating test sets that can be executed immediately with AI metrics graders or combined with manual and uploaded sets for broader coverage.

These capabilities introduce a scalable, repeatable framework for agent quality, helping teams identify gaps early, reduce surprises in production, and track improvements over time. While multi-turn testing and additional graders are on the roadmap, this public preview represents a major leap forward in automated validation. 

Evaluations are available now in public preview. You can access them from the agent or test pane by selecting Evaluation.

Build with the latest OpenAI models in Copilot Studio

Copilot Studio continues to evolve with new model updates that improve performance and expand flexibility for makers. Depending on use case and application, different models may provide better responses to users. We’re committed to providing model choices that work for your business processes.

Starting October 27, 2025, GPT-4.1 became the default model for all newly created agents, replacing GPT-4o. Testing shows meaningful gains in both latency and response quality, helping agents deliver faster, more consistent results. GPT-4o will remain available through November 26, 2025, and agents in production will continue to leverage this model until then. However, you can update the model and opt in to GPT-4.1 today through the model-selection experience.

In addition, Copilot Studio is expanding availability of the GPT-5 family of models, first introduced in August 2025. Makers can now use GPT-5 Auto, GPT-5 Chat, and GPT-5 Reasoning not only in test environments but also in deployed agents. These models bring enhanced reasoning, richer dialogue capabilities, and more flexible problem-solving for complex scenarios. Please note that GPT-5 models remain in public preview and are not yet recommended for production use.

Together, these updates give makers access to the latest OpenAI advancements while maintaining continuity for existing agents. You continue to have top model choice at your fingertips to help create and deploy more accurate and effective agents at scale.

Speed up agent flow execution with express mode

Flow execution just got faster in Copilot Studio. Express mode, now in preview, optimizes agent flows to increase the likelihood that they’ll finish within two minutes. This avoids agents or apps timing out while waiting for a response.

Express mode works best in flows that are logic-heavy but data-light. It limits flows to under 100 actions and smaller payloads so that the entire execution is more streamlined. For scenarios where large data sets need to be moved or loops iterate over large arrays, makers should test both with and without express mode.

This feature is in public preview and on by default. You can find the express mode toggle located on the flow’s Overview page in the editor.

Enable file uploads in omnichannel conversations

Copilot Studio now supports file uploads for custom agents in omnichannel scenarios. This means users can share images, documents, and other supported file types directly during agent interactions. This enhancement makes conversations more dynamic and context-rich by letting customers provide relevant files like receipts, forms, or photos right in the chat.

By enabling end user file uploads, agents can analyze attachments in real-time and deliver more accurate, personalized responses. This is a critical capability for customer service and contact center scenarios, where exchanging documents or screenshots is often key to resolving issues quickly. The feature also unlocks richer use cases for image analysis and document-based reasoning, improving both response quality and customer satisfaction.

File upload support is enabled by default for omnichannel custom agents, with optional controls available for agent makers to restrict supported file types in the agent manifest. All file types supported by Microsoft 365 Copilot are allowed up to 5MB (unless admins add restrictions).

This update enhances both the maker and end-user experience, bringing a richer, more comprehensive level of service for end users who rely on the agent for support.

Access external files and data with Model Context Protocol resources

Copilot Studio now supports Model Context Protocol (MCP) resources, expanding what agents can do with existing MCP connections. Makers have been able to use MCP tools to trigger actions and retrieve information. Now with resources support in preview, agents can read external content like files, API responses, or database records directly through MCP. This brings richer, real-time context into every interaction.

MCP resources act as file-like data objects that agents can query and reference during conversations. This allows agents to access customer-specific or system-specific content dynamically, without manual updates or re-training. For example, an agent could read the latest policy document stored in an MCP resource, summarize an uploaded file, or use current data from an API—securely and in context.

This enhancement builds upon the existing MCP integration in Copilot Studio, supporting deeper connections between agents and the systems they support. MCP resources are available now in public preview and are on by default for supported environments.

Measure and improve performance

Measure the return on investment (ROI) for conversational agents

Organizations can now view the ROI of conversational agents in Copilot Studio to calculate how much time and money the agent saves compared to other methods. Already available for autonomous agents, this enhancement, now generally available, gives teams a unified view of how all agent types drive direct business impact.

From the Analytics tab, makers can configure savings settings for each agent. This is where you define how much time or cost is saved per interaction or workflow. Copilot Studio then aggregates these metrics automatically. The resulting ongoing view helps quantify the business value agents deliver through reduced manual effort, faster resolutions, or process efficiencies.

By expanding savings analytics to include conversational agents, Copilot Studio helps organizations evaluate agent performance and impact consistently across their agent portfolio. With this capability, right inside the Analytics tab in Copilot Studio, makers can make data-driven decisions about where to invest and improve.
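The underlying arithmetic is straightforward. In this hypothetical, a maker has configured six minutes saved per interaction at a $45 hourly cost, the kind of settings the Analytics tab lets you define:

```python
# Illustrative sketch of the savings arithmetic behind ROI analytics.
# The per-interaction figures are hypothetical maker-configured settings.

minutes_saved_per_interaction = 6
cost_per_hour = 45.0
interactions_this_month = 1200

hours_saved = interactions_this_month * minutes_saved_per_interaction / 60
cost_saved = hours_saved * cost_per_hour
print(f"{hours_saved:.0f} hours, ${cost_saved:,.0f} saved")  # 120 hours, $5,400 saved
```

Copilot Studio aggregates these figures automatically across your agent portfolio; the sketch just makes the per-agent math visible.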

Analyze user questions by theme

Copilot Studio now helps makers understand agent performance by intelligently and automatically grouping user questions into themes. The themes give you category-level insights into customer intent and frequent topics, with a more manageable number of groups.

In the Themes list, you can see key metrics such as question volume, response rate, and user satisfaction. This at-a-glance overview makes it easier to see which topics your agent handles well and focus on areas where it may need refinement. Makers with the appropriate permissions can also drill down into each theme to review specific user questions, agent responses, and related metrics. This deeper visibility helps identify patterns in user intent, uncover gaps in coverage, and guide targeted improvements to knowledge and content.

The feature is automatically available for agents that use generative answers and have received at least 50 user questions within the past seven days. Once enabled, insights appear directly in the analytics dashboard; no further setup is required.

By organizing user questions into themes, Copilot Studio gives makers a clearer view of what customers are asking for and how effectively agents are responding. This helps the team continuously improve agent responses for their customers by making data-backed improvements to their knowledge sources.

Test and debug faster with an improved activity map

Test and troubleshoot Copilot Studio agents faster and more intuitively, thanks to a series of updates to the activity map and testing experience. These enhancements create a more cohesive view of how agents reason over data and user queries to respond. That, in turn, helps makers debug efficiently and refine performance with less context switching.

Makers can now view the transcript and activity details together, eliminating the need to toggle between separate views. This unified view provides a clearer picture of how each session unfolds, drawing from user input through the agent’s reasoning and response generation. The updated layout also lets makers pin sessions, adjust visible columns, and submit feedback on session details directly to Microsoft—improving collaboration and visibility.

It is now easier than ever to navigate activity data, understand the agent’s chain of thought, and connect analytics insights to individual sessions for deeper evaluation. These enhancements are generally available, with continued refinements releasing progressively across environments.

Manage and govern at scale

Control org-wide sharing of agents in Copilot Studio lite

A new admin control in the Microsoft 365 Admin Center, now generally available, gives organizations stronger governance over how agents created in Microsoft 365 Copilot are shared across the tenant. Admins can now restrict or disable organization-wide sharing of agents built in Copilot Studio lite (formerly known as the agent builder). This ability helps prevent oversharing while supporting safe adoption at scale.

In the Microsoft 365 Admin Center, go to the Copilot > Settings > Data Access > Agents page, where admins can choose who is allowed to share agents with the entire organization: all users (default), no users, or specific users and groups. When sharing is restricted, the “Anyone in your organization” option in the agent-sharing dialog is disabled, and makers see a tooltip explaining the policy. Existing access remains unchanged, but makers must comply with the defined settings before updating or broadening sharing.

This control helps ensure that agent collaboration aligns with organizational policies and regulatory requirements. This is particularly important for organizations in regulated industries such as finance, healthcare, and government. By bringing this configuration directly into the Microsoft 365 Admin Center, admins can manage agent governance alongside other Microsoft Copilot and AI settings, simplifying oversight and reducing risk.

Stay up to date on all things Copilot Studio  

Check out all the updates live as we ship them, as well as new features releasing in the next few months here: What’s new in Microsoft Copilot Studio

To learn more about Microsoft Copilot Studio and how it can transform your organization’s productivity, visit the Copilot Studio website or sign up for our free trial today.

The post What’s new in Copilot Studio: October 2025 appeared first on Microsoft Copilot Blog.

What’s new in Copilot Studio: June 2025 http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/whats-new-in-copilot-studio-june-2025/ Thu, 10 Jul 2025 16:00:00 +0000 In this edition of our monthly roundup, we're recapping new features in Microsoft Copilot Studio that were released in June 2025.

The post What’s new in Copilot Studio: June 2025 appeared first on Microsoft Copilot Blog.


In this edition of our monthly roundup, we’re recapping new features in Microsoft Copilot Studio that were released in June 2025.

Copilot Studio agent builder enhancements in Microsoft 365 Copilot

Bring new knowledge to agents created in the agent builder

For many of us, critical knowledge is fragmented across various sources. It’s easy to find yourself digging through endless Microsoft Teams chats, email threads, and scattered SharePoint sites and documents to find the right piece of information. Emails hold key decisions and plans, Teams chats capture daily collaboration and contexts, and valuable files seem to be lost forever at the moment you need them most.

That’s why we’ve introduced powerful new knowledge sources in the Microsoft Copilot Studio agent builder embedded in Microsoft 365 Copilot. Your agents can now reference Outlook emails and Teams messages—including group chats, channels, meeting chats, and even files you upload directly.

Screenshot of the knowledge section in the Copilot Studio agent builder in Microsoft 365 Copilot

Whether you’re in IT building a support agent trained on policy docs, internal email threads, and technical chat discussions, or a project manager automating reports pulled from daily team messages and inbox updates, you can now give your agents the ability to access a broader set of sources. 

To make adding knowledge easier than ever, the builder also suggests relevant knowledge sources based on your recent activity and queries. This helps you identify and connect the right data more quickly. With just a few clicks, you can equip your agents with the same knowledge and context your team relies on to get work done.

These features are generally available now for all users with a Microsoft 365 Copilot Chat license or an active pay-as-you-go plan. Open the agent builder and try them out today, and infuse your agents with even more knowledge to help you move faster, work smarter, and stay informed. Learn more.

Organize knowledge and guide agent responses with file grouping

Earlier this year, we introduced file collections, which give makers the ability to upload multiple files as knowledge sources for their agents. Now, we’re expanding that functionality with file grouping, a new capability (now in preview) that helps you better organize files and fine-tune how your agent uses them.

With file grouping, you can now name and describe your file sets and, more importantly, provide group-specific instructions that guide your agent’s retrieval behavior. This makes it easier to tailor responses based on use case, document type, or even user role without needing to change the content itself.

Screenshot of “Upload files” in the knowledge tab of Copilot Studio with the “Group these files” toggle highlighted

For example, you might group together internal HR policies, product-specific documentation, or region-specific guides, and tell the agent to only reference that group when certain conditions are met. It’s a more structured, transparent way to manage uploaded knowledge, and it makes your large Microsoft Dataverse knowledge bases easier to navigate and control.

To try it out, go to “Add Knowledge” in Copilot Studio and upload two or more files. You’ll see a toggle to enable grouping, along with options to name the group and set instructions. Learn more.
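Conceptually, group-specific instructions act as retrieval routing rules: the agent only draws on a group when its instructions apply to the current query. A minimal Python sketch of that idea (the group names, conditions, and context fields are all invented for illustration; Copilot Studio handles this internally):

```python
# Illustrative model of file groups with group-specific instructions.
# The "applies" predicates stand in for the agent deciding, from the
# group's instructions, whether a group is relevant to the query context.
GROUPS = {
    "hr-policies": {
        "files": ["leave-policy.docx", "benefits-faq.pdf"],
        "instructions": "Only use for questions from internal employees about HR.",
        "applies": lambda ctx: ctx.get("audience") == "employee",
    },
    "emea-guides": {
        "files": ["emea-pricing.xlsx"],
        "instructions": "Reference only for EMEA-region queries.",
        "applies": lambda ctx: ctx.get("region") == "EMEA",
    },
}

def eligible_groups(ctx):
    """Return the file groups the agent may draw on for this query context."""
    return [name for name, group in GROUPS.items() if group["applies"](ctx)]
```

The useful property is that retrieval behavior changes per group without editing any file contents, which is exactly what the grouping toggle enables in the product.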

Copilot Studio enhancements for makers

Build and configure powerful agents with the new Tools experience

Tools give agents their power and determine what tasks they can handle as well as what actions they can perform. A new, streamlined Tools tab experience now makes it easier for makers to discover, configure, and manage everything agents need to get work done.

You can now access all your tools in one place. Whether you’re connecting to Outlook, SharePoint, SAP, Snowflake, custom connectors, or Microsoft Power Automate flows, you’ll now find everything in a single, unified view. This includes custom connectors and Model Context Protocol (MCP) servers built for your organization.

Screenshot of the Copilot Studio Tools tab

You’ll also enjoy a more intuitive experience for discovering and adding prebuilt connector actions. Using prebuilt connector actions means you don’t have to build your own from scratch. With just a few clicks, you can set up agents to perform meaningful actions—like sending an email, updating a record, or scheduling a meeting—across Microsoft 365 (including Teams and Outlook), Dataverse, and other services.

Exploring available tools is easier, too. You can see all tools associated with each connector, helping you determine how these capabilities can work together across systems.

Tool configuration has also improved, with enhanced input widgets like calendar controls, file pickers, and time zone selectors. The tab also integrates with IntelliSense to help you configure tools more efficiently, and if a tool is not performing the way you expect, there are now improved debugging aids like clearer error messages. These make it much easier to quickly identify and resolve issues with tools like flows, connector actions, and MCP servers.

Altogether, the new Tools experience gives you a more centralized, intuitive view that reduces friction when connecting tools, knowledge sources, topics, and flows. All this is designed to make it easier to build the agents you want, with the capabilities they need. Learn more.

Add logic and test your prompts directly inside the prompt builder

The prompt builder tool in Copilot Studio allows makers to create reusable, modular prompts that guide an agent to perform a specific task, such as summarizing, classifying, or extracting information. Assigned to agents or flows, prompts enhance reasoning and output quality for the user. The prompt builder now has two new features that let you enrich and validate your prompts without ever leaving the editor.

You can now add Microsoft Power Fx logic directly inside your prompt inputs. This lets you dynamically calculate values, format or search text, manipulate collections, and more. You can do all this without setting up separate variables. For example, you can insert the current date, clean up messy input, or reference memory tables to ground the prompt with relevant context. Just type “/” or use “+Add content” to bring in a formula. It’s all the flexibility of Power Fx, now right where you need it. Learn more.

We’re also introducing prompt evaluation and testing in prompt builder, now in public preview. You can upload or generate test cases and run them in batches. Choose evaluation types like semantic similarity or JSON match, then review accuracy scores and detailed results. Each test run is saved so you can track performance over time. It’s a simple, scalable way to improve prompt quality without any custom tooling or manual experimentation required. From the prompt list, simply click the “…” menu and choose “Test” to get started. Learn more.
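To see what a batch evaluation run does conceptually, here is a rough Python sketch (the scoring here is a simplified stand-in for Copilot Studio's actual implementation; real semantic similarity uses embedding models rather than string ratios, and the function names are invented):

```python
import json
from difflib import SequenceMatcher

def json_match(expected: str, actual: str) -> bool:
    """Exact structural match: parse both sides and compare the objects,
    so key order and whitespace differences don't cause false failures."""
    try:
        return json.loads(expected) == json.loads(actual)
    except json.JSONDecodeError:
        return False

def text_similarity(expected: str, actual: str) -> float:
    """Crude stand-in for semantic similarity scoring."""
    return SequenceMatcher(None, expected.lower(), actual.lower()).ratio()

def run_batch(test_cases, agent_fn, threshold=0.8):
    """Run each test case through the prompt and score the result,
    returning overall accuracy plus per-case pass/fail details."""
    results = []
    for case in test_cases:
        actual = agent_fn(case["input"])
        if case["eval"] == "json":
            passed = json_match(case["expected"], actual)
        else:
            passed = text_similarity(case["expected"], actual) >= threshold
        results.append({"input": case["input"], "passed": passed})
    accuracy = sum(r["passed"] for r in results) / len(results)
    return accuracy, results
```

The value of the batched approach is the same as in the product: a saved accuracy score per run lets you compare prompt revisions over time instead of eyeballing individual outputs.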

Screenshot of test results from a prompt evaluation inside the prompt builder

These updates turn the prompt builder into a more powerful end-to-end workspace. Whether you’re creating prompts that drive critical business workflows or experimenting with new agent behaviors, you can now iterate faster, catch issues earlier, and deliver more reliable outcomes with less guesswork. It’s everything you need to go from idea to tested, production-ready prompt, all in one place.

Validate and extract text with Power Fx regular expressions

Parsing and validating text just got a major upgrade in Copilot Studio. You can now use industry-standard regular expressions in Power Fx formulas to match, extract, and work with complex text patterns.

Regular expressions (regex) offer a compact and powerful way to describe how a text pattern should be matched. They allow you to perform text validation and extraction. If your agents rely on pulling out details from user input, like extracting tracking numbers, validating email addresses, or calculating durations from structured text, this update is for you. Makers who work with integrations, dynamic data inputs, or more advanced business logic will especially benefit from the precision and flexibility that Power Fx regular expressions provide.

Before this update, these kinds of tasks required long chains of text functions or limited entity configurations. Now, with support for IsMatch, Match, and MatchAll, you can achieve the same results with less effort. You can even reuse expressions across the Microsoft Power Platform suite.
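Because Power Fx's IsMatch, Match, and MatchAll use industry-standard regular expression syntax, the patterns you write translate directly across languages. For illustration, here is the same kind of extraction and validation in Python's `re` module (the tracking-number format is invented for the example):

```python
import re

# Hypothetical pattern: two uppercase letters followed by nine digits,
# e.g. "AB123456789". The same pattern works in Power Fx's Match/MatchAll.
TRACKING = r"\b[A-Z]{2}\d{9}\b"

def extract_tracking_numbers(text: str) -> list[str]:
    """MatchAll equivalent: return every tracking number found in the text."""
    return re.findall(TRACKING, text)

def is_valid_email(text: str) -> bool:
    """IsMatch equivalent: validate that the whole string fits the pattern."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", text) is not None
```

Before regex support, the same logic would take a chain of text functions; a single pattern is both shorter and reusable across the Power Platform.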

Screenshot of MatchAll regular expression in Power Fx

Regular expressions may seem intimidating if they’re new to you, but the results are powerful. Find full details, syntax tips, and examples on Microsoft Learn. Then, start exploring the many ways Power Fx regular expressions can enhance your agents’ functions and responses. Learn more.

Streamline connections with in-chat single sign-on (SSO)

For users, setting up connections to services used by knowledge, tools, topics, and agent flows is also easier thanks to in-chat single sign-on (SSO), now in preview. When a user needs to access additional services that require credentials during an agent chat, the agent can automatically authenticate those other services. Instead of opening a separate Connection Manager page and jumping between windows, the agent prompts the user with an adaptive card for one-click approval. The agent makes the connection on the user’s behalf and uses that connection to continue responding to the user’s prompt.

Use any Copilot Studio-supported language with generative orchestration

Generative orchestration in Copilot Studio is now available in all Copilot Studio-supported languages beyond English (US). This is a major step toward making advanced AI capabilities accessible to makers and users around the world.

Screenshot of the agent creation page in Copilot Studio showing the user choosing Spanish as the primary agent language

Until now, generative orchestration was only available for agents built in English (en-US). With this update, you can build agents in your preferred language and still take full advantage of features like natural language reasoning, dynamic workflows, and data-aware actions. You can also test and validate orchestration behavior in different languages, helping ensure your agent responses are high quality and contextually accurate across languages before you publish.

If you’re building connected agents, there’s more good news: when a handoff occurs from one agent to another, the orchestrator seamlessly continues the conversation in the user’s language—no additional language configuration is required in the target agent. This makes it easier to scale multilingual solutions without duplicating setup across your connected agent ecosystem.

This update is now in public preview across all regions and languages supported by Copilot Studio. To try it out, simply enable generative orchestration in your agent. You can either set a non-English primary language when creating the agent, or add additional languages via settings once orchestration is turned on. Learn more.

Analyze knowledge use in autonomous agents, insights for unanswered questions

Agent analysis just got more insightful in Copilot Studio thanks to two exciting new features. These insights help you better understand how your agents perform and identify areas where there is room to improve.

Agents with autonomous capabilities (through triggers) already provide run history analytics in the Activity tab. Now, makers can gain additional insights with knowledge source analysis. This feature, now generally available, lets you see how your autonomous agents and agent flows used the knowledge they’re grounded in during each run. These metrics help you assess relevance, adjust content, and improve performance over time. It’s a valuable new lens into what’s working inside your autonomous agents while they’re working for you.

Makers can also now view where user questions went unanswered. This happens either because the agent didn’t have the knowledge available to identify an appropriate response, or because no topic was configured to handle the query. Now, the list of unanswered questions is categorized into themes and conversation contexts. No more combing through old conversations or building more spreadsheets—the Copilot Studio Analytics page automatically highlights patterns and clusters the topics where responses fell short. These insights, found in the Knowledge section on the page, make it easier to close content gaps and continuously refine your agents. This feature is currently in preview and will appear automatically for qualified customers.

Screenshot of unanswered questions analysis and insights in the Usage tab

These two features help you turn metrics into a data-driven feedback loop. Pinpoint what’s working, where content can improve, and how to make your agents more effective over time. With every run and agent interaction, you can gain deeper actionable insights to help you improve your agent.

Microsoft Power Platform admin center enhancements for admins

Track usage, manage agents, and allocate capacity

As your organization builds more agents in Copilot Studio, keeping track of them and understanding how they’re used becomes increasingly important. With new visibility and management tools in the Power Platform admin center (PPAC), admins can now support agent solutions at scale with far less manual effort.

The new agent inventory view provides admins with a tenant-wide list of all agents built in Copilot Studio. You’ll find key details like agent name, environment, owner, creation date, and status all in one place. No more checking each environment manually or stitching together spreadsheets. This supports smoother lifecycle management and more consistent agent experiences across teams. Learn more.

Screenshot of an agent inventory list inside the Power Platform admin center

The new agent usage analytics experience takes this a step further. Admins can now explore tenant-level data that shows how agents are being used across the organization, including usage trends, billing metrics, and a curated list of top agents. It’s a major step forward in tracking usage and enabling cost control practices at the organization level—helping IT teams see what’s working, what’s underused, and where support is needed. Learn more.

Both features are available in the PPAC today for tenant and environment admins in commercial regions. To get started, enable tenant-level analytics in your environment. Then go to the PPAC, select the “New admin center,” and navigate to Manage > Copilot Studio.

Stay up to date on all things Copilot Studio 

Check out all the updates live as we ship them, as well as new features releasing in the next few months here: What’s new in Microsoft Copilot Studio – Microsoft Copilot Studio | Microsoft Learn

To learn more about Copilot Studio and how it can transform your organization’s productivity, visit the Copilot Studio website or sign up for our free trial today.

Model Context Protocol (MCP) is now generally available in Microsoft Copilot Studio  http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/model-context-protocol-mcp-is-now-generally-available-in-microsoft-copilot-studio/ Thu, 29 May 2025 15:30:00 +0000 MCP now includes a new set of features and enhancements that support more robust and scalable deployments: tool listing, enhanced tracing, and more.

The post Model Context Protocol (MCP) is now generally available in Microsoft Copilot Studio  appeared first on Microsoft Copilot Blog.

When we introduced Model Context Protocol (MCP) integration in public preview, our goal was simple. We wanted to provide a standard, reliable way to allow you to bring your external data tools and knowledge into Microsoft Copilot Studio, offering easier integration and greater flexibility.  

Today, we’re thrilled to announce the general availability of MCP integration in Copilot Studio! With MCP, you can add AI apps and agents into Copilot Studio with just a few clicks, empowering makers to seamlessly integrate Copilot Studio with existing knowledge sources and APIs.

By connecting to an MCP server, agents are instantly equipped with the latest actions and information – automatically updated as systems evolve. The result is a faster, smarter way to build and scale agents, dramatically cutting down on manual upkeep and accelerating innovation.

New enhancements and capabilities

Now that MCP is generally available, it includes a new set of features and enhancements that support more robust and scalable deployments: 

  1. Tool listing: The MCP server settings page now provides users a clear and organized view of all available tools included with the MCP server. This improvement increases transparency, making it easier to explore and manage the full range of tools associated with your connection. 
  2. Streamable transport: We’ve expanded our transport layer to support streamable data transfer. The focus is on optimizing and staying up to date with the latest protocol version that aligns better with your deployment needs. Note that because SSE transport has been deprecated in the protocol, SSE transport support in Copilot Studio will remain in public preview. 
  3. Enhanced tracing and analytics: To provide deeper insights and improve debugging capabilities, we’ve significantly enhanced our tracing and analytics tools. The activity map in Copilot Studio will now allow you to see which MCP server and specific tool within was invoked at runtime. 
  4. Quality improvements: As part of our ongoing commitment to enhancing the user experience, we’ve implemented several quality improvements across the platform. These improvements include performance optimizations, bug fixes, and reliability enhancements. All these help your deployments run more smoothly and efficiently, with fewer interruptions. 

Why use MCP?

MCP simplifies the integration with AI apps and agents, empowering makers to: 

  • Seamlessly integrate with data sources: Whether you’re using internal APIs or third-party services, MCP ensures dependable and straightforward integration within Copilot Studio. 
  • Leverage a library of remote MCP servers: Beyond building custom integrations, users can access a growing marketplace of certified MCP servers. This makes it quicker and easier to connect with various tools. 
  • Enable dynamic and adaptable functionality: MCP servers can supply tools and data to agents in real time. This offers more flexibility while minimizing the effort and cost of integration and maintenance. 

For example, integrating a banking MCP server into an agent instantly unlocks a range of additional capabilities – without the overhead of manually configuring each action.

A diagram of a banking agent

To get started, access your agent in Copilot Studio, select ‘Add a Tool,’ and search for your MCP server! (Note: generative orchestration must be enabled to use MCP.) Integrating your own Model Context Protocol (MCP) server with Copilot Studio involves three main steps:

  1. Build the server: Start by creating a server using one of the available SDKs. This server acts as the core for managing your tools and data. You can customize it to meet your specific requirements – such as supporting custom data formats, model types, or unique workflows.
  2. Create a connector: After the server is ready, the next step is to build a custom connector that links your MCP server to Copilot Studio.
  3. Connect and use in Copilot Studio: Once your connector is in place, you can begin accessing your data and interacting with your models directly within Copilot Studio.
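Under the hood, MCP communication is JSON-RPC: clients list the server's tools, then invoke them by name. The steps above can be sketched as a transport-agnostic dispatch loop in Python (stdlib only and deliberately simplified; a real server would use an official MCP SDK and handle initialization, input schemas, and error cases, and the banking tool here is invented):

```python
import json

# Illustrative tool registry. A real MCP server would also publish a JSON
# schema for each tool's inputs so clients know how to call it.
TOOLS = {
    "get_balance": lambda args: {"account": args["account"], "balance": 1250.00},
}

def handle_request(raw: str) -> str:
    """Dispatch a single JSON-RPC 2.0 request to the matching handler."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool(req["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Because the tool list is served dynamically, adding a new entry to the registry is all it takes for connected agents to discover a new capability, which is the property that keeps MCP-backed agents current as systems evolve.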

We hope the new improvements enhance your experience as you explore MCP. We’re looking forward to your valuable feedback as you continue to use and grow with the platform. 

For more information about MCP and to stay on top of coming updates, bookmark the following links: 

Announcing new Microsoft Dataverse capabilities for multi-agent operations http://approjects.co.za/?big=en-us/microsoft-copilot/blog/copilot-studio/announcing-new-microsoft-dataverse-capabilities-for-multi-agent-operations/ Tue, 20 May 2025 15:00:00 +0000 Microsoft Dataverse has a multitude of data tools, knowledge tools, and AI tools available to help organizations manage powerful agent ecosystems.

The post Announcing new Microsoft Dataverse capabilities for multi-agent operations appeared first on Microsoft Copilot Blog.

Microsoft Build 2025 is underway, and Microsoft Copilot Studio has already announced many new and exciting features. Behind the scenes, many of these powerful agent capabilities rely on Microsoft Dataverse—the secure, scalable agent platform that extends agents with enterprise data.

As agents evolve to handle more complex, business-critical tasks, they need to be equipped with a new spectrum of tools:

  1. Data tools to operate human-agent teams for specific business processes
  2. Knowledge tools to retrieve the right context at the right time
  3. AI tools to enable complex reasoning and customizable actions

Dataverse is the platform that brings all three together—powering the data, context, and intelligence that agents need to transform business processes with AI. Over the past few months, Dataverse has seen incredible innovation in each of these areas. I’m excited to share a number of new features designed to upgrade agent interaction and performance—whether you’re orchestrating workflows across systems, grounding agents in enterprise knowledge, or unlocking more intelligent and adaptable behavior.

Data tools to operate human-agent teams for specific business processes

Dataverse: the operational database for agents

Dataverse provides the platform for organizations to store, manage, and orchestrate business and operational data across your agent ecosystem. With Dataverse under the hood, Microsoft Copilot Studio makers can deploy agents to handle logic-driven, adaptive tasks at scale while maintaining human oversight where needed.

An infographic of Microsoft Dataverse's major pillars: Built for agentic systems; Transform common data into actionable knowledge; and Enable complex reasoning and customizable actions

Dataverse creates a cohesive environment where you can apply generative AI atop your business data and operational data in Dataverse with features like AI-powered Dataverse search, Tools, prompt columns, and Model Context Protocol for seamless, connected operations.

Prompt columns, for instance, bring generative AI directly into Dataverse tables with columns whose values are generated by a prompt. For example, in a product review table with “Review Text” and “Sentiment” columns, you can set “Sentiment” as a prompt column that uses AI to evaluate the review text and return a value of “Positive” or “Negative” in the column. You can also reference other fields in the row, creating dynamic business logic powered by natural language prompts.
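Conceptually, a prompt column behaves like a computed column whose value comes from running a prompt over other fields in the same row. A runnable mock in Python (the keyword classifier below is only a stand-in for the AI evaluation, which Dataverse performs for you; all names are illustrative):

```python
def sentiment_prompt(review_text: str) -> str:
    """Stand-in for the per-row AI evaluation a prompt column performs.
    The real column sends the referenced fields to a model; this mock
    keys off obvious words so the example is runnable."""
    negative_words = {"broken", "refund", "terrible", "worst"}
    lowered = review_text.lower()
    return "Negative" if any(w in lowered for w in negative_words) else "Positive"

def apply_prompt_column(rows, source_field, target_field, prompt_fn):
    """Fill the generated column for every row, as Dataverse does on read."""
    for row in rows:
        row[target_field] = prompt_fn(row[source_field])
    return rows
```

The design point is that the classification logic lives in the column definition, not in every app or agent that reads the table, so all consumers see consistent generated values.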

Screenshot of a prompt column configured in a Dataverse table

These increasingly sophisticated data-level features lay the groundwork for next-generation, business-aware, agent-first systems. Imagine how you could transform your organization with human-agent teams, with people leading workflows and agents performing tasks.

From an Invoice Processing Agent capturing structured invoice data for later review, to a claims processing system where an autonomous agent handles intake while a chat assistant interacts with users and a person approves the claims, Dataverse is the trusted common data platform for both operational data and business data that these human-agent teams use for business workflows.

A diagram showing multiple agents - autonomous and human-interacting - both using Dataverse as a single operational data source

To learn more, watch the Microsoft Build 2025 on-demand session: Dataverse for agents.

Dataverse Model Context Protocol server

The Dataverse Model Context Protocol (MCP) server, now in public preview, makes your business data interactive, turning structured Dataverse information into dynamic, queryable knowledge for Copilot Studio agents. For both developers building advanced workflows and makers configuring intelligent experiences, MCP helps make data conversational and usable.

Screenshot of available MCP servers, highlighting the Dataverse MCP server

Once connected, the Dataverse MCP server enables four key capabilities:

  1. Query: Discover available tables, explore schema, and retrieve real-time data via structured or natural language queries
  2. Knowledge and search: Let agents chat over your data, search knowledge sources, and deliver contextual answers without brittle configurations
  3. Upload (Create/Update Records): Insert new records or update existing ones in Dataverse, with schema-aware mapping to maintain data integrity
  4. Generate with grounding prompts: Run custom prompts grounded in real business context (e.g. summarizing a record, evaluating sentiment, or drafting a tailored response)

Exposing your Dataverse environment through an MCP server brings your enterprise data to life by giving your agents the ability to reason across structured data, take informed actions, and generate meaningful outputs while honoring your data model and access controls.
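In practice, those capabilities surface as MCP tools that agents invoke over JSON-RPC. A hypothetical request an agent might send to a natural-language query tool (the tool and argument names are illustrative, not the Dataverse MCP server's actual contract):

```python
import json

# Hypothetical payload for a natural-language query against Dataverse
# via an MCP tool call. Field names under "arguments" are invented.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "query",  # illustrative tool name
        "arguments": {"question": "Open claims filed this week, by region"},
    },
}
wire_format = json.dumps(request)  # what actually crosses the transport
```

The server resolves the question against the tables and schema the caller is allowed to see, which is how the access controls mentioned above stay in force.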

Enterprise data integration across Copilot Studio, Microsoft Fabric, and Microsoft 365 Copilot

Dataverse knowledge in Copilot Studio, now generally available

Agents are only as good as the knowledge they can access, and AI-powered Dataverse continues to be the backbone of Dataverse knowledge in Copilot Studio. Dataverse knowledge connects structured and unstructured data from across your organization—including Dynamics 365, Power Platform, and external systems—into a unified, context-rich knowledge network that agents can reason over and act on, including in prompts. New improvements include support for multi-line text and file-type columns in Dataverse knowledge.

Use near-real-time data warehousing with Microsoft Fabric

All your data in Dataverse is pre-indexed and ready for near-real-time analytics. Whether it’s critical business data from Dynamics 365, custom Power Apps applications, or configuration and response data from agent interactions, everything is stored securely in Dataverse and instantly available for analysis.

With data constantly updating, Dataverse maintains a near-real-time data warehouse that you can explore using Data Agents in Microsoft Fabric. With just a few clicks in the Power Apps maker portal, you can link Dataverse to Fabric, helping you unlock deep insights on your data.

To make it even easier for Fabric data professionals to get secure access to Dataverse, we introduced Mirrored Dataverse in Fabric. This feature goes to public preview in June 2025.

Gif showing Mirrored Dataverse in action

Dynamics 365 data now available in Microsoft 365 Copilot

Customers can now search and reason over Dynamics 365 data, including contacts, accounts, leads, opportunities, and cases, directly within Microsoft 365 Copilot. This new integration brings business and productivity data together, allowing users to glean insights, take action, and stay in the flow without switching contexts.

Screenshot of a Copilot agent pulling Dynamics 365 data in a chat using Dataverse

Previously, users had to rely on a dedicated agent or navigate to Dynamics directly. With this feature (currently in private preview), you can ask questions and complete tasks using both Microsoft Office and CRM system data in one Copilot experience. Join the early access program and help shape what’s next.

Knowledge tools to retrieve the right context at the right time

The knowledge platform in Microsoft Copilot Studio, powered by Dataverse, allows customers to seamlessly use their own data—such as local files and Dataverse tables—as knowledge sources to build intelligent, context-aware agents grounded in their proprietary content. Now it’s easier than ever to unify and operationalize knowledge across your agent ecosystem.

New knowledge sources and connectors

We continue to expand your agents’ reach with new knowledge sources like Snowflake, SAP, Databricks, Confluence (cloud only), OneDrive for Business, and SharePoint Lists. Support for Salesforce, Zendesk, and ServiceNow now includes unstructured content, such as knowledge base articles. For greater functionality, Dataverse and uploaded files now support image extraction, multilingual content, and querying embedded tabular data. Finally, Azure AI Search—a proven solution for information retrieval in Dataverse’s Retrieval Augmented Generation (RAG) architecture—is now generally available as a knowledge source.

A diagram of Retrieval Augmented Generation (RAG) patterns in Dataverse

Enhanced Power Platform connector Software Development Kit (SDK)

The new enhanced Power Platform connector SDK, now in preview, makes it easier to bring structured external data into Power Apps, Dataverse, and Microsoft Copilot Studio. With the SDK, connectors can expose structured data like full tables and metadata—not just raw APIs. That means that makers can bind tables directly to user interface controls in Power Apps, apply familiar Power Fx functions like Sort and Filter, and ground Copilot agents with business data as a knowledge source, not just an action.

For example, a Databricks connector (public preview in June 2025) built with the enhanced SDK lets makers surface tables in apps and agents automatically. Power Apps understands the schema, and Copilot Studio can answer questions based on the data without manual configuration. Makers and developers can use structured connectors built by software development vendors, or create their own using the SDK.

To learn more, watch the Microsoft Build 2025 on-demand session: Knowledge in Copilot Studio.

AI Tools to enable complex reasoning and customizable actions

Centralized Tools hub in Copilot Studio

The new Tools tab in the left navigation of Copilot Studio gives makers a centralized place to create and manage reusable functionality across all agents in an environment. Using tools, an agent can take actions in external systems (not just read data from them).

Screenshot of the Tools tab in Copilot Studio, using Dataverse

There are six tool types in Copilot Studio, rolling out to public preview in June 2025:

  • Model Context Protocol, which allows users to connect with existing knowledge servers and data sources directly within Copilot Studio
  • Agent flows, which makers can use to automate deterministic and repeatable workflows
  • Computer use, which allows agents to navigate and interact with web and desktop applications
  • Custom connectors and REST APIs, which both allow makers to connect to third-party systems that aren’t available as prebuilt connectors
  • Prompts, which let makers create AI-powered instructions for smarter agents, flows, and apps. With new support for Power Fx expressions, prompts can also perform data transformations like calculations, formatting, or text manipulation
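
For context on the first tool type above: the Model Context Protocol is an open, JSON-RPC 2.0-based standard, so at the wire level a tool invocation against an MCP server is a request like the one sketched below. The tool name and arguments are hypothetical, and Copilot Studio handles this exchange for you; the sketch only shows the protocol shape:

```python
import json

# A minimal MCP "tools/call" request (JSON-RPC 2.0). The tool name and
# arguments below are hypothetical examples, not a real server's contract.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_order_status",        # hypothetical tool on the server
        "arguments": {"order_id": "A-1042"},  # hypothetical input schema
    },
}

payload = json.dumps(request)
print(payload)
```

Because every MCP server speaks this same shape, an agent can discover and call tools from any compliant server without per-server integration code.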

To learn more, watch the Microsoft Build 2025 on-demand session: Tools for agents.

New autonomous agents to put to work faster

Autonomous agents can be a huge boon for your business processes, but building them from scratch takes time, so don't start from zero. To help makers move faster, we’re introducing three new managed agents, available in preview now in the Create tab of Copilot Studio:

The Document Processor Agent is a robust, out-of-the-box solution for automating document workflows. Once installed, it monitors an email inbox for attachments, extracts key information from incoming files, and exports structured data to a target system. When needed, it seamlessly requests human validation, routing documents to assigned reviewers and tracking progress through an integrated monitoring app. Notifications are sent via Teams or Outlook, and validators can view, correct, or approve extracted content in just a few clicks.

Screenshot of the Document Processor agent in Copilot Studio, using Dataverse

Unlike traditional document automation—where each solution generally starts from scratch—this agent comes prebuilt and requires minimal setup. Because it uses prompts with multimodal input, no model training is required, and there’s no need to build separate validation apps or flows. Going from install to action can take minutes, not hours.
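
The monitor → extract → validate → export loop the agent packages can be sketched roughly as follows. Every function name here is a hypothetical stand-in to illustrate the flow—none of these are Copilot Studio APIs, and the real agent does the extraction with multimodal prompts rather than code:

```python
# Hypothetical stand-ins for the stages the Document Processor agent automates.

def extract_fields(document: str) -> dict:
    # In the real agent, a multimodal prompt extracts key fields; stubbed here.
    return {"vendor": "Contoso", "total": "418.00", "source": document}

def needs_review(fields: dict) -> bool:
    # Example routing rule: send high-value documents to a human validator.
    return float(fields["total"]) > 1000

def process_inbox(attachments: list[str]) -> list[dict]:
    # For each incoming attachment: extract, then either export the structured
    # data or queue it for human review in the monitoring app.
    results = []
    for doc in attachments:
        fields = extract_fields(doc)
        fields["status"] = "pending_review" if needs_review(fields) else "exported"
        results.append(fields)
    return results

print(process_inbox(["invoice_0231.pdf"]))
```

The value of the managed agent is that all of these stages—plus the Teams/Outlook notifications and the validator experience—ship preassembled, so makers configure rather than build them.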

To learn more about the Document Processor agent, watch the Microsoft Build 2025 on-demand session: Agents in action: Document processing 2.0.

The other two new templates automate similarly mission-critical tasks:

  • The Customer Brief Agent pulls from your business data to generate timely, relevant executive briefs before client meetings.
  • The Lead Manager Agent acts as a top-of-funnel assistant, autonomously processing and responding to inbound leads.

All three agents are designed to help you get started quickly and scale faster. Learn more at Microsoft Build 2025: Build autonomous agents in Copilot Studio.


All these Dataverse updates represent a major leap forward in how makers and developers can build, scale, and manage intelligent agents with Dataverse at the core. From deeper integrations to smarter tools and more powerful out-of-the-box agent templates, Copilot Studio is the go-to platform for business-ready AI.

To dive deeper into these new capabilities, join us for some of our relevant sessions at Microsoft Build 2025.

The post Announcing new Microsoft Dataverse capabilities for multi-agent operations appeared first on Microsoft Copilot Blog.
