Data analyst - Microsoft Dynamics 365 Blog
http://approjects.co.za/?big=en-us/dynamics-365/blog/job-role/data-analyst/
The future of agentic CRM and ERP
Wed, 01 Apr 2026 22:07:04 +0000

Take the Guesswork Out of Project Quoting with What-if Analysis in Dynamics 365 Project Operations
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/04/02/what-if-analysis-dynamics-365-project-operations/
Thu, 02 Apr 2026 15:30:00 +0000

Make smarter, faster, and more confident quote decisions—right where you work. 

Project quoting has always required a careful balance—aligning profitability with competitiveness, staffing strategies with delivery costs, and customer expectations with business outcomes. 

But evaluating these trade-offs hasn’t always been easy. It often means jumping between tools, manually recalculating numbers, and relying on assumptions to guide critical decisions. 

That’s where What-if Analysis (Preview) in Dynamics 365 Project Operations comes in. 

This new capability brings real-time simulation directly into your quoting workflow—so you can explore options, compare outcomes, and make decisions with clarity before finalizing a quote. 

What Is What-if Analysis? 

What-if Analysis introduces a dedicated simulation workspace within a project quote, allowing you to model changes to quantities and pricing and instantly see their financial impact. 

Instead of working through “what if” scenarios offline, you can now: 

  • Explore multiple approaches within the quote 
  • Compare their outcomes side by side 
  • Apply the most effective scenario when you’re ready 

All without modifying the actual quote until you choose to. 

It’s a more intuitive, controlled way to move from estimation to decision-making. 

Turn Everyday Questions into Clear Answers 

Every project quote involves key decisions: 

  • Should work shift to a lower-cost delivery center? 
  • What happens if billing rates increase for specific roles? 
  • Can you stay competitive while protecting margin? 

With What-if Analysis, these are no longer hypothetical questions. 

As you adjust quantities and pricing, the system instantly recalculates key financial metrics—including revenue, cost, gross margin, and budget variance—so you can clearly see the impact of every change. 

This real-time feedback helps you move quickly from exploration to confident, data-backed decisions. 
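To make the arithmetic concrete, here is a minimal sketch of how such a recalculation might work. The field names and formulas are illustrative assumptions for this example, not the product's internal pricing logic.

```python
from dataclasses import dataclass

@dataclass
class QuoteLine:
    role: str
    quantity: float   # e.g., hours
    bill_rate: float  # billing price per unit
    cost_rate: float  # delivery cost per unit

def metrics(lines, budget):
    """Recompute the headline financials for one scenario."""
    revenue = sum(l.quantity * l.bill_rate for l in lines)
    cost = sum(l.quantity * l.cost_rate for l in lines)
    gross_margin = (revenue - cost) / revenue if revenue else 0.0
    return {
        "revenue": revenue,
        "cost": cost,
        "gross_margin": round(gross_margin, 4),
        "budget_variance": budget - cost,
    }

# Baseline vs. a what-if: shift work to a lower-cost delivery center.
baseline = [QuoteLine("Developer", 400, 150, 100)]
what_if = [QuoteLine("Developer", 400, 150, 70)]

print(metrics(baseline, budget=45000))
print(metrics(what_if, budget=45000))
```

Comparing the two dictionaries side by side mirrors the scenario-comparison idea: same revenue, but the what-if scenario carries a lower cost and therefore a higher margin and budget variance.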

How It Works 

Getting started is simple. From the What-if Analysis tab on a Draft quote, you can create a scenario based on the quote’s existing data. Each scenario is isolated, allowing you to experiment freely without affecting the live quote. 

Within the simulation workspace, you can adjust quantities and pricing across dimensions such as resourcing unit, role, or any custom pricing dimension configured in your environment. Whether you’re making high-level adjustments or refining details at the quote line level, the experience is designed to be flexible and intuitive. 

You can create multiple scenarios—each representing a different approach—and compare them side by side. Built-in comparison views highlight differences in financial outcomes, making trade-offs easier to evaluate. 

When you’ve identified the best approach, applying the scenario updates the Draft quote in place—so you can move forward with confidence, without creating a new revision. 

What This Means for You 

What-if Analysis transforms how you approach project quoting—bringing clarity, speed, and confidence into every decision. 

  • Make decisions with confidence: Instantly understand how pricing and staffing changes impact revenue, cost, and margin—before committing to a quote 
  • Optimize for both competitiveness and profitability: Evaluate trade-offs in real time and choose the approach that best aligns with your goals 
  • Reduce reliance on spreadsheets and manual iteration: Keep simulation and decision-making within Project Operations 
  • Drive faster, more aligned conversations: Use data-backed scenarios to align stakeholders and move decisions forward 

Instead of relying on assumptions, your team can now explore possibilities, evaluate outcomes, and finalize quotes with confidence—knowing the numbers support the decision. 

Availability and Prerequisites 

What-if Analysis is currently available as a preview feature in: 

  • Project Operations Core (Lite deployment) 
  • Project Operations integrated with ERP 

To get started, enable the What-if Analysis feature flag in your environment. The What-if Analysis tab will then be available on qualifying Draft quotes. 

A few things to keep in mind: 

  • Scenarios can only be created on quotes in Draft status that contain estimates 
  • Activated or closed quotes are not eligible 
  • If the underlying quote changes, scenarios will need to be recreated 

As with all preview features, we recommend evaluating this capability in a non-production environment. 

The Bottom Line 

Every project quote is a critical business decision. What-if Analysis gives you the tools to approach that decision with clarity—replacing guesswork with real-time insight and manual effort with seamless simulation. 

The result is not just better quotes, but better decisions—ones that are competitive, financially sound, and aligned with your business goals. 

Get Started 

Enable What-if Analysis in your environment today and start turning “what if?” into “we know.” 

Learn More 

We are constantly enhancing our features. To learn more about What-if Analysis in project quotations, visit Quote What-if Analysis.

The post Take the Guesswork Out of Project Quoting with What-if Analysis in Dynamics 365 Project Operations  appeared first on Microsoft Dynamics 365 Blog.

Meet the Contact Center Champions Driving the Future of Customer Experience
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/04/01/dynamics-365-contact-center-champions/
Wed, 01 Apr 2026 19:20:26 +0000

Meet the practitioners shaping how Dynamics 365 Contact Center is adopted, scaled, and improved in real-world environments.

Behind every modern contact center transformation is a group of passionate practitioners: people who don’t just adopt technology but shape how it’s used, challenged, and improved.

The Contact Center Champions Community brings together these practitioners from around the world. As customer advocates, they are deeply hands‑on with Dynamics 365 Contact Center. They actively influence product direction and share real‑world insights with peers and Microsoft engineering teams.

Below, we’re spotlighting those champions whose journeys reflect diversity, ambition, and impact. Bookmark this page to stay updated on new champion stories.

Please visit Customer Success Stories | Microsoft for more curated organizational stories.


Sachin Patel
Head of IT Operations,
Sage Homes

As Head of IT Operations at Sage Homes, Sachin Patel leads the organization’s end‑to‑end journey with Microsoft Dynamics 365, supporting a rapidly growing social housing portfolio of nearly 22,000 properties across England. As Sage Homes brought tenant services fully in‑house, the contact center became a critical hub—requiring a platform that could handle high volumes, protect sensitive interactions, and give agents immediate context across customers and properties. Dynamics 365 Contact Center provided the foundation to unify voice, case management, and data into a single operational experience.

Rather than rushing adoption, Sachin’s team focused on building trust in the system, simplifying routing, designing experience‑based skill handling for agents, and reducing friction through a true 360‑degree view of tenants and properties. With Copilot‑powered summaries now embedded into daily workflows, agents can quickly understand long and complex interaction histories. Meanwhile, AI‑assisted chat handles high‑volume inquiries, reducing escalations to human agents by around 30%. Strong governance, reporting, and access controls ensure the platform scales responsibly as Sage Homes expands its use of AI and digital channels. Sachin’s approach reflects what it means to be a Contact Center Champion. He leads with pragmatic adoption, measurable outcomes, and AI deployed only where it delivers real operational value.

We recently built a new contact center and we’re expecting to get 10,000 virtual customers overnight. We could not have met that demand without the omnichannel capabilities we have in Dynamics 365 Contact Center.

Read the Sage Homes Story

Lorenz Corradini
Head of Center of Competence for Low Code/No-Code (CoC LCNC)
SIAG – Südtiroler Informatik AG – Informatica Alto Adige SpA

Lorenz Corradini is leading a major transformation in how public services are delivered in South Tyrol, Italy. As the in-house IT provider for regional public administration, SIAG supports more than 350 services across 23 domains, serving citizens across healthcare, education, housing, and digital administration.

Facing over 40 siloed legacy systems and a looming workforce shortage, the team at SIAG adopted Microsoft Dynamics 365 Contact Center to create a unified, data‑driven citizen engagement platform. Early results include 30% AI‑assisted resolution within weeks, faster service delivery, and rapid development of new digital services using Power Platform. Lorenz and team are laying the foundation for a more accessible, multilingual, and scalable model of public service delivery.

We used to lose valuable citizen data across dozens of disconnected systems. That’s over. With Dynamics 365 Contact Center and Power Platform, every interaction is captured, connected, and actionable. The citizen is finally at the center, and we use that data to get better every single day.

Read the SIAG Story

Kamal Pandey
Lead Developer, Dynamics CRM
Sandvik Coromant

Kamal Pandey plays a key role in scaling a global, B2B contact center supporting manufacturing customers across industries such as automotive, aerospace, mining, and heavy engineering. Based in Sweden, Kamal leads CRM and contact center development for an organization with 3,000+ Dynamics users and four global customer service hubs spanning Europe, the Americas, India, and China.

Sandvik’s contact center runs fully on Dynamics 365 Customer Service and Dynamics 365 Contact Center, handling chat, voice, and email in a multilingual environment. The majority of capabilities are delivered through out-of-the-box configurations. Kamal’s team is actively adopting AI-driven features, such as quality evaluation agents for scalable coaching, while taking a thoughtful, trust-first approach to Copilot adoption. His focus on maintainability, scale, and agent experience shapes Contact Center adoption in complex enterprise B2B environments.

At a global scale, customer operations demand systems that stay reliable under pressure, and complexity is the default. With Dynamics 365 and the Power Platform, we’ve created a sustainable architecture that supports thousands of users while using AI to enhance, never replace, the human touch. For me, great customer experience starts with solid architecture and ends with people empowered to do their best work.

Read the Sandvik Coromant Story
Watch the Sandvik Coromant demo at Microsoft Ignite.

Rosa Lohman
Business Analyst
GVB

Rosa Lohman supports customer service operations for Amsterdam’s public transport network, including trams, buses, metros, and ferries across the city. Working with a lean team, Rosa oversees how Microsoft Dynamics 365 Customer Service and Contact Center are used to manage voice, email, and web‑based inquiries for approximately 30 customer service agents.

Her team embraced Copilot‑powered call summaries and transcriptions to reduce manual effort and improve efficiency. They are exploring AI‑driven knowledge and case management agents to further optimize service delivery. With a strong focus on insight‑driven improvements, such as identifying automation opportunities for low‑value cases and evaluating digital channels like WhatsApp, Rosa brings a practical, user‑centered perspective to modernizing customer service in the public transportation sector.

Read the GVB Story

The post Meet the Contact Center Champions Driving the Future of Customer Experience appeared first on Microsoft Dynamics 365 Blog.

Introducing Service Agent in Microsoft 365 Copilot
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/03/31/service-agent-microsoft-365-copilot/
Tue, 31 Mar 2026 19:30:00 +0000

Service Agent brings Dynamics 365 Customer Service context, insights, and actions directly into Microsoft 365 Copilot.

A new way to bring service workflows, insights, and actions directly into Copilot

Microsoft 365 Copilot is becoming the primary interface for how people get work done. As more teams rely on Microsoft 365 Copilot to retrieve information, reason over data, and take action, the need for domain-specific intelligence, especially for customer service, has never been greater.

On March 9th, we announced the frontier transformation, where we introduced a new type of business application integrated with Microsoft 365 Copilot. Today, we’re excited to introduce Service Agent in Microsoft 365 Copilot: a purpose‑built agent that brings customer service context, insights, and actions directly into the Copilot experience employees already use every day.

Service Agent enables service teams to move faster, stay focused, and resolve issues with greater confidence, without switching tools or losing context.

What is Service Agent?

Service Agent is a declarative agent that runs inside Microsoft 365 Copilot, designed specifically for customer service scenarios.

It combines:

  • The reach and familiarity of Microsoft 365 Copilot
  • The depth of Dynamics 365 Customer Service data
  • The power of agents that can reason, retrieve, and take action

With Service Agent, service professionals can interact with cases, knowledge, and service workflows using natural language—grounded in both Microsoft 365 and Dynamics 365 Customer Service system data—right from Copilot.

Why this matters for IT and service leaders

For years, service professionals have had to juggle multiple tools: CRM systems, knowledge bases, emails, internal chats, and reports, often switching context.

Service Agent changes that model by making Copilot the primary system of engagement for service work.

This approach delivers three key benefits:

1. One Copilot experience across all applications, including Dynamics 365 Customer Service

Service Agent brings service workflows into the same Copilot surface used for everyday productivity, reducing friction, training overhead, and context switching.

2. Faster resolution through richer context

By grounding Copilot in both Microsoft 365 data (Outlook, Teams, SharePoint) and Dynamics 365 service data (such as cases, emails, knowledge, customer history), service professionals can build case understanding in seconds—not minutes.

3. Action, not just answers

Service Agent doesn’t stop at reading and synthesizing data. It can help service professionals prioritize cases, update records, draft responses to customers, and trigger workflows—all through natural language.

What Service Agent can do in Public Preview

In its initial release, Service Agent enables scenarios such as:

  • Case understanding and summarization
    Quickly generate rich summaries of customer cases, including context from prior interactions and related knowledge.
  • Case prioritization and workload awareness
    Ask Copilot what needs attention now, based on customer signals and service data.
  • Service knowledge retrieval
    Get relevant answers grounded in Dataverse and SharePoint knowledge, directly within Copilot.
  • Make data updates and initiate workflows
    Make updates to service records, add case notes and initiate workflows such as child case creation without leaving Copilot.
  • Cross-app continuity, shared history, and shared memory
    Move seamlessly between applications such as Teams, Outlook, and Dynamics 365 Customer Service while maintaining shared memory and chat history.

Figure 1: Getting answers from Dataverse and SharePoint in Copilot Service Workspace

Figure 2: Customer interactions summaries across Dataverse, Teams and Outlook in Microsoft 365 Copilot app

Built for enterprise requirements

Service Agent is designed with enterprise IT needs in mind:

  • Grounded in Microsoft 365 Copilot with enterprise‑grade security and compliance
  • Aligned with existing Dynamics 365 Customer Service investments
  • Extensible: support for additional skills, apps, and workflows will grow over time
  • Admin-friendly: builds on familiar Copilot and Dynamics management models

Service Agent acts as an intelligent layer on top of existing service systems, bringing the right information and actions to users, when and where they need them across the application ecosystem.

Getting started

Service Agent is now available in public preview, with several ongoing enhancements planned as we expand capabilities, performance, and extensibility.

To learn more:

Looking ahead

Service Agent is a significant step toward a future where Copilot is the primary way people engage with business systems—not just to ask questions, but to get work done.

We’re excited to partner with customers and IT leaders as we continue to evolve Service Agent and bring more service capabilities into Microsoft 365 Copilot.

Stay tuned for more.

The post Introducing Service Agent in Microsoft 365 Copilot appeared first on Microsoft Dynamics 365 Blog.

Support Parallel Processing for Archive Jobs in Dynamics 365 Finance and Operations
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/03/16/parallel-processing-archive-jobs/
Mon, 16 Mar 2026 19:09:35 +0000

We’re pleased to introduce a new capability for Dynamics 365 Finance and Operations archive with Dataverse long-term retention: parallel processing for archive jobs. This enhancement allows the Archive job scheduler to run multiple archive jobs at the same time, dramatically reducing the time required to archive high volumes of transaction data across legal entities. 

The challenge: Sequential bottlenecks 

Previously, archive jobs within the same scenario were processed sequentially. For organizations operating across dozens of legal entities—each with millions of transaction records in General Ledger, Sales Orders, or other scenarios—this approach created a bottleneck, and archiving could take days or even weeks to complete. 

For example, a multinational organization may have 50–200+ legal entities, each containing one fiscal year of General Ledger transactions. Archiving one legal entity’s data at a time delays storage optimization, increases SQL Server load, and slows the movement of data into long-term retention in Dataverse.

The solution: Use the job criteria key partition to enable parallel processing

Parallel processing introduces the Job Criteria Key—a partition identifier you set when you build the archive job contract. The job criteria key tells the archive job scheduler which archive jobs operate on independent data sets. This allows them to run simultaneously without conflict. 

How it works 

  1. Define the partition key — When you build the archive job contract, set the criteria key (typically the legal entity) that represents the data partition.
  2. The scheduler identifies parallel candidates — The archive job scheduler detects that jobs with different criteria keys target non-overlapping records.
  3. Jobs run concurrently — Rather than waiting in a queue, archive jobs for different partitions execute in parallel.

The zero-overlap guarantee 

The job criteria key depends on one critical invariant: when multiple archive jobs run within the same scenario, each job with a different job criteria key must process a completely distinct set of records with zero overlap. 

For example, if you run two Sales Order archive jobs at the same time—one with job criteria key “USMF” and another with “DEMF”—the records archived by the USMF job must not overlap with those archived by the DEMF job. This is why DataAreaId is a natural choice for many scenarios: it inherently partitions data by legal entity. 
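As an illustration of this invariant, the sketch below partitions hypothetical jobs by a criteria key, checks that no record is shared across partitions, and runs the partitions concurrently. The job shapes and record IDs are invented for the example; the real scheduler’s internals are not shown.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical archive jobs keyed by DataAreaId (legal entity).
jobs = [
    {"criteria_key": "USMF", "records": ["USMF-001", "USMF-002"]},
    {"criteria_key": "DEMF", "records": ["DEMF-001"]},
    {"criteria_key": "USMF", "records": ["USMF-003"]},
]

def check_zero_overlap(jobs):
    """Jobs with *different* criteria keys must touch disjoint record sets."""
    owner_by_record = {}
    for job in jobs:
        for rec in job["records"]:
            owner = owner_by_record.setdefault(rec, job["criteria_key"])
            if owner != job["criteria_key"]:
                raise ValueError(f"record {rec} is shared across partitions")
    return True

def run_archive(job):
    # Stand-in for the real archive operation on one partition.
    return (job["criteria_key"], len(job["records"]))

check_zero_overlap(jobs)  # raises before anything runs if partitions overlap
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_archive, jobs))
print(results)
```

Because each record belongs to exactly one criteria key, the concurrent runs cannot conflict, which is exactly why DataAreaId works well as the partition identifier.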

Monitoring parallel jobs 

You can monitor archive jobs running in parallel from the Archive with Dataverse long-term retention workspace in Dynamics 365 Finance and Operations. Each job shows its criteria key value, making it easy to confirm which partitions are being processed concurrently.

Join the private preview 

Parallel processing for archive jobs is currently available in private preview. If you’d like to try this capability in your environment, we’d be happy to have you participate. 

Submit your request to join the private preview 

By joining the preview, you’ll get early access to parallel archive job execution. You’ll also have an opportunity to provide feedback that helps shape the final release. The preview is open to all Dynamics 365 Finance and Operations customers and partners.

The post Support Parallel Processing for Archive Jobs in Dynamics 365 Finance and Operations appeared first on Microsoft Dynamics 365 Blog.

Building Smarter Observability for Agentic ERP World using Dynamics 365
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/02/27/building-smarter-observability-agentic-erp-dynamics-365/
Fri, 27 Feb 2026 16:19:17 +0000

As enterprise workloads become more agentic, the expectations of ERP systems—and the teams that operate them—are shifting. Batch jobs, workflow orchestration, data import/exports, and background processes are no longer “just” technical plumbing—they are critical pieces of the operational fabric. They deliver timely financial results, accurate supply chain data, and reliable business intelligence driving process optimization.

To support this shift, observability needs to evolve beyond simple logs and reactive troubleshooting. Observability needs to provide meaningful insights into execution behavior, performance patterns, and operational context. This ensures IT teams can run ERP with confidence and reliability. 

In Dynamics 365 ERP apps, we’ve long provided integration with Azure Application Insights to help organizations collect telemetry about user activity, failures, and application behavior. Now, with the expansion of batch telemetry signals — including start/stop events, failure data, throttling conditions, thread availability, and queue behavior — administrators and IT architects can gain deeper visibility into the health of critical batch-based workloads.  

Why Observability Matters Now 

ERP observability historically focused on basic monitoring: which jobs were running, whether a job failed, or whether alerts were triggered. These indicators are useful, but they lack operational context. Modern enterprise workloads are increasingly interconnected and automation-driven. Delays or failures in one workload can ripple outward, affecting downstream processes, reporting accuracy, and service delivery.

At the same time, teams are beginning to rely on AI agents to help monitor, diagnose, and in some cases suggest remediation steps. These tools need high-quality signals to be effective. 

Batch workloads are a prime example. Batch jobs directly impact business outcomes, from overnight posting to inventory sync and settlements. Without execution insights, teams guess at root causes and waste time on manual investigation.

What Batch Telemetry Brings to the Table 

The monitoring and telemetry capabilities in Dynamics 365 ERP enable customers to send application telemetry to Azure Application Insights for analysis and alerting. The recent expansion of telemetry signals for batch workloads builds on this foundation by adding behavioral data specifically for batch execution patterns. 

These signals include: 

  • Batch start and stop events to show how long jobs take to run, not just whether they completed. 
  • Failure information that correlates with info log entries and execution context. 
  • Throttling indicators that highlight contention due to system load. 
  • Thread availability data that helps reveal when jobs are waiting because capacity is constrained. 
  • Queue depth metrics that show the number of waiting tasks across the Priority Based Scheduling queues.

Emitting these signals into a customer-owned Application Insights resource means teams can apply their existing monitoring pipelines, dashboards, and alerting logic without changing how data is consumed. 

From Visibility to Insight 

Once batch telemetry data flows into Application Insights, teams can query it using Kusto Query Language (KQL) and build dashboards that correlate workload behavior with other operational metrics.  
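As a sketch of the kind of aggregation such a KQL query performs, the example below applies the same logic to a few in-memory sample records in Python. The event field names and the congestion threshold are illustrative assumptions, not the actual telemetry schema.

```python
from collections import defaultdict
from statistics import mean

# Illustrative batch telemetry events (field names are assumptions).
events = [
    {"job": "LedgerPost", "queue": "High", "queue_depth": 12, "duration_s": 340},
    {"job": "LedgerPost", "queue": "High", "queue_depth": 48, "duration_s": 910},
    {"job": "InventSync", "queue": "Normal", "queue_depth": 5, "duration_s": 120},
    {"job": "InventSync", "queue": "Normal", "queue_depth": 7, "duration_s": 135},
]

# Roughly: events | summarize avg(duration_s), max(queue_depth) by job
by_job = defaultdict(list)
for e in events:
    by_job[e["job"]].append(e)

summary = {
    job: {
        "avg_duration_s": mean(e["duration_s"] for e in rows),
        "max_queue_depth": max(e["queue_depth"] for e in rows),
    }
    for job, rows in by_job.items()
}

# Flag jobs whose queue depth suggests congestion before it breaches SLAs.
congested = [job for job, s in summary.items() if s["max_queue_depth"] > 20]
print(summary)
print(congested)
```

In Application Insights the equivalent query would run over the live telemetry stream, and the flagged jobs could drive an alert rule rather than a printed list.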

This richer observability enables several practical outcomes: 

  • Faster investigation of execution behavior without sifting through logs. 
  • Trend analysis to detect regressions or capacity bottlenecks before they impact business cycles. 
  • More informed capacity planning based on actual observed patterns. 
  • Alignment of SLA expectations with real operational performance. 

Here are some real‑world business scenarios that show how telemetry insights are helping customers troubleshoot issues and resolve problems faster. 

A global consumer goods company frequently sees high-priority jobs completing late. Batch queue telemetry exposes queue congestion and thread exhaustion, showing when non-critical tasks bury priority workloads.

This telemetry surfaces when priority-based scheduling queues build up and delay time-sensitive workloads, while also revealing misconfigured priorities that cause jobs to be processed out of order. It further enables teams to closely monitor queue health during cutover or high-load events, ensuring critical workloads flow smoothly.

Similarly, a finance team’s bank reconciliation jobs remain “Waiting” for long periods. Thread telemetry reveals thread starvation: jobs were queued, but threads were fully consumed.

This data helps explain why jobs remain stuck in a “Waiting” state by revealing when thread capacity is fully consumed by parallel workloads. It also highlights thread saturation patterns, enabling teams to right-size AOS batch capacity for smoother, more predictable processing.

A Foundation for Intelligent Operations 

The expanded telemetry signals are not just a diagnostic tool. They serve as a foundation for smarter operations in an era where agents play an increasing role. High-fidelity batch telemetry enables experiences like:

  • Automated detection of anomalies based on execution baselines. 
  • Correlation of workload performance with business-critical thresholds. 
  • Enhanced alerts that tie operational conditions to business impact. 

By making execution behavior more observable and actionable, Dynamics 365 ERP helps teams focus on outcomes, not just symptoms. 

Getting Started 

If you haven’t already configured monitoring and telemetry for your environment, the first step is to integrate your Dynamics 365 ERP instance with Azure Application Insights; refer to Monitoring and telemetry overview – Finance & Operations | Dynamics 365 | Microsoft Learn.

Once telemetry is configured, expanded batch signals can be toggled on from within system administration and begin flowing to your Application Insights pipeline for analysis.  

Rich observability is a core requirement for running modern ERP workloads, especially as organizations adopt more automation and begin exploring agent-assisted operational tooling. By bringing deeper insight into batch execution behavior, our ERP portfolio apps in Dynamics 365 help IT teams move from reactive troubleshooting toward proactive reliability and informed decision-making.  

For more details, visit Available telemetry – Finance & Operations | Dynamics 365 | Microsoft Learn.

The post Building Smarter Observability for Agentic ERP World using Dynamics 365  appeared first on Microsoft Dynamics 365 Blog.

Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score  http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/02/04/multimodal-agent-score/ Wed, 04 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/dynamics-365/blog/?p=200008 Introducing the Multimodal Agent Score (MAS)—a unified, absolute measure of end‑to‑end conversational quality designed for AI agents operating across modalities. MAS is grounded in a simple observation: every service interaction, whether handled by a human or an AI agent, progresses through three fundamental stages. First, the agent must understand the input, accurately interpreting content, intent, and contextual signals. Next, it must reason over that input, determining the correct actions, maintaining conversational continuity, and resolving ambiguity responsibly. Finally, the agent must respond effectively, delivering clear, natural, and confident communication in the appropriate tone and format.

The post Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score  appeared first on Microsoft Dynamics 365 Blog.


As self-service becomes the first stop in contact centers, AI agents now define the frontline customer experience. Modern customer interactions span voice, text, and visual channels, where meaning is shaped not only by what is said, but by how it’s said, when it’s said, and the context surrounding it.   

In customer service, this is even more pronounced: customers reaching out for support don’t just convey information. They convey intent, sentiment, urgency, and emotion, often simultaneously across modalities. A pause or interruption on a voice call signals frustration, a blurred document image leads to downstream reasoning failures, and a flat or fragmented response erodes trust, even if the answer is correct. In our previous blog post, we reflected on the evolution of contact centers from scripted interactions to AI-driven experiences. As the contact center landscape continues to change, the way we evaluate AI agents must change with it. Traditional approaches fall short by focusing on isolated metrics or single modalities, rather than the end-to-end customer experience. 

Contact centers struggle to reliably assess whether their AI agents are improving over time or across architectures, channels, and deployments. While cloud services rely on absolute measures like availability, reliability, and latency, AI agent evaluation today remains fragmented, relative, and modality-specific. What would be useful is an absolute, normalized measure of end-to-end conversational quality, one that reflects how customers actually experience interactions and answers the fundamental question: Is this agent good at handling real customer conversations? 

Introducing the Multimodal Agent Score (MAS) 

MAS is built on the observation that every service interaction, whether human-to-human or human-to-agent, naturally progresses through three fundamental stages (explored in more detail here: Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center):

  1. Understanding the input – accurately capturing and interpreting what the customer is saying, including intent, context, and signals such as urgency or emotion. 
  2. Reasoning over that input – determining the appropriate actions, managing context across turns, and deciding how to resolve the issue responsibly. 
  3. Responding effectively – delivering clear, natural, and confident resolution in the right tone and format. 

Multimodal Agent Score directly mirrors these stages. It is a weighted composite score (0–100) designed to assess end-to-end AI agent quality across modalities (voice, text, and visual), aligned to how real conversations naturally unfold.  

MAS Dimensions and Parameters 

Conversation Stage | MAS Quality Dimension | What It Measures | Example Parameters
Understanding | Agent Understanding Quality (AUQ) | How well the agent hears and understands the user (e.g., latency, interruptions, speech recognition accuracy) | Intent determination, interruption, missed window
Reasoning | Agent Reasoning Quality (ARQ) | How well the agent interprets intent and resolves the user’s request | Intent resolution, acknowledgement
Response | Agent Response Quality (AReQ) | How well the agent responds, including tone, sentiment, and expressiveness | CSAT, tone stability

Computing the MAS score:

MAS is computed as a weighted aggregation of the three quality dimensions listed in the table above. 

where: 

  • Qj represents one of the three quality dimensions: Agent Understanding Quality (AUQ), Agent Reasoning Quality (ARQ), and Agent Response Quality (AReQ)  
  • wj represents the cost or weight assigned to each dimension 
  • αj captures the a priori probability of the respective dimension  
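The formula itself appears to have been dropped from this excerpt (it was likely rendered as an image). Given the definitions above, one consistent normalized weighted-sum form, offered only as a sketch rather than the published definition, is:

$$\mathrm{MAS} = 100 \cdot \frac{\sum_{j=1}^{3} \alpha_j\, w_j\, Q_j}{\sum_{j=1}^{3} \alpha_j\, w_j}$$

where each $Q_j \in [0,1]$, so MAS lands on the 0–100 scale described earlier.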

Computing each MAS dimension: 

Computing each MAS dimension (AUQ, ARQ, AReQ) involves aggregating underlying parameters into a single weighted score. Raw measurements (such as interruption, intent determination, or tone stability) are first normalized into a 0–1 score before aggregating them at the dimension level. We apply a linear normalization function clipping each raw measurement at predefined thresholds suitable for the parameter being measured (for example, maximum allowed interruption or minimum required accuracy). This maintains the sensitivity of each parameter in the relevant effective range and avoids the negative impact of measurement outliers, making MAS an absolute measure of agent quality. 
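The clip-and-scale step described above can be sketched as follows. This is a minimal illustration: the threshold values, weights, and parameter names are assumptions, not the production configuration.

```python
# Hedged sketch of linear normalization with clipping, plus a weighted
# roll-up of normalized parameters into one MAS dimension score.
def normalize(raw, lo, hi, higher_is_better=True):
    """Clip raw measurement to [lo, hi], then scale linearly into [0, 1]."""
    clipped = max(lo, min(hi, raw))
    score = (clipped - lo) / (hi - lo)
    return score if higher_is_better else 1.0 - score

def dimension_score(weighted_params):
    """Weighted average of (normalized_score, weight) pairs."""
    total = sum(w for _, w in weighted_params)
    return sum(s * w for s, w in weighted_params) / total

# Agent Understanding Quality from two illustrative parameters:
interruption = normalize(0.045, 0.0, 0.2, higher_is_better=False)  # rate: lower is better
asr_accuracy = normalize(0.97, 0.70, 1.0)                          # accuracy: higher is better
auq = dimension_score([(interruption, 0.4), (asr_accuracy, 0.6)])
print(round(auq, 3))  # -> 0.85
```

Clipping at the thresholds is what keeps a single outlier measurement from dragging the dimension score outside its effective range.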

MAS in Practice: Voice Agent Evaluation Example 

To ground MAS in real-world conditions, we evaluated ~2,000 synthetic voice conversations across two agent configurations using identical prompts and scenarios: 

  • Agent-1: Chained voice agent using a three-stage ASR–LLM–TTS pipeline 
  • Agent-2: Real-time voice agent using direct speech-to-speech architecture  

The evaluation dataset included noise, interruptions, accessibility effects, and vocal variability to simulate production environments.  

Shown below is a comparison of core MAS metrics, including dimension-level scores and the overall MAS score. 

Voice Evaluation Results (Excerpt) 

Dimension | Parameter | Agent-1 | Agent-2
AUQ | Interruption rate (%) | 0.045 | 0.025
AUQ | Missed response windows | 0.00045 | 0.0015
ARQ | Intent resolution | 0.13 | 0.08
ARQ | Acknowledgement quality | 0.08 | 0.10
AReQ | CSAT | 0.128 | 0.126
AReQ | Tone stability | 0.16 | 0.14

Key Observations  

MAS provides flexibility to surface quality insights at an aggregate level, while enabling deeper analysis at the individual parameter level. To better understand performance outliers and anomalous behaviors, we went beyond composite scores and analyzed agent quality at the individual parameter level. This deeper inspection allowed us to attribute observed degradations to specific factors: 

  1. Channel quality matters: Communication channels introduce multiple challenges, such as latency, interruptions, compression, and loss of information, penalizing recognition and response quality. 
  2. Turn-taking quality is critical: Missed windows and interruptions strongly correlate with abandonment. 
  3. Tone and coherence matter: Cleaner audio and uninterrupted responses lead to higher acknowledgement and perceived empathy. 
  4. MAS reveals root causes: Differences in scores clearly distinguish understanding, reasoning, and response failures, something single metrics cannot do. 

Looking Forward 

We will continue to refine and evolve MAS as we validate it against real-world deployments and business outcomes. As the Dynamics 365 Contact Center team, we aim to establish MAS as our quality benchmark for evaluating AI agents across channels. Over time, we also intend to make MAS broadly available, extensible, and pluggable, enabling organizations to adapt it to evaluate their own contact center agents across modalities. For readers interested in the underlying methodology and mathematical foundations, a detailed research paper will be published separately. 


Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center  http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/02/04/ai-agent-performance-measurement/ Wed, 04 Feb 2026 17:00:00 +0000 The contact center industry is at an inflection point. AI agent performance measurement is becoming essential as contact centers shift toward autonomous resolution. Gartner predicts that by 2029, AI agents will autonomously resolve 80% of common customer service issues.

The post Measuring What Matters: Redefining Excellence for AI Agents in the Contact Center  appeared first on Microsoft Dynamics 365 Blog.


The contact center industry is at an inflection point. AI agent performance measurement is becoming essential as contact centers shift toward autonomous resolution. Gartner predicts that by 2029, AI agents will autonomously resolve 80% of common customer service issues. Yet, despite massive investment in conversational AI, most organizations lack a coherent way to measure whether their AI agents are good. Traditional metrics like AHT, CSAT, and others are important for tracking business results. However, they are trailing signals and don’t tell you whether an AI agent is competent, reliable, or, most importantly, improving.

This isn’t just a technical problem. It’s a business problem. Without rigorous measurement, companies can’t improve their agents, can’t demonstrate ROI, and can’t confidently deploy AI to handle their most valuable customer interactions. 

What Makes a Great Customer Service Agent? 

In 2017, Harvard Business Review published research that challenged everything the industry believed about customer service excellence. The study, based on data from over 1,400 service representatives and 100,000 customers worldwide, revealed a truth that goes against many support manuals: customers don’t want to be pampered during support interactions. They just want their problems solved with minimal effort and maximum speed. This research also highlights why strong AI agent performance measurement is required to benchmark these behavioral models.

The research team identified seven distinct personality profiles among customer service representatives. Two profiles stand out as particularly instructive for understanding AI agent design: 

Empathizers are the agents most managers would prefer to hire. They are natural listeners who prioritize emotional connection. They validate customer feelings, express genuine concern, and focus on making customers feel heard. When a frustrated customer calls about a billing error, an Empathizer responds with warmth: “I completely understand how frustrating that must be. Let me look into this for you and make sure we get it sorted out.” Empathizers excel at building rapport and defusing tension. Managers love them: 42% of surveyed managers said they’d preferentially hire this profile. 

Controllers take a fundamentally different approach. They’re direct, confident problem-solvers who take charge of interactions. Rather than asking customers what they’d like to do, Controllers tell them what they should do. When that same frustrated customer calls about a billing error, a Controller responds differently. “I see the problem. There’s a duplicate charge from October 15th. I’m removing it now and crediting your account. You’ll see the adjustment within 24 hours. Is there anything else I can help you fix today? ” Controllers are decisive, prescriptive, and focused on the fastest path to resolution. 

Here’s what the HBR research revealed: Controllers dramatically outperform Empathizers on virtually every quality metric that matters: customer satisfaction, first-contact resolution, and especially customer effort scores. Yet only 2% of managers said they’d preferentially hire Controllers. This does not eliminate the need for empathetic agents but clarifies that empathy is necessary but not enough. 

This insight becomes even more important when we consider the context of modern customer service. Nearly a decade of investment in self-service technology means that by the time a customer reaches a human or an AI agent, they’ve already tried to solve the problem themselves. They’ve searched for the FAQ, attempted the chatbot, maybe even watched a YouTube tutorial. They’re not calling because they want to chat. They’re calling because they’re stuck, frustrated, and need someone to take charge and fix their problem. 

The HBR research quantified this: 96% of customers who have a low-effort service experience intend to repurchase from that company, directly translating into higher retention and recurring revenue. For high-effort experiences, that number drops to just 9%. Customer effort is four times more predictive of disloyalty than customer satisfaction. 

The AI Advantage: Dynamic Persona Adaptation 

Human agents are who they are. An Empathizer can learn Controller techniques, but their natural instincts will always pull toward emotional validation. A Controller can practice active listening, but they’ll always be most comfortable cutting to the chase. Training can shift behavior at the margins, but fundamental personality is remarkably stable. 

AI agents can learn from the best human agents and adapt their style in real time based on conversation context. A well-designed agent can operate in Controller mode for straightforward technical issues- direct and prescriptive-and shift to Empathizer mode when a customer shares difficult news. It adapts mid-conversation based on sentiment, issue complexity, and customer preferences. 

This isn’t about mimicking personality types. It’s about dynamically deploying the right approach for each moment of each interaction. The best AI agents don’t choose between being helpful and being efficient. They recognize that true helpfulness often means being efficient. They adapt their communication style to what each customer needs in each moment. 

But this flexibility compounds the fundamental measurement challenge for evaluating both human and AI agents. There is no single “best” conversation. All interactions are highly dynamic with no fixed reference for comparison, and the most important business metrics are trailing and hard to attribute at the conversation or agent level. As a result, no single metric can capture this complexity. We need a framework that evaluates agent capabilities across contexts. 

Defining Excellence: What the Best AI Agents Achieve 

Before introducing a measurement framework, let’s establish benchmarks that define world-class performance. 

First-Contact Resolution (FCR) measures whether the customer’s issue was fully resolved without requiring a callback, transfer, or follow-up. Industry average sits around 70-75%.  This matters because FCR correlates directly with customer satisfaction: centers with high FCR see 30% higher satisfaction scores than those struggling with repeat contacts. 

Customer Satisfaction (CSAT) captures how customers feel about their interaction. The industry average, measured via post-call surveys, hovers around 78%. World-class performance means 85% or higher. Top performers in 2025 are pushing toward 90%. 

Response Latency is particularly critical for voice AI. Human conversation has a natural rhythm, roughly 500 milliseconds between when one person stops speaking and another responds. AI agents that exceed this threshold feel unnatural. Research shows that customers hang up 40% more frequently when voice agents take longer than one second to respond. The target for production voice AI is 800 milliseconds or less, with leading implementations achieving sub-500ms latency. 

Average Handle Time (AHT) varies significantly by industry. Financial services averages 6-8 minutes, healthcare 8-12 minutes, technical support 12-18 minutes. The key insight is that AHT should be minimized without sacrificing resolution quality. Fast and wrong is worse than slow and right, but fast and right is the goal. 

These benchmarks provide targets, but they are trailing signals and don’t tell us how to build agents that achieve them. For that, we need to understand the three pillars of agent quality. 

The Three Pillars: Understand, Reason, Respond 

Every customer interaction, whether with a human or an AI, follows the same fundamental structure. The agent must understand what the customer is saying, reason about how to help, and deliver an effective answer. The key is that any weakness in any pillar undermines the entire interaction. LLM benchmarks are fragmented and do not provide a holistic and focused view into contact center scenarios. 

Pillar One: Understand 

The first challenge is accurately capturing and interpreting customer input. For voice agents, this means speech recognition that works in real-world conditions: background noise, accents, interruptions, and domain-specific terminology. For video or images, it means visual understanding that handles varying noise, object occlusion, and context-dependent interpretation. Classic benchmarks are misleading here. Models achieving 95% accuracy on clean test data often fall to 70% or below in production environments with crying babies, barking dogs, and customers calling from their cars. Interruptions and system latency are further challenges that degrade understanding quality. 

Beyond transcription, understanding requires intent determination. When a customer says, “I’m calling about my order. I think it was delivered to the wrong address,” the agent needs to identify both the topic (order delivery) and the specific issue (wrong address). It needs to detect that this is a complaint requiring resolution, not just an informational query. And ideally, it should pick up on emotional cues (frustration, urgency, confusion), all of which should influence how it responds. 

Key metrics for this pillar include word error rate for transcription accuracy, intent recognition precision and recall, and latency from when the customer stops speaking to when the agent begins responding. Interruption rates also matter. Agents that talk over customers while they’re still speaking destroy the conversational experience. 
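Word error rate, the first of these metrics, is conventionally computed as the word-level edit distance divided by the reference length. A minimal self-contained sketch:

```python
# Word error rate (WER) via Levenshtein distance over word tokens.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("my package was delivered to the wrong address",
          "my package was delivered to the long address"))  # 1/8 = 0.125
```

Production ASR scoring normalizes casing and punctuation first, but the core computation is the same.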

Pillar Two: Reason 

Understanding what the customer said is only the beginning. The agent must then determine the right course of action. This is where “intelligence” in artificial intelligence matters. 

Effective reasoning means connecting customer intent to appropriate actions. If the customer needs their address changed, the agent should access the order management system, verify customer identity, make the change, and confirm success. If the issue is more complex (say, the package was marked delivered but never arrived), the agent needs to pull tracking information, assess whether this looks like misdelivery, determine whether a replacement or refund is appropriate, and potentially flag the case for investigation. 

This pillar also encompasses multi-turn context management. Customers don’t speak in complete, self-contained utterances. They reference previous statements, use pronouns, and assume the agent is tracking the conversation. “What about my other order?” only makes sense if the agent remembers discussing a first order. “Can you do that for my husband’s account too?” requires understanding what “that” refers to and what permissions are appropriate. 

Perhaps most critically, reasoning quality includes knowing what the agent doesn’t know. A well-designed agent admits uncertainty rather than fabricating answers. This is particularly challenging in the LLM era, where models are trained to produce an answer no matter what. There are two parts to the problem. First, the agent should reason and ask for additional data; in truly autonomous agents, such interactions should go beyond slot filling or a scripted interview and be dynamic, adaptive, and contextual. Second, when the agent is stuck, it should admit that and either ask a supervisor for help or simply escalate. In any case, responsible AI guardrails and validations are key to ensuring proper agent responses and guarded interactions.  

Key metrics include intent resolution rate, task completion rate, context retention across turns, and hallucination frequency. 

Pillar Three: Respond 

The final pillar is delivering the response effectively. Even perfect understanding and flawless reasoning mean nothing if the agent can’t communicate the resolution clearly. 

Answer quality encompasses both content and delivery. The content must be accurate, complete, and actionable. Customers shouldn’t need to ask follow-up questions because the agent omitted critical information. They shouldn’t be confused by jargon or ambiguous phrasing. 

In a multi-channel, multi-modal agent world, AI agents must adapt how they deliver responses based on the channel and context. Effective delivery is about aligning the form, timing, and tone of responses to the interaction at hand. Emotional quotient matters regardless of modality: when the tone, voice, or interaction feels mechanical, even correct content can lose its impact and undermine trust. Across channels, the objective remains consistent: ensure responses feel natural, clear, and trustworthy from the customer’s perspective. 

The Controller research is relevant here. The best responses are often more direct than traditional customer service training suggests. Instead of “I’d be happy to help you with that. Let me take a look at your account and see what options might be available for addressing this situation,” top performers say “I see the problem. Here’s what I’m doing to fix it.” 

Key metrics include solution accuracy, response completeness, fluency ratings, and post-response customer sentiment. For voice, prosody and expressiveness scores capture delivery quality. 

To build AI agents that customers truly trust, organizations must move beyond fragmented metrics and isolated KPIs. Excellence in customer service is not the result of a single capability. It emerges from how well an agent performs across the three pillars. These pillars form the foundation of modern AI agent performance measurement.

A Composite Score as Unified Measure  

We believe the future of AI agent evaluation lies in a composite approach, one that brings together these core capabilities into a unified measure of quality. No single metric can tell you whether an AI agent truly works well with real customers. Individual measures tend to over-optimize narrow behaviors while hiding the trade-offs between speed, accuracy, reasoning quality, and customer experience.  
 

A composite score solves this problem by balancing multiple dimensions into one holistic view of agent performance. This approach reveals strengths and weaknesses at the system level rather than through isolated signals. Most importantly, a unified score enables consistent benchmarking and clearer progress tracking. It gives both executives and practitioners a metric they can confidently use to drive improvement. 

We are introducing a contact center evaluation guideline and a set of metrics designed to holistically assess AI agent performance across the dimensions that matter most in real customer interactions. Rather than optimizing isolated signals, this approach evaluates how effectively an agent understands customer intent, reasons through the problem space, and delivers clear, confident, and timely resolutions. 

These guidelines are intended to provide a practical foundation for teams building, deploying, and scaling AI agents in production. They enable consistent measurement, meaningful comparison, and continuous improvement over time.  

This framework is intended to be open and evaluable by anyone. For a deeper dive into the evaluation framework, recommended metrics, and examples of how this can be applied in practice, please refer to the detailed blog: Evaluating AI Agents in Contact Centers: Introducing the Multi-modal Agents Score 


From manual work to meaningful selling: How Agentic AI is transforming Dynamics 365 Sales  http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2026/01/28/agentic-ai-transforming-dynamics-365-sales/ Wed, 28 Jan 2026 15:06:27 +0000 Agentic AI in Dynamics 365 Sales reduces manual CRM work by turning unstructured information into actionable insights, helping sellers capture data faster, explore pipeline trends with natural language, and focus more on meaningful selling.

The post From manual work to meaningful selling: How Agentic AI is transforming Dynamics 365 Sales  appeared first on Microsoft Dynamics 365 Blog.


Every seller knows how much time gets lost between selling moments. Information arrives in many forms—emails, screenshots, documents, handwritten notes—and turning that into structured CRM data often means manual copying, rework, or skipped fields altogether. At the same time, answering everyday questions like “Which leads should I follow up on?” or “How is my pipeline shaping up right now?” can require complex filters, multiple views, or exporting data just to get a clear answer.

Dynamics 365 Sales is evolving to address these challenges with agentic assistance. Instead of sellers adapting to rigid forms, grids, and filters, agentic AI in Dynamics 365 Sales now adapts to how sellers naturally work—by understanding unstructured inputs, interpreting intent, and assisting directly at the point of action. Two purpose-built agents bring this to life:

  • A Data Entry Agent that uses LLMs to understand pasted content and uploaded files, extract relevant details, and quickly populate CRM forms for faster lead and contact creation.
  • A Data Exploration Agent that helps sellers quickly understand trends across opportunities, leads, or accounts by turning natural language questions into filtered views and visual insights.

Together, these agents reduce two of the biggest productivity drains in sales—manual data entry and cumbersome data exploration—so sellers can spend less time managing CRM and more time engaging customers.

Let’s look at how these experiences use agentic AI in Dynamics 365 in real sales scenarios:

Capture sales data faster with the Data Entry Agent
Accurate customer data is critical, but sellers encounter information in many forms—emails, websites, documents, and business cards. The Data Entry Agent uses large language models to understand unstructured text and files, infer intent, and map extracted details to the right CRM fields, without requiring sellers to manually interpret or retype information.

Capture Lead and Contact details instantly with Smart Paste

When a seller receives an inbound email from a prospect, creating a lead often means manually copying names, email addresses, phone numbers, and company details into CRM. For example, a prospect may write:

You want to respond quickly, but first you need to log the lead.

With Smart Paste (Preview), sellers can copy the email content and navigate to the lead or contact form. The system analyzes the copied text, extracts key details such as name, company, email, and phone number, and suggests values inline for the relevant fields. Each suggestion includes an inline citation from the email, so sellers can clearly see the source of the information.

Sellers can review AI-generated field suggestions, view citations, accept what looks right, and save—enabling faster lead capture with greater confidence in data accuracy.

Similarly, a seller may be reviewing a prospect’s website or LinkedIn profile in separate tabs. Instead of manually re-entering details later, they can copy text from the company’s About Us page or the prospect’s LinkedIn profile and paste it directly into a CRM form. The agent analyzes the content and suggests values such as industry, company name, location, and job title, allowing the seller to review and apply the information immediately while the context is still fresh.

Convert Physical Documents into CRM Records with Files (Preview)

After trade shows, conferences, or in-person meetings, sellers often return with a stack of business cards or documents from dozens of conversations. Manually transcribing this information delays follow-up and increases the chance of errors.

With Files (Preview), sellers can upload images of business cards or documents such as .txt, .docx, .csv, .pdf, .png, .jpg, .jpeg, or .bmp directly into the form. The system analyzes the uploaded files and suggests values for relevant fields, including names, titles, company details, email addresses, and phone numbers. Sellers simply review and confirm the suggestions, turning what once took hours into minutes.

This enables faster post-event follow-up and more complete lead and contact records.

Find and understand sales data faster with the Data Exploration agent

Finding the right records and understanding trends is essential for sellers, but navigating views and filters can be time-consuming. Powered by natural language understanding, the Data Exploration Agent (Preview) translates seller questions into structured filters, letting users interact with CRM data in plain language instead of complex query logic. This makes it easier to plan, prioritize, and understand pipeline health directly within their views.

Find the right records faster using Natural Language in Views

Filtering records in CRM can be time-consuming, especially when multiple criteria are involved. Imagine planning your day and opening My Open Leads to focus on recent campaign responses. Instead of building complex filters, you simply type: “Leads from the Summer Campaign created last month.”

Or, when preparing for a forecast call, you search: “Opportunities from Technology accounts closing next quarter.”

The system interprets the request and automatically applies the appropriate filters to the view. Sellers can review and modify the filters if needed, giving them both speed and control. This simplifies daily planning, follow-ups, and pipeline reviews.
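Under the hood, this kind of request-to-filter translation maps phrases onto structured criteria. The toy sketch below uses hand-written rules purely for illustration; the actual feature uses natural language understanding, and the field names here are hypothetical:

```python
# Toy sketch (NOT the Dynamics 365 implementation): mapping a natural-
# language request onto a structured filter over lead records.
import re
from datetime import date, timedelta

def parse_query(text):
    """Recognize a couple of illustrative phrases and emit filter criteria."""
    filters = {}
    m = re.search(r"from the (\w+) campaign", text, re.IGNORECASE)
    if m:
        filters["campaign"] = m.group(1)
    if "created last month" in text.lower():
        first_of_this_month = date.today().replace(day=1)
        last_month_end = first_of_this_month - timedelta(days=1)
        filters["created_on_or_after"] = last_month_end.replace(day=1)
        filters["created_before"] = first_of_this_month
    return filters

print(parse_query("Leads from the Summer Campaign created last month"))
```

A production system would hand the parse to a language model and validate the result against the entity’s schema, which is why sellers can still review and adjust the applied filters.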

Understanding trends often requires more than scanning rows of data, but building dashboards or exporting reports isn’t practical for day-to-day sales work. With Visualize (Preview), sellers can turn the filtered data they’re already viewing into interactive charts with a single click—directly within the view and without breaking their flow.

Because the visualization is generated from the current view and visible columns, it automatically reflects the exact filters, segments, and scope the seller is working with. Sellers can hover to see detailed values, drill into specific segments, and switch chart types on the fly as new questions come up. This makes it easy to answer questions like “Where are most of my open opportunities concentrated?”, “Which lead sources are driving volume right now?”, or “How is my pipeline distributed across stages?”

Visualize is designed for quick, in-the-moment understanding, not deep reporting. It complements Power BI by giving sellers immediate visual insight at the point of work—without creating reports, navigating dashboards, or leaving CRM—so they can recognize patterns and act faster while staying in flow.

Enable these agentic capabilities in Power Platform Admin Center

  • To enable Data Entry agent capabilities, go to Power Platform Admin Center > Settings > Product > Features. Under AI form fill assistance, turn on:
    • Automatic suggestions
    • Smart paste and file suggestions
    • Form fill assist toolbar
    Changes apply to model-driven apps once saved.
  • To enable Data Exploration agent capabilities, go to Power Platform Admin Center > Settings > Product > Features.
    • Under Natural language grid and view search, set Enable this feature for to All users immediately.
    • Turn on Allow AI to generate charts to visualize the data in a view, and enable AI-generated chart styling for a consistent visual experience.

Focus More on Selling, Less on Administration

With agentic AI in Dynamics 365 Sales, the platform evolves from a system of record into a system that understands, assists, and adapts—helping sellers spend more time selling and less time managing CRM.


Sales Qualification Agent: How we evaluated and improved AI quality with benchmarks http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2025/12/11/sales-qualification-agent-benchmarks/ Thu, 11 Dec 2025 16:00:00 +0000


The Sales Qualification Agent (SQA) in Dynamics 365 Sales introduces a new class of autonomous sales AI, one that does far more than assist with drafting or summarization. SQA performs multi-step reasoning, conducts live web research, generates personalized outreach, and engages prospects in multi-turn qualification conversations. These capabilities directly shape pipeline quality, seller productivity, and customer relationships. 

As agentic AI becomes deeply embedded in revenue-critical workflows, trust must be earned through transparent, repeatable, and rigorous evaluation—not anecdotal wins or point demos.

Today, we’re announcing the Microsoft Sales Bench—a collection of evaluation benchmarks designed to assess the performance of AI-powered sales agents across real-world scenarios. Building on the Sales Research Bench, already published as part of this collection to evaluate the Sales Research Agent, we are now also publishing the Sales Qualification Bench to evaluate the Sales Qualification Agent in Dynamics 365 Sales.

This post presents the detailed evaluation methodology and results for the agent, including a head-to-head comparison against ChatGPT using identical data, tasks, and scoring rubrics. These efforts establish the first benchmark purpose-built to measure end-to-end sales agent workflows, from research to outreach to live qualification. 

SQA Architecture  

The Dynamics 365 Sales Qualification Agent (SQA) architecture is designed as an end-to-end, enterprise-grade AI system that autonomously researches leads, synthesizes insights, and generates seller-ready outreach. It combines an intelligence engine powered by large language models with iterative web and enterprise data research, tightly integrated with Dynamics 365 Sales and Microsoft Copilot Studio for orchestration. Built on secure enterprise foundations, the architecture enforces governance, compliance, and data protection while enabling scalable, trustworthy AI-driven sales workflows. 

Evaluation Metrics and Methodology 

To understand how well the Sales Qualification Agent (SQA) performs in real-world sales qualification workflows, we designed the Sales Qualification Bench, a comprehensive evaluation that mirrors how sellers actually research leads, personalize outreach, and engage with prospects. Our goal was straightforward: measure whether SQA can help reps qualify faster, personalize more effectively, and carry higher-quality customer conversations—using the same signals and information they rely on every day. 

To ensure that the evaluations accurately represent real-world conditions, we developed a testbed that closely mirrors the complexity and ambiguity found in contemporary sales environments. This allowed us to evaluate SQA end to end, from autonomous research and reasoning to grounded, actionable research briefs, outreach messages, and multi-turn qualification conversations. 

Evaluation Setup

To ensure real-world fidelity, we constructed a production-like lead evaluation environment that mirrors how SQA operates in Dynamics 365 Sales. 

Lead and Data Corpus 
  • Three synthetic but realistic seller companies (C1) across distinct industries, with unique: 
    • Product offerings 
    • Knowledge sources 
    • Ideal customer profiles 
  • 300+ lead dataset (C2) expanded into a scenario-rich corpus: 
    • Companies across 6 global regions (North America, Europe, Asia, South America, Australia, Africa) 
    • 33 industries 
    • Mixed clarity (well-known brands and long-tail companies) 
    • Structured attributes (name, role, email) 
  • CRM roles represented
    • Sales representatives 
    • Digital specialists 
    • Customer success managers 
    • Each linked to relevant accounts, opportunities, and cases 
  • Company segment coverage
    • Enterprise 
    • Mid-Market 
    • Small Business 
    • Government 
    • Education 
  • 500+ email exchanges simulating real sales interactions: 
    • Technical product questions 
    • Meeting requests 
    • Ambiguous or low-intent inquiries 
Simulated Agent Workflows 

All evaluations reflected real SQA behavior: 

  • Autonomous web-based research 
  • Role-aware outreach generation 
  • Multi-turn qualification conversation handling 
Tasks Evaluated and Evaluation Metrics 
1. Company Research 

For each lead, the agent generates a structured research brief including: 

  • Business overview, strategy and priorities 
  • Financial signals 
  • Recent news relevant to the seller 
Metric Definition 
  • Recency: How recent time-sensitive insights are relative to the current date (older insights are less useful for sellers).
  • Relevance & Solution Fit: How well the insights are tied back to the seller’s offerings (relevant insights are more actionable than a regurgitation of facts) and articulate the lead company’s need or interest in them.
  • Completeness: How well the insights capture all the facts that are useful to a seller.
  • Reliability: How consistently the agent finds useful insights for the seller (e.g., strategic priorities return current strategic priorities rather than generic mission statements; news returns news articles rather than generic evergreen statements about a company).
  • Credibility: How reputable the sources referenced by the agent are.
2. Lead Outreach 

Based on its research, the agent generated a personalized email aligned to: 

  • The lead’s role 
  • The seller’s value proposition 
  • The company’s business context 
  • Value-based positioning 
     
Metric Definition 
  • Clarity: Assesses how clear, precise, and jargon-free the message is, ensuring every sentence adds value.
  • Personalization: Measures how well the email is tailored to the specific target company, using concrete company-level details rather than generic industry language.
  • News-anchored opening: Checks whether the email references recent company events or updates, ensuring the outreach feels timely and current.
  • Relevance and Solution Fit: Measures how well the insights are tied back to the seller’s offerings/solutions (relevant insights are more actionable than a regurgitation of facts) and articulate the lead company’s need or interest in them.
  • Structure: Evaluates whether the email has a clear logical flow from opening hook to problem, solution, and call to action.
3. Qualification Conversations (Engage) 

The agent then autonomously engages back and forth with the lead, progressively asking questions against customer-configured qualification criteria such as budget, need, and timeline, and answering the lead’s questions such as: 

  • “What does your solution do?” 
  • “How are you priced?” 
  • “How do you compare to competitors?” 
  • “Who else uses this?” 
Metric Definition 
  • Answer Quality: Assesses whether the agent provides clear, relevant, and complete answers that directly address the customer’s intent.
  • Agent Comprehension: Evaluates how well the agent understands customer intent, prioritizes requests, and adapts tone and strategy based on the user’s response.
  • Answer Readability: Checks that responses are natural, professional, easy to read, and fully compliant with formatting and content rules.
  • Human Handoff Accuracy: Ensures the agent correctly flags when human intervention is required, such as for unanswered technical questions, legal/billing requests, meeting requests, or explicit requests for a human.
  • Discovery Question Coverage: Measures how effectively the agent qualifies leads using indirect, strategic discovery questions across Need, Budget, Authority, and Timeline.
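As an illustration of the handoff behavior this rubric scores, consider a simple rule-based trigger check. The trigger list and `needs_handoff` helper are invented for this sketch; the real agent's handoff logic is model-driven, not a keyword match.

```python
# Illustrative sketch of the "human handoff" behavior scored above: flag a
# message for a supervising seller when it matches a trigger the agent should
# not handle alone. The trigger list is invented for this example.

HANDOFF_TRIGGERS = (
    "legal", "billing", "contract",
    "speak to a human", "schedule a meeting",
)


def needs_handoff(message: str) -> bool:
    lowered = message.lower()
    return any(trigger in lowered for trigger in HANDOFF_TRIGGERS)


assert needs_handoff("Can we schedule a meeting with your team?")
assert not needs_handoff("What does your solution do?")
```

The metric then measures how often the agent raises this flag at the right moments, neither escalating routine questions nor answering requests it should defer.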

Each metric is scored independently on a 0–10 scale, where higher scores indicate stronger performance. We used an LLM-as-a-judge approach to score outputs against the ground truth and rubric and manually reviewed a sampled subset of evaluations to calibrate the judges and validate scoring consistency. To reduce judge variance and mitigate hallucination risk, each sample was evaluated five times, and the mean across runs was recorded as the final score. 
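The aggregation step described above reduces to simple arithmetic: five independent judge passes per sample, averaged into one final score. The scores below are made-up numbers, not real evaluation data.

```python
# Sketch of the scoring aggregation: each sample is judged five times on a
# 0-10 scale and the mean across runs is recorded as the final score.
# The judge scores here are invented for illustration.

from statistics import mean

judge_runs = [8.0, 7.5, 8.5, 8.0, 7.0]  # five independent judge passes
final_score = round(mean(judge_runs), 2)
print(final_score)  # 7.8
```

Averaging across repeated judge runs damps the run-to-run variance of an LLM judge, so a single hallucinated or outlier score does not dominate the final result.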

Benchmarking Strategy with ChatGPT 

To ensure an objective and fair comparison, we replicated a standard seller workflow in the ChatGPT UI using GPT-4.1 with a Pro license, a more advanced model than the GPT-4.1-mini variant currently used by SQA. 

Standard Prompting 

This setup simulates how a seller naturally interacts with a general-purpose LLM: 

  • High-level contextual instructions only 
  • Mirrors SQA’s autonomous research-to-outreach flow 

This ensures: 

  • Workflows remain representative and unbiased 
  • Comparisons reflect real-world usability, not prompt-engineering skill 
Identical Knowledge Sources and Context 

ChatGPT was given the exact same knowledge sources as SQA, including: 

  • Full lead information and seller value proposition 
  • Seller Q&A documentation via the SharePoint connector 
  • Historical conversation context for reply generation 

This isolates differences in agent reasoning and orchestration, not data access. 

Evaluation Results  

Microsoft evaluated the Sales Qualification Agent (SQA) and ChatGPT on over 300 leads, covering research, outreach, and qualification tasks with identical knowledge sources. Evaluations completed on December 4, 2025, showed that SQA consistently outperformed ChatGPT.

  • Research: SQA was 6% more effective at relevant, thorough company research. 
  • Outreach: SQA was 20% better at personalized communication and timely event references. 
  • Engagement: SQA scored 16% higher for precise responses and targeted qualifying questions. 

SQA also operates autonomously, reducing overhead and boosting pipeline quality for sales teams. 
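A headline figure like "20% better" can be read as the relative improvement of mean rubric scores. The two scores below are hypothetical, chosen only to illustrate the arithmetic, not taken from the actual benchmark data.

```python
# How a relative-improvement figure is computed from mean rubric scores.
# The two means below are hypothetical illustration values.

sqa_mean, chatgpt_mean = 8.4, 7.0
improvement_pct = round((sqa_mean - chatgpt_mean) / chatgpt_mean * 100)
print(improvement_pct)  # 20
```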

Results by Task Category 

1. Company Research 

SQA was 6% better than ChatGPT, winning on its ability to perform more relevant and complete research that highlighted the lead company’s interest in the seller’s offerings: 

  • SQA provided more relevant results: To ensure sellers spend their time on the most important leads, they need to determine whether a lead is a good fit for their offerings. While both SQA and ChatGPT were given the same context (seller company and value proposition of the offerings), SQA consistently did better at tying its research back to this context, helping sellers determine fit. Appendix A shows an example where SQA was able to tie the company’s strategic priorities to its need for a collaboration platform and infer strong purchase ability from its robust operational health and minimal leverage burden.
  • SQA synthesized results with higher level of fidelity and completeness: The agent’s value is directly correlated to its ability to eliminate tedious work for the seller. SQA produced more detailed research synthesis (as demonstrated in Appendix A), giving a single, trusted source for the seller to get equipped with any insights they may need.  

These results stem from numerous experiments aimed at optimizing web research for the best outcomes at minimal cost, rather than relying on costly advanced models. Sellers get deeper insights with SQA’s agentic RAG for real-time reasoning with iterative web search results, combined with unique capabilities that increase data coverage, for example, auto-linking CRM records and extraction of company name from lead emails. 

2. Personalized Outreach 

SQA was 20% better than ChatGPT, notably ahead in the level of personalization and mentions of relevant recent events that will resonate with the lead. 

  • More personalized and customer-centric: A lead is more likely to respond to a cold outreach email that directly explains how the seller’s offering can address their needs. SQA did so effectively by starting with the lead’s situation and recent events, while ChatGPT often focused on the seller and used heavier technical jargon. A clear, actionable call to action bookends the email and guides the conversation forward. Appendix B shows an example of how SQA was able to tie a recent acquisition the lead’s company made to the value proposition of the seller’s offering. 

These results are also shaped by direct engagement with sellers: every sales team that deploys SQA provides valuable feedback that all other customers benefit from.   

3. Qualification Conversations (Engage) 

SQA was 16% better than ChatGPT. It responded with greater precision to the lead’s questions to develop purchase interest and asked pointed discovery questions to better qualify the lead before handing off to a seller. 

  • Answers accurately by correctly understanding the lead’s intent and maintaining conversation context effectively. To drive deeper buyer consideration, SQA independently answered even the most technical questions that leads had about the seller’s offerings while maintaining the context from earlier messages in the simulated conversation, delivering clear, direct, and well-structured responses. Appendix C demonstrates SQA’s ability to pull the most relevant information from provided knowledge sources (in this case, files with technical specifications) during an ongoing conversation with a lead. 
  • Handles uncertainty responsibly, handing off to a supervisor/seller when appropriate. Both SQA and ChatGPT were instructed to hand off a lead to a supervising seller when a suitable response cannot be generated or when the lead is considered qualified per pre-defined criteria. SQA handed off accurately and at the right moment in more tests than ChatGPT.  
  • Demonstrates strong discovery coverage. To maximize the value exchange from each follow-up conversation with the lead, SQA and ChatGPT were instructed to include discovery questions in their response to assess pre-configured qualification criteria (covering lead’s need, budget, buying authority and purchase timeline). SQA was able to ask pointed discovery questions to cover more of these criteria than ChatGPT in our simulated conversations. This resulted in SQA identifying and handing off better qualified leads through its engagement.

These gains are attributable to the lessons we have learned through close collaboration with customers to understand the diversity of needs in intent detection and knowledge retrieval across AI agents for Dynamics 365 Sales, Service, and Contact Center applications.  

Conclusion 

Sales Qualification Agent (SQA) is not a simple productivity tool—it is a complex multi-step agent directly influencing revenue outcomes. The Sales Qualification Bench represents a foundational step toward enterprise-grade trust, transparency, and continuous quality improvement for agentic AI in sales. 

Through rigorous, scenario-driven benchmarking, SQA demonstrated clear leadership across research, outreach, and qualification—while also surfacing concrete architectural and rule-quality investments for the next phase of growth. 

Upcoming plans for the Sales Qualification Bench include using the benchmark for continuous improvement of the Sales Qualification Agent, running further comparisons against a wider range of competitive offerings, and publishing the eval package so customers can reproduce it on their own data. Evaluation is not a one-time event. It is now embedded as a first-class engineering discipline in the SQA development lifecycle—ensuring the agent evolves safely, measurably, and in lockstep with seller needs. 

Appendix

Sample inputs and responses for each category  

Appendix A. Sample of research synthesis generated by SQA
Task 
Find and summarize business overview, strategic priorities, financial summary, and recent news. C1: Microsoft Teams C2: Agência Nacional de Águas e Saneamento Básico 
Research Response 
Overview Agência Nacional de Águas e Saneamento Básico (ANA) is a Brazilian federal regulatory agency in the public sector, focused on water resources management and basic sanitation. It operates nationwide with headquarters linked to the Ministry of Integration and Development. ANA is responsible for regulating the use of water resources, managing the security of dams, overseeing public irrigation, and establishing national standards for basic sanitation services. The agency serves primarily government entities and industry—a B2B context, as it sets legislation and frameworks for critical infrastructure projects. Its mission centers on sustainable water resource management for development and environmental security. This background is highly relevant to Microsoft Teams’ value proposition, as ANA’s nationwide role and collaboration among stakeholders (internal teams, other agencies, private partners) demand robust, secure digital communication and collaboration platforms like Teams, which would support ANA’s operations, regulatory projects, and document governance requirements.  

Strategic priorities ANA’s current strategic priorities include strengthening climate resilience, advancing national water governance, and digitalizing monitoring infrastructure to improve the management of Brazil’s water resources. The agency has prioritized participation in international programs, digital transformation in water resource management, and enhancing collaboration tools for more efficient regulatory processes. It seeks integrative technology for better data sharing, persistent project documentation, and cross-functional coordination among regulatory, legal, and operational teams. These needs align closely with Microsoft Teams’ capabilities: integrated communication, persistent chat channels, seamless meetings, and data governance tools are well-suited to ANA’s initiatives and pain points around digital collaboration, transparency, and security in government operations.  

Financial summary Agência Nacional de Águas e Saneamento Básico (ANA) demonstrates stable financial health as a federally funded autarchy, with annual revenues sourced exclusively from government allocations. Its updated budget stands at R$458.63 million, with R$416.56 million in current revenues and R$277.8 million in expenses paid. The agency has executed approximately 89% of its forecasted budget with minimal reliance on outside debts or capital markets, indicating low financial risk and cost-effective operations. For a seller like Microsoft Teams, ANA shows consistent funding and robust operational execution, supporting strong purchasing capacity for communication and collaboration products and services.  

News 
Agência Nacional de Águas e Saneamento Básico (ANA) launches new Digital Transformation Plan for 2025–2027: Published on 16/09/2025, ANA’s new Digital Transformation Plan prioritizes digital advancement for the next three years. This aligns with the value proposition of Microsoft Teams, as modernization initiatives may involve upgrading communication and collaboration infrastructure, making ANA potentially receptive to integrated solutions like Teams. 

ANA publishes revised Information and Communication Technology Master Plan (PDTIC) for 2024–2026: On 24 February 2025, ANA approved updates to its Master Plan for IT and Communication, emphasizing ongoing activities by its IT department. For Microsoft Teams, this is relevant as the procurement and deployment of communication platforms may form part of the plan’s priorities and budget.  

ANA releases 2025–2026 regulatory agenda with focus on digital processes and operational standards in water and sanitation: The new regulatory agenda, published 12/12/2024, includes digital governance, public-private partnerships, and operational standards for water and sanitation. Growing focus on digital solutions to support these agendas could signal interest in platforms that enable cross-team alignment, document management, and virtual collaboration, matching Microsoft Teams capabilities.  

Appendix B. Sample of an initial outreach email generated by SQA 

Task 
Write a cold sales outreach email to a lead. Don’t include a sign-off or signature in the email body. 
Lead ID: `d3f74f84-7e2b-4c16-9c37-919c2b8c2c6e` 
Responses 
Hi Akira, 

I noticed the Japan Meteorological Agency’s recent procurement notice for a cloud-based meteorological communication system, highlighting your focus on secure and efficient collaboration.  

Microsoft Teams can support JMA’s mission by consolidating chat, video calls, and file sharing into a single platform, enabling seamless cross-team collaboration on forecasts and critical data. This integrated approach aligns with your strategic priorities of disaster prevention and timely information sharing across multiple locations.  

Would you be available for a quick call in the next few days to discuss further?  
 

Appendix C. Sample of an engagement email generated by SQA 

Task 
Last Customer Reply “We’re constantly battling fragmented conversations—Slack channels get out of control, and switching to Teams for meetings is just more friction. Documents get lost between email threads and random cloud folders. Sales and marketing are rarely on the same page, especially when we’re chasing deals. Honestly, half the team just ends up texting when things get urgent. If you can solve that, I’m listening”  
Response 
Hi Liam, 

Thanks for sharing the specific challenges your team faces with fragmented conversations, switching between Slack and Teams, and document management. 

Microsoft Teams offers several features that directly address these issues: 
+ Unified chat and channels keep conversations organized by project or department, reducing fragmentation. 
+ Integrated meetings and calling allow seamless transitions from chat to video, minimizing friction between platforms. 
+ Secure file sharing and real-time co-authoring help prevent documents from getting lost across email threads and cloud folders. 
+ Deep integration with Microsoft 365 apps ensures sales and marketing teams stay aligned, with shared access to files and collaborative tools. 
+ External collaboration features allow you to work securely with guests and partners without switching accounts.  

To help tailor recommendations, could you share what budget range you have considered for improving your communication tools? Let me know if you’d like more details or have other questions about streamlining collaboration at CSU. 

Beyond Retrieval: How an Agentic Approach Transforms Microsoft Dataverse Search  http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2025/12/08/agentic-ai-dataverse-search/ Mon, 08 Dec 2025 17:54:10 +0000


Imagine being able to ask your CRM system a question like, “Which opportunities are likely to close this week?” or “Who has met with Ernie Kerrigan at Contoso recently?” and getting an instant, accurate answer without writing a single query or navigating through multiple Views in Dynamics 365.

Whether you’re using Copilot in Dynamics 365 Sales, Power Apps customized through Microsoft Copilot Studio or Microsoft 365 Copilot for Sales, under the hood, these experiences leverage one common engine: AI-powered Dataverse (DV) Search, which seamlessly connects business users to the underlying database schema, translating intent into action without requiring technical expertise. Thousands of enterprise customers already rely on this capability to power their business workflows.  

Figure 1: How AI-powered Dataverse Search Connects Copilot Experiences Across Dynamics 365 

We’ve reimagined the technology behind Dataverse Search from the ground up. Leveraging recent breakthroughs in agentic AI, the new system delivers answers that are more relevant, contextual, and accurate to your specific business data. Think of it as an intelligent assistant that not only understands your question but figures out the best way to answer it using an adaptive reasoning process.  

In this blog, we’ll explore why this agentic approach was necessary, how it works under the hood, and how it scales to enterprise needs, supporting complex schemas, massive datasets, and domain-specific terminology while adhering to Microsoft Responsible AI principles. The agentic approach is model-agnostic: different or fine-tuned models can influence the quality of results, but the choice of model is orthogonal to the architecture. For this post, our emphasis remains on the agentic loop and its role in delivering dynamic, context-aware answers. We will also demonstrate our results via evaluation benchmarks and show you ways to customize the system for your business. 

Queries to Conversations: Unlocking Your Live Business Data 

Every organization’s Dynamics 365 environment is unique, and most customers customize it extensively. Over time, these customizations lead to complex schemas, ambiguous relationships, and massive datasets spanning millions of records and terabytes of data. Our original Dataverse Search system was pioneering, but it relied on a fixed-plan natural language to SQL pipeline. A user’s question was converted to SQL through sequential stages: parsing, schema mapping, data linking, and SQL generation. This design was prone to cascading failure in a sequential pipeline. Each stage operated in isolation without shared context, so a single error could invalidate the entire query. Every question followed the same fixed flow, even when certain steps were unnecessary. This resulted in brittle behavior and suboptimal answers for complex or ambiguous queries that spanned multiple tables. 

We recognized the need for a more adaptable, resilient approach to tackle the complexities of enterprise data. This upgrade shifts DV Search beyond simple Search into intelligent, interactive conversations with your business data. For you, this translates into immediate, actionable value by providing:  

  • Real-Time, Actionable Answers: Ask, “Which of my open opportunities in New York are scheduled to close this month?” and get an instant answer from the live Dataverse data. This isn’t a report from last night’s data refresh; it’s the current state of your business. 
  • Democratized Data Access: A service manager can ask, “Show me active, high-priority cases that haven’t been updated in 3 days” without needing to understand the underlying table structure of incidents and case/activities. 
  • Deeper Contextual Conversations: The agent supports multi-turn conversations. After asking about opportunities in New York, you can follow up with, “Of those, which ones are for our ‘Pro’ license?” The agent remembers the context, providing a progressively refined answer. 

Under the Hood: Agentic Architecture 

To overcome the limitations of the earlier system and meet complex customer scenarios, the new DV Search architecture introduces an Agentic Orchestrator powered by GPT-4.1. It transforms query handling from a static pipeline into a dynamic reasoning loop: plan → execute → refine. Instead of blindly converting text to SQL, the orchestrator treats each question as a goal, intelligently deciding the best steps to reach it. 

Figure 2: Agentic Architecture for AI-powered Dataverse Search 

Context Awareness and Conversations: When a user submits a new or follow-up question, a dedicated preprocessing component reviews prior conversation history and rewrites the query as a single, self-contained question, enabling coherent multi-turn conversations. For example, if you ask, “Show my top opportunities in Q4” and then follow up with “How about in Europe only?”, the component understands the second question is a refinement of the first rather than starting from scratch or losing track of prior context. This conversational capability makes interactions feel natural and efficient. The refined question is then enriched with the business’s domain knowledge (glossary) to fully reflect the user’s intent within the specific business context. 
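The contract of that preprocessing step can be illustrated with a toy rewrite function: history plus follow-up in, one standalone question out. In production this rewriting is done by a language model; the string-level `rewrite_followup` helper below is an invented stand-in that only demonstrates the shape of the transformation.

```python
# Toy illustration of the query-rewriting step: merge a follow-up question
# with conversation history into one self-contained question before planning.
# rewrite_followup is a hypothetical stand-in for an LLM-based rewriter.

def rewrite_followup(previous_question: str, followup: str) -> str:
    if followup.lower().startswith("how about"):
        refinement = followup[len("how about"):].strip(" ?")
        return f"{previous_question} {refinement}"
    return followup  # already self-contained


standalone = rewrite_followup(
    "Show my top opportunities in Q4", "How about in Europe only?"
)
print(standalone)  # Show my top opportunities in Q4 in Europe only
```

Downstream planning then only ever sees self-contained questions, which keeps the orchestrator's logic independent of conversation state.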

Dynamic Planning and Execution: When the self-contained question comes in, the orchestrator doesn’t simply translate it into SQL. Instead, it breaks the query into logical steps and decides which tools to use and in what order, while also utilizing the domain knowledge encapsulated with the supplied glossaries. These tools include:  

  • schema_linking_tool: Schema lookup for understanding tables and relationships 
  • data_linking_tool: Semantic Search for finding relevant data values and resolving data ambiguities 
  • sql_execution_tool: SQL execution tool for retrieving results 
  • submit_plan_update_tool: Captures both the original plan and any course corrections made during execution 

The orchestrator adapts on the fly if the first attempt fails or returns incomplete results. It analyzes the issue, revises the plan, and retries. This self-correcting loop is a major improvement over older systems that suffered from cascading failures. 
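The plan → execute → refine loop can be sketched schematically. The tool names below mirror the ones listed above, but their bodies and the retry logic are simplified stand-ins invented for this illustration, not the real orchestrator.

```python
# Schematic of the plan -> execute -> refine loop. Tool bodies and the error
# simulation are invented stand-ins for illustration only.

def schema_linking_tool(question: str) -> dict:
    # Real tool: look up relevant tables and relationships for the question.
    return {"tables": ["account", "opportunity"]}


def sql_execution_tool(sql: str) -> list[dict]:
    # Simulate a recoverable error on the first (flawed) plan so the loop
    # gets a chance to self-correct.
    if "estimatedclosedate" not in sql:
        raise ValueError("unknown column")
    return [{"name": "Contoso renewal", "estimatedclosedate": "2026-04-30"}]


def orchestrate(question: str, max_attempts: int = 3) -> list[dict]:
    schema = schema_linking_tool(question)
    sql = f"SELECT name FROM {schema['tables'][1]}"  # initial plan
    for _ in range(max_attempts):
        try:
            return sql_execution_tool(sql)           # execute
        except ValueError:
            # Refine: revise the plan and retry instead of failing outright.
            sql += " -- revised to include estimatedclosedate filter"
    return []


rows = orchestrate("Which opportunities are likely to close this week?")
print(rows[0]["name"])  # Contoso renewal
```

The key contrast with the old fixed pipeline is the `except` branch: a failed execution feeds back into planning rather than invalidating the whole query.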

Handling Relational Complexity: One of the most powerful aspects of this approach is its ability to handle relational complexity. Operational business application schemas often require multi-hop joins across multiple tables, including custom entities. The orchestrator understands these relationships and can navigate them intelligently, ensuring accurate joins and filters even in highly customized environments. For example, if a question involves linking Accounts to Opportunities and then to a custom Product table, the agent plans the steps and executes them seamlessly. 

Personalization and Learning: Personalization further enhances the experience. Over time, the system learns from usage patterns within your organization. If you frequently work with the Accounts table or use certain custom fields, the agent prioritizes those interpretations in future queries. This learning is based on interaction signals, not external data, and is carefully scoped to respect privacy and organizational boundaries. The result is a system that becomes more aligned with your business logic the more you use it. 

Real-World Example 

Imagine you run Fourth Coffee Machines, a business selling premium espresso and grinder units to commercial and residential customers, managed through a Power App built on Dataverse. A seller begins with a simple keyword search for “Fourth Coffee” in the top search bar in Power Apps to confirm the account record. Thanks to fuzzy matching and relevance re-ranking, even typos like “forth coffee” or “4thcoffee” surface the right entity instantly. 

From there, the seller asks Copilot: “Show me my open opportunity at risk with Fourth Coffee.” The agent rewrites the query, scopes it to the current user, interprets “at risk” as a cold rating, and joins Account → Opportunity. It executes SQL, returns the results, and summarizes them with citations—no manual filtering, no report building. 

Finally, the seller pivots to a KPI question: “What is the HRR for Coffee Grinder 02?” Here, the agent consults the business glossary, which defines HRR as Happy Response Rate (positive-sentiment reviews ÷ total reviews in the Product Review table). It computes the metric, explains the formula, and cites the source records. The user now understands exactly how the number was derived. 
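The glossary-defined HRR formula above is simple enough to sketch directly. The review records here are made-up sample data, not real Product Review rows, and the field names are assumptions.

```python
def happy_response_rate(reviews):
    """HRR = positive-sentiment reviews / total reviews (0.0 if none)."""
    if not reviews:
        return 0.0
    positive = sum(1 for r in reviews if r["sentiment"] == "positive")
    return positive / len(reviews)

# Hypothetical Product Review rows for Coffee Grinder 02.
sample = [
    {"product": "Coffee Grinder 02", "sentiment": "positive"},
    {"product": "Coffee Grinder 02", "sentiment": "positive"},
    {"product": "Coffee Grinder 02", "sentiment": "negative"},
    {"product": "Coffee Grinder 02", "sentiment": "positive"},
]
print(f"HRR: {happy_response_rate(sample):.0%}")   # HRR: 75%
```

The point of the glossary is exactly this: the agent does not guess what “HRR” means, it retrieves a concrete, auditable formula and applies it.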

Under the hood, this seamless experience is powered by an Agentic Orchestrator that plans, executes, and refines dynamically. It chooses the right tools, adapts when errors occur, and injects domain knowledge from glossaries. By combining dynamic planning, iterative refinement, relational understanding, and personalization, it represents a significant leap forward from static query pipelines. It’s not just about generating SQL; it’s about orchestrating an intelligent, context-aware process that feels conversational and delivers real business value. 

Evaluation Results 

To measure how well our agentic system performs in practical enterprise scenarios, we evaluated it against curated datasets of user prompts, each representing or assisting with a real job to be done. These prompts reflect the everyday questions and tasks that drive productivity for CRM users—from quick record lookups and aggregation analytics using keyword search or simple filters and joins, to complex multi-join queries requiring domain expertise. By categorizing prompts into levels of complexity, we ensure the evaluations capture the full spectrum of enterprise challenges. 

For each complexity level, we report two practical metrics: Relaxed Execution Accuracy (EX Accuracy) and P80 Latency. Relaxed Execution Accuracy measures how often the generated SQL returns the same rows as the reference SQL when both are executed on the same data—extra columns in the predicted query are allowed, but extra or missing rows are not; order is ignored unless ORDER BY is specified. P80 Latency is the 80th-percentile end-to-end response time, from request receipt through retrieval, model inference, and verification to the final SQL result. Together, these metrics give a transparent, action-oriented view of correctness and responsiveness as task complexity increases. They highlight where the agentic framework delivers reliable, efficient answers that empower users to get more done with natural language. 
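The Relaxed Execution Accuracy rule defined above can be made precise with a small checker. This is a simplified sketch of the idea, not the evaluation harness: it assumes the reference columns appear as a prefix of the predicted columns, which real column alignment would handle more carefully.

```python
from collections import Counter

def relaxed_match(predicted, reference, ordered=False):
    """Compare result sets by rows: extra predicted *columns* are allowed,
    extra or missing *rows* are not; order matters only if ORDER BY applies."""
    if not reference:
        return not predicted
    width = len(reference[0])                       # reference column count
    projected = [tuple(row[:width]) for row in predicted]
    reference = [tuple(row) for row in reference]
    if ordered:                                     # query specified ORDER BY
        return projected == reference
    return Counter(projected) == Counter(reference) # multiset row equality
```

Comparing row multisets with Counter (rather than sets) is what makes duplicate or missing rows count as failures while still ignoring order.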

| Complexity Level | Description | Prompt Distribution (%) | EX Accuracy (Relaxed) | P80 Latency (s) |
|---|---|---|---|---|
| Level 1 | Keyword search | 21% | 96.2% | 7.7s |
| Level 2 | NL queries involving retrievals with filters and joins | 28% | 96.4% | 7.5s |
| Level 3 | NL queries requiring understanding of domain knowledge and customizations | 51% | 81.2% | 10.6s |

† Metrics averaged over multiple runs 

In practice, higher accuracy often comes at the cost of increased latency; conversely, pushing for low latency can reduce end-to-end quality. This agentic system is designed to navigate that tradeoff, delivering strong accuracy while keeping latency within practical bounds, a balance suited to production use. 

Tuning for Your Business: Glossaries and Enriched Schema 

No AI system knows your business out of the box. We’ve added tuning mechanisms that let makers refine how the Q&A agent understands your data: 

  • Glossaries: You can define a glossary to teach the agent your company’s unique vocabulary and acronyms. For example, if “QoQ” is common slang on your team for “quarter-over-quarter” or “CTX” refers to a particular set of products, you can add those to the glossary. The next time someone asks “What’s the QoQ growth for CTX?”, the agent will know exactly what that means. This helps align the AI with the lingo of your organization so it interprets queries the same way a knowledgeable employee would. 
  • Schema Descriptions: Dataverse allows adding custom descriptions to tables and columns. By populating these descriptions with meaningful info, you give the agent extra context. For instance, two fields might both be called “Status” – one on a custom entity and one on a standard entity. If you add descriptions like “(Order Status – custom)” vs “(System Status code)”, the agent can use that to pick the right field during SQL generation. Essentially, you’re able to clarify the semantics of your data model for the AI. 

Using the inherent metadata in Dataverse (like relationships and display names) plus these maker-driven additions, the agentic system can be tailored to use the correct terms and relationships in your domain, boosting accuracy even further. And because you control these glossaries and descriptions, you can continuously refine the AI’s understanding as your business evolves. 
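A minimal sketch of how glossary-driven interpretation might work: expand organization-specific acronyms before the agent plans the query. The entries “QoQ” and “CTX” are the examples from the post; the expansion text for “CTX” and the substitution logic are hypothetical assumptions, not the product’s behavior.

```python
import re

GLOSSARY = {
    "QoQ": "quarter-over-quarter",
    "CTX": "the Contoso product line",   # hypothetical expansion
}

def apply_glossary(question, glossary=GLOSSARY):
    """Replace known acronyms (whole words, case-sensitive) with definitions."""
    for term, meaning in glossary.items():
        question = re.sub(rf"\b{re.escape(term)}\b", meaning, question)
    return question

print(apply_glossary("What's the QoQ growth for CTX?"))
# What's the quarter-over-quarter growth for the Contoso product line?
```

Matching on whole words keeps an acronym like “CTX” from firing inside an unrelated token, which is the same kind of disambiguation the schema descriptions provide for look-alike field names.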

Conclusion 

By reinventing Dataverse Search with an agentic architecture, we’ve moved from a rigid query engine to an adaptive, intelligent assistant for your business. The system understands nuance, handles ambiguity through reasoning, and even lets you inject your domain knowledge. Early adopters are seeing more questions answered correctly and faster than before, turning previously buried data into actionable insights. One leading global financial services company saw an Execution Accuracy surge from 22% to 97% on their marquee set of scenarios. This marks a significant step toward making enterprise data truly conversational. It empowers everyone from business users to power makers to tap into complex data and get the answers they need instantly and accurately, simply by asking. 

The post Beyond Retrieval: How an Agentic Approach Transforms Microsoft Dataverse Search  appeared first on Microsoft Dynamics 365 Blog.
