Julie Strauss, Author at Microsoft Dynamics 365 Blog: The future of agentic CRM and ERP
http://approjects.co.za/?big=en-us/dynamics-365/blog

Dynamics 365 sets the bar for agentic sales qualification on new benchmark
http://approjects.co.za/?big=en-us/dynamics-365/blog/business-leader/2025/12/11/dynamics-365-sets-the-bar-for-agentic-sales-qualification-on-new-benchmark/
Thu, 11 Dec 2025 16:00:00 +0000

Announcing the Microsoft Sales Bench—a new collection of benchmarks designed to assess the performance of your AI-powered sales agents. Learn more.

The post Dynamics 365 sets the bar for agentic sales qualification on new benchmark appeared first on Microsoft Dynamics 365 Blog.


In October 2025, we announced the general availability of the Sales Qualification Agent (SQA) in Dynamics 365 Sales—a breakthrough in autonomous lead qualification. Sales Qualification Agent empowers sellers by helping build higher-quality opportunities while eliminating tedious, repetitive work. It autonomously researches every lead, initiates personalized outreach, and engages prospects to understand purchase intent, ensuring that sellers spend their time meeting prospects who are ready to take the next step. With modes enabling both seller-driven and fully autonomous qualification, the agent supports a key goal for sales organizations—increasing revenue per seller.

Customers are using Sales Qualification Agent in two ways: 

  1. Helping boost revenue beyond current sales capacity
    • Responding to inbound leads within minutes instead of days, increasing response rates and in turn, qualified opportunities.
    • Engaging leads that sellers are unable to follow up on due to capacity constraints, or those deemed economically unviable to pursue.
    • Increasing pipeline quality by focusing the seller’s time on a handful of high intent, engaged leads recommended by the agent.
  2. Helping reduce sales costs
    • Reducing back-office costs related to lead research and validation, using Sales Qualification Agent in “Research only” mode to hand off only the leads that meet the ideal customer profile criteria.
    • Automatically disqualifying low-quality leads, saving hours of seller time during the week.

Continuing to benchmark the quality of sales AI agents

Microsoft is building the future of agentic sales technology with prebuilt AI agents, such as Sales Qualification Agent, the Sales Research Agent, and the Sales Close Agent, available in Dynamics 365.

At Microsoft, we’re committed to delivering quality, trust, and transparency with our agents, and that requires rigorous evaluation. As we continue to build new agents and improve existing ones for critical sales workflows, evaluation benchmarks provide a structured and transparent way for our customers to measure quality for the jobs the agent does.

Today, we’re announcing the Microsoft Sales Bench—a new collection of evaluation benchmarks designed to assess the performance of AI-powered sales agents across real-world scenarios. This framework brings together purpose-built metrics, hundreds of sales-specific scenarios, and composite scoring validated by both human and AI judges.

The Sales Bench isn’t starting from scratch. It formalizes and expands what began with the Sales Research Bench, published on October 21, 2025, which evaluates how AI solutions answer business research questions for sales leaders.

Today, we’re extending the Microsoft Sales Bench with a second benchmark: the Microsoft Sales Qualification Bench, focused on measuring how effectively AI agents qualify leads and generate high-quality pipeline.

Introducing the Sales Qualification Bench for lead qualification

The Microsoft Sales Qualification Bench evolved from the rigorous evaluations we have conducted since the Sales Qualification Agent’s public preview in April, with the goal of objectively measuring quality as we further developed the agent in partnership with customers from a diverse set of industries. Since the preview, we have measured every update against these standards, ensuring improvements are real and repeatable.

We generated a synthetic dataset of 300 leads modeled after companies from three different industries, with attributes such as name, company, and email ID—representative of what sales teams typically work with before any enrichment or hygiene is performed. In addition to these typical attributes, we also added key knowledge inputs such as the value proposition of the products being sold, customer case studies, and documentation for answering customer questions.

In addition to Sales Qualification Agent, we used the evaluation framework to measure ChatGPT by OpenAI on the same dataset. Since we didn’t have access to an autonomous agent from OpenAI, we mimicked how a human seller would use ChatGPT to recreate the three key jobs SQA performs. We provided each system—Sales Qualification Agent and ChatGPT—the exact same lead inputs, knowledge sources, and contextual signals under controlled evaluation configurations. We used a ChatGPT Pro license with GPT-4.1, the closest (and slightly more capable) match to Sales Qualification Agent’s GPT-4.1 mini, which we intentionally chose to deliver optimal quality at a lower cost per lead than newer models. Additionally, the Pro license was chosen to optimize for quality: ChatGPT’s pricing page describes Pro as “full access to the best of ChatGPT.”1

The framework evaluates outputs from the three jobs across Sales Qualification Agent and ChatGPT:

  • Research: Company research for the lead—background, strategic priorities, financial health, and latest news.
  • Outreach: A personalized email generated based on research, to make initial contact with the lead.
  • Engagement: The agent’s conversation with a lead until it’s qualified or dispositioned.

Our scoring metrics span core quality (accuracy, relevance, completeness), trustworthiness (grounding and citations), and business-specific success criteria (e.g., relevancy of company research to highlight interest in the seller’s offerings, personalization of the initial outreach emails sent to catch the lead’s attention, accuracy of responses to the lead’s questions to drive purchase intent, and the timing of handoff to a seller when the lead is ready to engage).

Outputs were scored independently by both human reviewers and an LLM judge built with GPT-5.1, using a 1–10 scale for each metric. These metric-specific scores were then rolled up using a simple average to produce a composite quality score. The result is a rigorous benchmark presenting a composite score and dimension-specific scores to reveal where agents excel or need improvement. Our methodology, metrics, and their definitions are described in this technical blog.
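The roll-up described above is straightforward to express in code. The sketch below is illustrative only: the 1–10 scale and the simple-average composite are from the post, while the specific metric names and the averaging of human and LLM-judge scores per metric are assumptions for the example, not the benchmark's published method.

```python
# Illustrative roll-up of per-metric judge scores into a composite quality
# score. Metric names and the human/LLM-judge averaging are assumptions;
# the post only specifies 1-10 scores averaged into a composite.

def composite_score(metric_scores: dict[str, list[float]]) -> float:
    """Average each metric's judge scores (1-10), then average across metrics."""
    per_metric = [sum(scores) / len(scores) for scores in metric_scores.values()]
    return sum(per_metric) / len(per_metric)

scores = {
    "accuracy":     [8.0, 7.0],   # [human reviewer, LLM judge]
    "relevance":    [9.0, 8.0],
    "completeness": [7.0, 8.0],
}
print(round(composite_score(scores), 2))  # 7.83
```

A composite like this makes releases comparable over time, while the per-metric averages preserve the dimension-level detail needed to see where an agent excels or falls short.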

Results

In evaluations completed on December 4, 2025, using the Sales Qualification Bench, Sales Qualification Agent outperformed ChatGPT on each of the three jobs required for sales qualification:

  1. Research: The Sales Qualification Agent outperformed ChatGPT with 6% higher aggregate scores, leading on relevancy and completeness in research results that highlighted the lead company’s interest in the seller’s offerings.
  2. Outreach: Sales Qualification Agent demonstrated 20% better results compared to ChatGPT, generating email drafts with accurate personalization and mentions of relevant recent events that will resonate with the lead.
  3. Engagement: Sales Qualification Agent’s email responses to engage a lead over a multi-turn conversation scored 16% higher than ChatGPT’s. SQA generated emails that answered the lead’s questions accurately, developing their purchase interest, and asked precise discovery questions to qualify the lead before handing off to a seller.

In addition to performing better on these metrics, Sales Qualification Agent can run autonomously, which can significantly reduce the time spent generating pipeline while helping sales teams build a better-quality pipeline.

Sales Qualification Agent scores well on these three jobs because it is optimized for sales-specific scenarios and uses the following techniques to get great results:

  1. It uses agentic Retrieval Augmented Generation (RAG) to relentlessly research each lead, ensuring greater completeness. More on this in the following section.
  2. With knowledge of what the company sells, it can contextualize every workflow to increase relevancy for both the seller and the lead.
  3. It can retrieve organizational knowledge from attached documents and internal repositories like SharePoint with greater precision, boosting accuracy of its responses when engaging with the lead.
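The agentic RAG pattern mentioned in the first point can be sketched as a loop in which the model searches, reasons over what it has gathered, and decides whether more research is needed. This is a minimal illustration of the general pattern only; the function names (`web_search`, `llm`) are placeholders, not Dynamics 365 or Azure APIs, and the real agent's prompts and stopping logic are not public.

```python
# Sketch of an agentic RAG research loop: iterative search plus reasoning
# until the model judges its research complete. All callables are
# placeholders supplied by the caller, not real product APIs.

def research_lead(lead: dict, llm, web_search, max_iterations: int = 5) -> str:
    notes: list[str] = []
    query = f"{lead['company']} strategic priorities and recent news"
    for _ in range(max_iterations):
        notes.extend(web_search(query))
        # The model reviews accumulated notes and either proposes a
        # follow-up query or signals that research is complete.
        decision = llm(f"Notes so far: {notes}\nNext query, or DONE?")
        if decision.strip() == "DONE":
            break
        query = decision
    return llm(f"Summarize research for {lead['company']}: {notes}")
```

Compared with single-shot retrieval, the loop lets early findings (say, a news item about a cyberattack) steer later searches, which is what drives the completeness gains described above.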

The technical blog details which metrics SQA excels at relative to ChatGPT, where it falls short, and why.

Translating evals to real-world impact

Running evals led to major Sales Qualification Agent improvements during its six-month preview. Early results prompted us to try agentic AI design patterns, especially agentic RAG, which improved our company research by allowing iterative web searches and real-time reasoning. They also led us to enhance data coverage by auto-linking existing CRM records to each lead and inferring company names from lead emails. These updates provided sellers with deeper insights, revealing strategic opportunities and risks beyond basic facts.

For instance, when researching leads for a security company, Sales Qualification Agent can link news on recent cyberattacks to increased demand for its software. As highlighted in the technical blog, research synthesized by the agent makes such inferences more consistently than ChatGPT. Enhancing the agent’s research also improved the relevance and personalization of outreach emails, helping agents better engage leads and clarify their ability and intent to purchase before handing them off to sellers.

Sandvik Coromant, a leader in precision cutting tools, partnered with us to pilot Sales Qualification Agent for their Digital Commerce program. After the updates, Pia Cedendahl, Global Sales Manager for Strategic Channels/Partners and Online Sales, noted, “Sales Qualification Agent’s answers became far more on-point to our business—it’s like having a research assistant that already understands what we care about.” Sandvik Coromant saw improved lead conversion and higher engagement from their Digital Account Managers, validating the impact of our evaluation-driven approach. Pia joined Microsoft leaders at the Microsoft Ignite 2025 session, “Accelerate revenue and seller productivity with agentic CRM,” where she shared how the team saved more than 120 hours and $19,000 in just the first three weeks since launching a pilot, and forecasted a 5% increase in revenue with full rollout.

Better insights, more personalization, proven value

Equipped with agentic AI design and backed by data-driven evaluation, customers can confidently use Sales Qualification Agent, knowing that:

  • Sellers receive comprehensive company overviews, timely news highlights, and actionable recommendations that are consistently delivered with high quality—drawing a clear line from insight to action.
  • Sales leaders can expand their qualified pipeline cost efficiently, with the agent ensuring high lead quality.
  • Prospects benefit from more personalized outreach, enhancing their experience and supporting increased conversion rates.

What’s next

We’ll continue to refine Sales Qualification Agent using agentic design patterns, aiming to make every improvement measurable and meaningful. Stay tuned for the full evaluation results and methodology for the Sales Qualification Bench, which will be published for transparency and reproducibility. We also intend to add more evaluation frameworks and benchmarks to the Microsoft Sales Bench collection, including benchmarks that cover future sales agent capabilities.


1ChatGPT pricing page, accessed November 24, 2025

Elevating Sales Performance with Microsoft’s Sales Research Agent: How Rigorous Evaluation Unlocks Trust and Transformation
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2025/10/21/sales-research-bench/
Tue, 21 Oct 2025 14:50:00 +0000

The Sales Research Agent in Dynamics 365 Sales automatically connects to live CRM data and can connect to additional data stored elsewhere, such as budgets and targets. It reasons over complex, even customized schemas with deep domain expertise, and presents novel, decision-ready insights through text-based narratives and rich data visualizations tailored to the business question at hand.

The post Elevating Sales Performance with Microsoft’s Sales Research Agent: How Rigorous Evaluation Unlocks Trust and Transformation appeared first on Microsoft Dynamics 365 Blog.


In today’s hyper-competitive business landscape, sales leaders face a relentless challenge: how to drive growth, outpace competitors, and make smarter decisions faster in a resource-constrained environment. Thankfully, the promise of AI in sales is no longer theoretical. With the advent of agentic solutions embedded in Microsoft Dynamics 365 Sales, including the Sales Research Agent, organizations are witnessing a transformation in how business decisions are made and teams are empowered. But how do you know if these breakthrough technologies have reached a level of quality where you can trust them to support business-critical decisions?

Today, I’m excited to share an update on the Sales Research Agent, in public preview as of October 1, as well as a new evaluation benchmark, the Microsoft Sales Research Bench, created to assess how AI solutions respond to the strategic, multi-faceted questions that sales leaders have about their business and operational performance. We intend to publish the full evaluation package behind the Sales Research Bench in the coming months so that others can run these evals on different AI solutions themselves.

The New Frontier: AI Research Agents in Sales

Sales Research Agent in Dynamics 365 Sales empowers business leaders to explore complex business questions through natural language conversations with their data. It leverages a multi-modal, multi-model, and multi-agent architecture to reason over intricate, customized schemas with deep sales domain expertise. The agent delivers novel, decision-ready insights through narrative explanations and rich visualizations tailored to the specific business context.

For sales leaders, this means the ability to self-serve real-time, trustworthy analysis spanning CRM and other domains, analysis that previously took many people days or weeks to compile, with access to deeper AI-enabled insights on pipeline, revenue attainment, and other critical topics.

[Image: Screenshot of the Sales Research Agent in Dynamics 365 Sales]

As a product manager in the sales domain, balancing deep data analysis with timely insights is a constant challenge. The pace of changing market dynamics demands a new way to think about go-to-market tactics. With the Sales Research Agent, we’re excited to bridge the gap between traditional and time-intensive reporting and real-time, AI-assisted analysis — complementing our existing tools and setting a new standard for understanding sales data.

Kris Kuty, EY LLP
Clients & Industries — Digital Engagement, Account, and Sales Excellence Lead


What makes the Sales Research Agent so unique? 

  • Its turnkey experience goes beyond the standard AI chat interface to provide a complete user experience with text narratives and data visualizations tailored for business research and compatible with a sales leader’s natural business language.  
  • As part of Dynamics 365 Sales, it automatically connects to your CRM data and applies schema intelligence to your customizations, with the deep understanding of your business logic and the sales domain that you’d expect a business application to have. 
  • Its multi-agent, multi-model architecture enables the Sales Research Agent to build out a dedicated research plan and to delegate each task to specialized agents, using the model best suited for the task at hand.   
  • Before the agent shares its business assessment and analysis, it critiques its work for quality. If the output does not meet the agent’s own quality bar, it will revise its work. 
  • The agent explains how it arrived at its answers using simple language for business users and showing SQL queries for technical users, enabling customers to quickly verify its accuracy. 

Why Verifiable Quality Matters

Seemingly every day a new AI tool shows up. The market is crowded with offers that may or may not deliver acceptable levels of quality to support business decisions. How do you know what’s truly enterprise ready? To help make sure business leaders do not have to rely on anecdotal evidence or “gut feel”, any vendor providing AI solutions needs to earn trust through clear, repeatable metrics that demonstrate quality, showing where the AI excels, where it needs improvement, and how it stacks up against alternatives.

While there is a wide range of pioneering work on AI evaluation, enterprises deserve benchmarks that are purpose-built for their needs. Existing benchmarks don’t reflect 1) the strategic, multi-faceted questions of sales leaders using their natural business language; 2) the importance of schema accuracy; or 3) the value of quality across text and visualizations. That is why we are introducing the Sales Research Bench.

Introducing Sales Research Bench: The Benchmark for AI-powered Sales Research

Inspired by groundbreaking work in AI benchmarks such as TBFact and RadFact, Microsoft developed the Sales Research Bench to assess how AI solutions respond to the business research questions that sales leaders have about their business data.1

Read this blog post for a detailed explanation of the Sales Research Bench methodology as well as the Sales Research Agent’s architecture.

This benchmark is based on our customers’ real-life experiences and priorities. From engagements with customer sales teams across industries and around the world, Microsoft created 200 real-world business questions in the language sales leaders use and identified 8 critical dimensions of quality spanning accuracy, relevance, clarity, and explainability. The data schema on which the evaluations take place is customized to reflect the complexities of our customers’ enterprise environments, with their layered business logic and nuanced operational realities.

To illustrate, here are 3 of our 200 evaluation questions informed by real sales leader questions:
  1. Looking at closed opportunities, which sellers have the largest gap between Total Actual Sales and Est Value First Year in the ‘Corporate Offices’ Business Segment?
  2. Are our sales efforts concentrated on specific industries or spread evenly across industries?
  3. Compared to my headcount on paper (30), how many people are actually in seat and generating pipeline?

Judging is handled by LLM evaluators that rate an AI solution’s responses (text and data visualizations) against each quality dimension on a 100-point scale based on specific guidelines (e.g., a score of 100 for chart clarity if the chart is crisp and well labeled, or 20 if the chart is unreadable or misleading). These dimension-specific scores are then weighted to produce a composite quality score, with the weights defined based on qualitative input from customers about what they value most. The result is a rigorous benchmark presenting a composite score and dimension-specific scores to reveal where agents excel or need improvement.2

1 For more on TBFact, see “Towards Robust Evaluation of Multi-Agent Systems in Clinical Settings” on the Microsoft Community Hub; for more on RadFact, see “MAIRA-2: Grounded Radiology Report Generation” (arXiv:2406.04449).

2 Sales Research Bench uses Azure Foundry’s out-of-box LLM evaluators for the dimensions of Text Groundedness and Text Relevance. The other six dimensions each have a custom LLM evaluator that leverages OpenAI’s GPT-4.1 model. The 100-point scale has 100 as the highest score and 20 as the lowest. More details on the benchmark methodology are provided here.

Running Sales Research Bench on AI solutions

Here’s how we applied the Sales Research Bench to run evaluations on the Sales Research Agent, ChatGPT by OpenAI, and Claude by Anthropic.  

  • License: Microsoft evaluated ChatGPT by OpenAI using a Pro license with GPT-5 in Auto mode and Claude Sonnet 4.5 by Anthropic using a Max license. The licenses were chosen to optimize for quality: ChatGPT’s pricing page describes Pro as “full access to the best of ChatGPT,” while Claude’s pricing page recommends Max to “get the most out of Claude.”3 Similarly, ChatGPT’s evaluation was run using Auto mode, a setting that allows ChatGPT’s system to determine the best-suited model variant for each prompt.  
  • Questions: All agents were given the same 200 business questions.  
  • Instructions: ChatGPT and Claude were given explicit instructions to create charts and to explain how they got to their answers. (Equivalent instructions are included in the Sales Research Agent out of box.) 
  • Data: ChatGPT and Claude accessed the sample dataset in an Azure SQL instance exposed through the MCP SQL connector. The Sales Research Agent connects to the sample dataset in Dynamics 365 Sales out of box.  

3ChatGPT Pricing and Pricing | Claude, both accessed on October 19, 2025

Results are in: Sales Research Agent vs. alternative offerings

In head-to-head evaluations completed on October 19, 2025 using the Sales Research Bench framework, the Sales Research Agent outperformed Claude Sonnet 4.5 by 13 points and ChatGPT-5 by 24.1 points on a 100-point scale.

[Image: Sales Research Agent evaluation results: Microsoft Sales Research Bench composite scores]

Results: Results reflect testing completed on October 19, 2025, applying the Sales Research Bench methodology to evaluate Microsoft’s Sales Research Agent (part of Dynamics 365 Sales), ChatGPT by OpenAI using a ChatGPT Pro license with GPT-5 in Auto mode, and Claude Sonnet 4.5 by Anthropic using a Claude Max license.

Methodology and Evaluation dimensions: Sales Research Bench includes 200 business research questions relevant to sales leaders that were run on a sample customized data schema. Each AI solution was given access to the sample dataset using different access mechanisms that aligned with their architecture. Each AI solution was judged by LLM judges for the responses the solution generated to each business question, including text and data visualizations.

We evaluated quality based on 8 dimensions, weighting each according to qualitative input from customers about what they value most in AI tools for sales research: Text Groundedness (25%), Chart Groundedness (25%), Text Relevance (13%), Explainability (12%), Schema Accuracy (10%), Chart Relevance (5%), Chart Fit (5%), and Chart Clarity (5%). Each of these dimensions received a score from an LLM judge, from 20 as the worst rating to 100 as the best. For example, the LLM judge would give a score of 100 for chart clarity if the chart is crisp and well labeled, or 20 if the chart is unreadable or misleading. Text Groundedness and Text Relevance used Azure Foundry’s out-of-box LLM evaluators, while judging for the other six dimensions leveraged OpenAI’s GPT-4.1 model with specific guidance. A total composite score was calculated as a weighted average from the 8 dimension-specific scores. More details on the methodology can be found in this blog.
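The weighted average described above can be sketched directly from the published weights. The weights below are taken from the post; the per-dimension scores in the example are illustrative values, not benchmark results.

```python
# Weighted composite score using the dimension weights published in the post.
# The example per-dimension scores are illustrative, not actual results.

WEIGHTS = {
    "text_groundedness": 0.25, "chart_groundedness": 0.25,
    "text_relevance": 0.13, "explainability": 0.12,
    "schema_accuracy": 0.10, "chart_relevance": 0.05,
    "chart_fit": 0.05, "chart_clarity": 0.05,
}

def weighted_composite(dimension_scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores (each on the 20-100 scale)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

example = {d: 80.0 for d in WEIGHTS}
print(round(weighted_composite(example), 1))  # 80.0
```

Weighting groundedness at 50% combined means a solution cannot compensate for hallucinated text or charts with strong scores on presentation dimensions, which matches the customer priorities the weights encode.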

The Sales Research Agent outperformed these solutions on each of the 8 quality dimensions. 

[Image: Evaluation scores for each of the eight dimensions: Microsoft Sales Research Bench dimension-specific scores]

The Road Ahead: Investing in Benchmarks

Upcoming plans for the Sales Research Bench include using the benchmark for continuous improvement of the Sales Research Agent, running comparisons against a wider range of competitive offerings, and publishing the full evaluation package including all 200 questions and the sample dataset in the coming months, so that others can run it themselves to verify the published results and benchmark the agents they use. Evaluation is not a one-time event. Scores can be tracked across releases, domains, and datasets, driving targeted quality improvements and ensuring the AI evolves with your business.

Sales Research Bench is just the beginning. Microsoft plans to develop eval frameworks and benchmarks for more business functions and agentic solutions—in customer service, finance, and beyond. The goal is to set a new standard for trust and transparency in enterprise AI.

Why This Matters for Sales Leaders

For business decision makers, the implications are profound:

  • Accelerated Decision-Making: AI-driven insights you can trust, delivered in real time, enable faster, more confident decisions.
  • Continuous Improvement: Thanks to evals, developers can quickly identify the areas of highest measurable impact and focus improvement efforts there.
  • Trust and Transparency: Rigorous evaluation means you can rely on the outputs, knowing they’ve been tested against the scenarios that matter most to your business.

The future of sales is agentic, data-driven, and relentlessly focused on quality. With Microsoft’s Sales Research Agent and the Sales Research Bench evaluation framework, sales leaders can move beyond hype and make decisions grounded in demonstration of quality. It’s not just about having the smartest AI—it’s about having a trustworthy partner for your business transformation.

 

Introducing Project “Sophia”, a new generation AI-first business application
http://approjects.co.za/?big=en-us/dynamics-365/blog/it-professional/2023/11/16/introducing-project-sophia/
Thu, 16 Nov 2023 16:00:00 +0000

Project Sophia is designed to help inform new innovations we can bring to our customers across our Microsoft applications portfolio.

The post Introducing Project “Sophia”, a new generation AI-first business application appeared first on Microsoft Dynamics 365 Blog.


We are committed to continuous innovation to reimagine business applications in this era of AI. Today we are excited to announce the preview of Project “Sophia”, which you can try at http://aka.ms/projectsophia. Project “Sophia” is an AI-powered business research canvas designed to help all business users solve complex, cross-domain business problems. It enables users to discover, visualize, and interact with data in new ways, to optimize business processes, and to answer strategic questions that drive better outcomes.

Research Journeys to explore cross-functional data and insights to find innovative solutions  

As an AI-powered business research canvas, Project “Sophia” lets you ask any business question across every business domain in your organization. You start by uploading data you want help exploring, or you can simply ask a question, and Project “Sophia” will start a research journey for you. With Project “Sophia”, you have access to your own ‘digital analyst’ as well as to rich domain expertise across departments. You can apply focus and intelligence to your core business processes, effortlessly researching where optimizations can be made in a matter of minutes or hours rather than days or months.

Magically generate rich user experiences and provide actionable recommendations

Project “Sophia” automatically generates what we call blueprints: information-rich building blocks designed to give structure to your AI-powered research and make it easier to navigate. Every blueprint contains a textual overview, visual representations of insights, and a range of suggested next actions. Using the AI cursor, an innovative, fully contextual chat experience, you can dive deeper into any area of the research journey, triggering a conversation with “Sophia”, which will assist with further explorations and suggestions. Blueprints, insights, next steps, and AI cursor interactions are all generated using the power of large language models.

Achieve specific outcomes for high value business tasks using Business Process Guides

Business Process Guides are experiences in which Project “Sophia” guides you through achieving a predefined outcome for a specific high-value business task. The first Business Process Guide supported in this preview release is Account Planning, or Sales Territory Planning.

When you upload data about accounts, sales reps, pipeline, and other account-relevant information, Project “Sophia” automatically detects relationships between these different data points—even if they exist across different files—and creates a comprehensive AI-generated account plan for you. This serves as a starting point, with suggestions for other data that can help you make the account plan even more comprehensive and actionable.

Try “Sophia” today

We will continue to innovate and add more capabilities to Project “Sophia” based on customer feedback during this preview phase. 

To learn more and get started with the preview, visit http://aka.ms/ProjectSophia. There, you can watch an overview session, see a demo of the AI in action and even try the preview for yourself.  We would love to hear from you and learn more about your thoughts. 
